Oct 08 06:34:15 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 08 06:34:15 crc restorecon[4739]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 08 06:34:15 crc restorecon[4739]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 08 06:34:15 crc restorecon[4739]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:15 crc 
restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:15 crc restorecon[4739]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc 
restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 08 
06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 08 06:34:15 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc 
restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 06:34:16 crc 
restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 
crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 
06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 06:34:16 crc 
restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc 
restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc 
restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 
crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc 
restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc 
restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc 
restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc 
restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc 
restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 06:34:16 crc restorecon[4739]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 08 06:34:16 crc restorecon[4739]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 08 06:34:17 crc kubenswrapper[4958]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 08 06:34:17 crc kubenswrapper[4958]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 08 06:34:17 crc kubenswrapper[4958]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 08 06:34:17 crc kubenswrapper[4958]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 08 06:34:17 crc kubenswrapper[4958]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 08 06:34:17 crc kubenswrapper[4958]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.288464 4958 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296680 4958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296714 4958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296725 4958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296735 4958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296745 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296755 4958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296764 4958 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296772 4958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296781 4958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 
06:34:17.296789 4958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296798 4958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296806 4958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296814 4958 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296823 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296830 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296842 4958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296852 4958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296861 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296870 4958 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296879 4958 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296898 4958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296907 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296916 4958 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296925 4958 feature_gate.go:330] unrecognized 
feature gate: AdminNetworkPolicy Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296934 4958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296972 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296984 4958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.296996 4958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297006 4958 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297016 4958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297025 4958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297033 4958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297041 4958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297069 4958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297077 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297085 4958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297093 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297101 4958 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImages Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297111 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297119 4958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297127 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297136 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297143 4958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297151 4958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297159 4958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297169 4958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297179 4958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297187 4958 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297195 4958 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297203 4958 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297210 4958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297219 4958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297226 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297235 4958 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297243 4958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297252 4958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297259 4958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297269 4958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297277 4958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297285 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297293 4958 
feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297301 4958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297309 4958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297317 4958 feature_gate.go:330] unrecognized feature gate: Example Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297329 4958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297340 4958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297348 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297357 4958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297365 4958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297372 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.297380 4958 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298411 4958 flags.go:64] FLAG: --address="0.0.0.0" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298435 4958 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298456 4958 flags.go:64] FLAG: --anonymous-auth="true" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298468 4958 flags.go:64] FLAG: --application-metrics-count-limit="100" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 
06:34:17.298480 4958 flags.go:64] FLAG: --authentication-token-webhook="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298489 4958 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298501 4958 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298512 4958 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298521 4958 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298531 4958 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298540 4958 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298551 4958 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298561 4958 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298570 4958 flags.go:64] FLAG: --cgroup-root="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298579 4958 flags.go:64] FLAG: --cgroups-per-qos="true" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298589 4958 flags.go:64] FLAG: --client-ca-file="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298598 4958 flags.go:64] FLAG: --cloud-config="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298607 4958 flags.go:64] FLAG: --cloud-provider="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298616 4958 flags.go:64] FLAG: --cluster-dns="[]" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298629 4958 flags.go:64] FLAG: --cluster-domain="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298638 4958 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" 
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298648 4958 flags.go:64] FLAG: --config-dir="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298657 4958 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298667 4958 flags.go:64] FLAG: --container-log-max-files="5" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298678 4958 flags.go:64] FLAG: --container-log-max-size="10Mi" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298688 4958 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298700 4958 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298710 4958 flags.go:64] FLAG: --containerd-namespace="k8s.io" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298719 4958 flags.go:64] FLAG: --contention-profiling="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298728 4958 flags.go:64] FLAG: --cpu-cfs-quota="true" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298738 4958 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298748 4958 flags.go:64] FLAG: --cpu-manager-policy="none" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298758 4958 flags.go:64] FLAG: --cpu-manager-policy-options="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298769 4958 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298778 4958 flags.go:64] FLAG: --enable-controller-attach-detach="true" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298786 4958 flags.go:64] FLAG: --enable-debugging-handlers="true" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298796 4958 flags.go:64] FLAG: --enable-load-reader="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 
06:34:17.298805 4958 flags.go:64] FLAG: --enable-server="true" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298814 4958 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298824 4958 flags.go:64] FLAG: --event-burst="100" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298833 4958 flags.go:64] FLAG: --event-qps="50" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298842 4958 flags.go:64] FLAG: --event-storage-age-limit="default=0" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298851 4958 flags.go:64] FLAG: --event-storage-event-limit="default=0" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298861 4958 flags.go:64] FLAG: --eviction-hard="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298871 4958 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298880 4958 flags.go:64] FLAG: --eviction-minimum-reclaim="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298889 4958 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298898 4958 flags.go:64] FLAG: --eviction-soft="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298908 4958 flags.go:64] FLAG: --eviction-soft-grace-period="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298917 4958 flags.go:64] FLAG: --exit-on-lock-contention="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298926 4958 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298934 4958 flags.go:64] FLAG: --experimental-mounter-path="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298969 4958 flags.go:64] FLAG: --fail-cgroupv1="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298979 4958 flags.go:64] FLAG: --fail-swap-on="true" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 
06:34:17.298989 4958 flags.go:64] FLAG: --feature-gates="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.298999 4958 flags.go:64] FLAG: --file-check-frequency="20s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299009 4958 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299019 4958 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299031 4958 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299042 4958 flags.go:64] FLAG: --healthz-port="10248" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299052 4958 flags.go:64] FLAG: --help="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299061 4958 flags.go:64] FLAG: --hostname-override="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299070 4958 flags.go:64] FLAG: --housekeeping-interval="10s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299079 4958 flags.go:64] FLAG: --http-check-frequency="20s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299088 4958 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299097 4958 flags.go:64] FLAG: --image-credential-provider-config="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299105 4958 flags.go:64] FLAG: --image-gc-high-threshold="85" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299114 4958 flags.go:64] FLAG: --image-gc-low-threshold="80" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299123 4958 flags.go:64] FLAG: --image-service-endpoint="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299133 4958 flags.go:64] FLAG: --kernel-memcg-notification="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299141 4958 flags.go:64] FLAG: --kube-api-burst="100" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299150 
4958 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299160 4958 flags.go:64] FLAG: --kube-api-qps="50" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299169 4958 flags.go:64] FLAG: --kube-reserved="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299179 4958 flags.go:64] FLAG: --kube-reserved-cgroup="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299188 4958 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299197 4958 flags.go:64] FLAG: --kubelet-cgroups="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299206 4958 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299215 4958 flags.go:64] FLAG: --lock-file="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299223 4958 flags.go:64] FLAG: --log-cadvisor-usage="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299233 4958 flags.go:64] FLAG: --log-flush-frequency="5s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299242 4958 flags.go:64] FLAG: --log-json-info-buffer-size="0" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299255 4958 flags.go:64] FLAG: --log-json-split-stream="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299263 4958 flags.go:64] FLAG: --log-text-info-buffer-size="0" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299272 4958 flags.go:64] FLAG: --log-text-split-stream="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299281 4958 flags.go:64] FLAG: --logging-format="text" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299290 4958 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299300 4958 flags.go:64] FLAG: --make-iptables-util-chains="true" Oct 08 06:34:17 crc 
kubenswrapper[4958]: I1008 06:34:17.299309 4958 flags.go:64] FLAG: --manifest-url="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299317 4958 flags.go:64] FLAG: --manifest-url-header="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299331 4958 flags.go:64] FLAG: --max-housekeeping-interval="15s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299340 4958 flags.go:64] FLAG: --max-open-files="1000000" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299351 4958 flags.go:64] FLAG: --max-pods="110" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299360 4958 flags.go:64] FLAG: --maximum-dead-containers="-1" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299369 4958 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299379 4958 flags.go:64] FLAG: --memory-manager-policy="None" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299387 4958 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299396 4958 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299405 4958 flags.go:64] FLAG: --node-ip="192.168.126.11" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299414 4958 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299433 4958 flags.go:64] FLAG: --node-status-max-images="50" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299442 4958 flags.go:64] FLAG: --node-status-update-frequency="10s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299452 4958 flags.go:64] FLAG: --oom-score-adj="-999" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299461 4958 flags.go:64] FLAG: --pod-cidr="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299470 4958 
flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299487 4958 flags.go:64] FLAG: --pod-manifest-path="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299495 4958 flags.go:64] FLAG: --pod-max-pids="-1" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299504 4958 flags.go:64] FLAG: --pods-per-core="0" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299513 4958 flags.go:64] FLAG: --port="10250" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299522 4958 flags.go:64] FLAG: --protect-kernel-defaults="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299531 4958 flags.go:64] FLAG: --provider-id="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299540 4958 flags.go:64] FLAG: --qos-reserved="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299549 4958 flags.go:64] FLAG: --read-only-port="10255" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299558 4958 flags.go:64] FLAG: --register-node="true" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299567 4958 flags.go:64] FLAG: --register-schedulable="true" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299576 4958 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299599 4958 flags.go:64] FLAG: --registry-burst="10" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299608 4958 flags.go:64] FLAG: --registry-qps="5" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299617 4958 flags.go:64] FLAG: --reserved-cpus="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299625 4958 flags.go:64] FLAG: --reserved-memory="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299636 4958 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Oct 08 06:34:17 crc 
kubenswrapper[4958]: I1008 06:34:17.299646 4958 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299655 4958 flags.go:64] FLAG: --rotate-certificates="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299665 4958 flags.go:64] FLAG: --rotate-server-certificates="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299674 4958 flags.go:64] FLAG: --runonce="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299683 4958 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299692 4958 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299701 4958 flags.go:64] FLAG: --seccomp-default="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299710 4958 flags.go:64] FLAG: --serialize-image-pulls="true" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299719 4958 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299729 4958 flags.go:64] FLAG: --storage-driver-db="cadvisor" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299740 4958 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299750 4958 flags.go:64] FLAG: --storage-driver-password="root" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299760 4958 flags.go:64] FLAG: --storage-driver-secure="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299769 4958 flags.go:64] FLAG: --storage-driver-table="stats" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299779 4958 flags.go:64] FLAG: --storage-driver-user="root" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299789 4958 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299798 4958 flags.go:64] FLAG: 
--sync-frequency="1m0s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299807 4958 flags.go:64] FLAG: --system-cgroups="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299816 4958 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299830 4958 flags.go:64] FLAG: --system-reserved-cgroup="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299839 4958 flags.go:64] FLAG: --tls-cert-file="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299848 4958 flags.go:64] FLAG: --tls-cipher-suites="[]" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299859 4958 flags.go:64] FLAG: --tls-min-version="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299867 4958 flags.go:64] FLAG: --tls-private-key-file="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299876 4958 flags.go:64] FLAG: --topology-manager-policy="none" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299885 4958 flags.go:64] FLAG: --topology-manager-policy-options="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299894 4958 flags.go:64] FLAG: --topology-manager-scope="container" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299903 4958 flags.go:64] FLAG: --v="2" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299913 4958 flags.go:64] FLAG: --version="false" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299925 4958 flags.go:64] FLAG: --vmodule="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299935 4958 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.299972 4958 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300202 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300214 4958 
feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300225 4958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300234 4958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300243 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300251 4958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300259 4958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300269 4958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300278 4958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300287 4958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300297 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300305 4958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300313 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300322 4958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300330 4958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300338 4958 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImagesAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300348 4958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300355 4958 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300363 4958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300371 4958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300382 4958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300391 4958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300401 4958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300412 4958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300420 4958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300429 4958 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300437 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300444 4958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300452 4958 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300460 4958 feature_gate.go:330] 
unrecognized feature gate: BuildCSIVolumes Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300467 4958 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300476 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300483 4958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300491 4958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300500 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300507 4958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300516 4958 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300523 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300531 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300539 4958 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300547 4958 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300558 4958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300569 4958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300578 4958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300587 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300596 4958 feature_gate.go:330] unrecognized feature gate: Example Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300606 4958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300615 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300623 4958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300631 4958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300639 4958 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300648 4958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300656 4958 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300666 4958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300676 4958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300684 4958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300692 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300700 4958 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300708 4958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300717 4958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300725 4958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300733 4958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300741 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300749 4958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300756 4958 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300765 4958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300773 4958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300782 4958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300790 
4958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300798 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.300806 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.300830 4958 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.312200 4958 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.312248 4958 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312337 4958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312351 4958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312357 4958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312363 4958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312369 4958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312375 4958 feature_gate.go:330] unrecognized feature gate: 
ClusterAPIInstallIBMCloud Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312379 4958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312383 4958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312387 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312391 4958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312395 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312399 4958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312406 4958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312413 4958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312421 4958 feature_gate.go:330] unrecognized feature gate: Example Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312426 4958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312430 4958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312434 4958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312438 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312442 4958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312446 4958 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312449 4958 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312453 4958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312457 4958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312461 4958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312464 4958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312468 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312472 4958 feature_gate.go:330] 
unrecognized feature gate: PlatformOperators Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312476 4958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312479 4958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312483 4958 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312487 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312490 4958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312496 4958 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312500 4958 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312504 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312508 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312512 4958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312517 4958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312522 4958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312527 4958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312532 4958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312537 4958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312542 4958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312548 4958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312551 4958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312555 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312559 4958 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312562 4958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312567 4958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312573 4958 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312576 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312580 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312584 4958 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312588 4958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312592 4958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312595 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312599 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312602 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312606 4958 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312612 4958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312617 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312621 4958 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312625 4958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312629 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312633 4958 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312637 4958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312641 4958 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312645 4958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312648 4958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312652 4958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.312660 4958 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312844 4958 
feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312853 4958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312858 4958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312862 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312866 4958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312870 4958 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312873 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312877 4958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312881 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312884 4958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312888 4958 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312891 4958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312895 4958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312899 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312904 4958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 08 
06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312910 4958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312915 4958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312918 4958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312923 4958 feature_gate.go:330] unrecognized feature gate: Example Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312927 4958 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312931 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312935 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312939 4958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312945 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312960 4958 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312964 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312968 4958 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312972 4958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312975 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall 
Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312980 4958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312985 4958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312989 4958 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312993 4958 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.312997 4958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313001 4958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313005 4958 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313009 4958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313013 4958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313018 4958 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313022 4958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313026 4958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313030 4958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313034 4958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313038 4958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313042 4958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313045 4958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313049 4958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313052 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313056 4958 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313060 4958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313065 4958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313069 4958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313073 4958 
feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313077 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313080 4958 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313084 4958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313088 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313091 4958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313098 4958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313103 4958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313108 4958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313112 4958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313115 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313119 4958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313123 4958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313127 4958 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313150 4958 feature_gate.go:330] unrecognized 
feature gate: ClusterMonitoringConfig Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313155 4958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313159 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313163 4958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.313167 4958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.313175 4958 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.313421 4958 server.go:940] "Client rotation is on, will bootstrap in background" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.317526 4958 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.317641 4958 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.319805 4958 server.go:997] "Starting client certificate rotation" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.319840 4958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.322189 4958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-04 00:14:50.211173493 +0000 UTC Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.322333 4958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 2105h40m32.888845936s for next certificate rotation Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.352917 4958 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.357201 4958 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.385928 4958 log.go:25] "Validated CRI v1 runtime API" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.426421 4958 log.go:25] "Validated CRI v1 image API" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.429042 4958 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.436308 4958 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-08-06-23-04-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.436352 4958 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 
blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.463292 4958 manager.go:217] Machine: {Timestamp:2025-10-08 06:34:17.459990029 +0000 UTC m=+0.589682700 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:0c77a4e0-051d-4f88-8d47-c213d2d11d87 BootID:a960fe83-15f0-406b-ba19-f1536e6d71a9 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5f:59:f4 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5f:59:f4 Speed:-1 
Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f6:66:85 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:40:1c:b6 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:53:4f:a0 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:47:a0:c1 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:ec:1a:c3 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:72:25:b2:26:86:b4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:7a:cd:5b:b2:da Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] 
SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.463679 4958 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.463844 4958 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.464314 4958 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.464597 4958 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.464649 4958 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.465000 4958 topology_manager.go:138] "Creating topology manager with none policy"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.465019 4958 container_manager_linux.go:303] "Creating device plugin manager"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.465689 4958 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.465743 4958 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.466592 4958 state_mem.go:36] "Initialized new in-memory state store"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.466736 4958 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.470565 4958 kubelet.go:418] "Attempting to sync node with API server"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.470605 4958 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.470630 4958 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.470652 4958 kubelet.go:324] "Adding apiserver pod source"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.470671 4958 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.475190 4958 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.476237 4958 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.479188 4958 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.479380 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Oct 08 06:34:17 crc kubenswrapper[4958]: E1008 06:34:17.479585 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError"
Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.479432 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Oct 08 06:34:17 crc kubenswrapper[4958]: E1008 06:34:17.479809 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.480794 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.480841 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.480857 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.480872 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.480895 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.480908 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.480921 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.480941 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.480988 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.481005 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.481025 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.481040 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.483193 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.483803 4958 server.go:1280] "Started kubelet"
Oct 08 06:34:17 crc systemd[1]: Started Kubernetes Kubelet.
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.489431 4958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.489422 4958 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.489250 4958 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.490224 4958 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.495183 4958 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.495246 4958 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.495641 4958 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 03:30:44.22918463 +0000 UTC
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.495740 4958 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 08 06:34:17 crc kubenswrapper[4958]: E1008 06:34:17.495776 4958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.495706 4958 volume_manager.go:287] "The desired_state_of_world populator starts"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.495850 4958 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.495753 4958 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1700h56m26.733436623s for next certificate rotation
Oct 08 06:34:17 crc kubenswrapper[4958]: E1008 06:34:17.495078 4958 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.115:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186c7081dea98391 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-08 06:34:17.483764625 +0000 UTC m=+0.613457256,LastTimestamp:2025-10-08 06:34:17.483764625 +0000 UTC m=+0.613457256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.496940 4958 server.go:460] "Adding debug handlers to kubelet server"
Oct 08 06:34:17 crc kubenswrapper[4958]: E1008 06:34:17.496914 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="200ms"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.499648 4958 factory.go:55] Registering systemd factory
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.499687 4958 factory.go:221] Registration of the systemd container factory successfully
Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.500089 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Oct 08 06:34:17 crc kubenswrapper[4958]: E1008 06:34:17.500203 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError"
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.500266 4958 factory.go:153] Registering CRI-O factory
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.500303 4958 factory.go:221] Registration of the crio container factory successfully
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.500431 4958 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.500502 4958 factory.go:103] Registering Raw factory
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.501039 4958 manager.go:1196] Started watching for new ooms in manager
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.502398 4958 manager.go:319] Starting recovery of all containers
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.507880 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.507982 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Oct 08 06:34:17 crc
kubenswrapper[4958]: I1008 06:34:17.507994 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508004 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508016 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508027 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508061 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508071 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508083 4958 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508094 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508103 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508112 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508121 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508131 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508144 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508153 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508161 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508173 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508183 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508242 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508254 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508263 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508276 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508287 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508296 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508308 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508395 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508410 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508421 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508434 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508444 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508453 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508464 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508477 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508488 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508499 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508510 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508520 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508532 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Oct 
08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508542 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508551 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508564 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508575 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508587 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508597 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508607 4958 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508616 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508626 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508637 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508670 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508680 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508691 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508705 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508715 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508725 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508737 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508749 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508758 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508770 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508778 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508788 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508797 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508805 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508816 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508827 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508839 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508849 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508858 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508867 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508876 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508886 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508896 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508904 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508913 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508923 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508930 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.508946 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509035 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509073 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509083 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509093 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509102 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509131 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509142 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509151 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509161 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509169 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509179 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Oct 08 
06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509207 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509219 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509230 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509240 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509250 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509279 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509289 4958 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509300 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509310 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509321 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509350 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509360 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509371 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509383 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509411 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509423 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509462 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509474 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509500 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509512 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509522 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509532 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509566 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509576 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509584 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509594 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509623 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509667 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509678 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509688 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509697 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509725 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509751 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509764 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509821 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509830 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509857 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509866 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509895 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509906 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509916 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.509983 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510013 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" 
seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510022 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510031 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510041 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510125 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510137 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510163 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510174 4958 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510202 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510212 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510221 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510248 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510260 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510292 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510302 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510312 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510346 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510375 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510385 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510394 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510403 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510411 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510421 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510465 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.510499 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.511208 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.511295 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.511333 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.512088 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.512129 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.512387 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.512497 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.512549 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.512573 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.512658 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.512684 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.512705 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.512729 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.513044 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.513436 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.513498 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.516789 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.516828 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.516852 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.516887 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.516908 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.516930 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.517010 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.517035 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.517066 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.517087 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.517107 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.517127 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.517148 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.517169 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.517188 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.517212 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.517235 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521118 4958 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521230 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521285 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521314 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521340 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521380 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521408 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521443 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521466 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521491 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521527 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521550 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521584 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521614 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521648 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521690 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521713 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521751 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521776 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521798 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521818 4958 reconstruct.go:97] "Volume reconstruction finished" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.521833 4958 reconciler.go:26] "Reconciler: start to sync state" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.540227 4958 manager.go:324] Recovery completed Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.558238 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.561512 4958 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.561583 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.561603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.563104 4958 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.563144 4958 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.563208 4958 state_mem.go:36] "Initialized new in-memory state store" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.572662 4958 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.575150 4958 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.575204 4958 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.575238 4958 kubelet.go:2335] "Starting kubelet main sync loop" Oct 08 06:34:17 crc kubenswrapper[4958]: E1008 06:34:17.575433 4958 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 08 06:34:17 crc kubenswrapper[4958]: W1008 06:34:17.577680 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Oct 08 06:34:17 crc kubenswrapper[4958]: E1008 06:34:17.577835 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.587059 4958 policy_none.go:49] "None policy: Start" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.591683 4958 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.591741 4958 state_mem.go:35] "Initializing new in-memory state store" Oct 08 06:34:17 crc kubenswrapper[4958]: E1008 06:34:17.596008 4958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.653999 4958 manager.go:334] "Starting Device Plugin manager" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.654062 4958 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.654078 4958 server.go:79] "Starting device plugin registration server" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.654703 4958 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.654731 4958 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.655598 4958 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.655789 4958 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.655812 4958 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 08 06:34:17 crc kubenswrapper[4958]: E1008 06:34:17.665890 4958 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.676390 4958 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.676533 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.678887 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.679017 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.679042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.679292 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.679464 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.679532 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.680618 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.680673 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.680693 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.680879 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.680928 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.681012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.681031 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.681070 4958 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.681141 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.682210 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.682236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.682246 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.682399 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.682416 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.682434 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.682456 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.682775 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.682898 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.683847 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.683896 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.683913 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.684729 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.684901 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.685005 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.685862 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.685919 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.685978 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.686539 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.686593 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.686615 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.686785 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.686836 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.686859 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.686906 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.687021 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.688339 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.688386 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.688404 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:17 crc kubenswrapper[4958]: E1008 06:34:17.699781 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="400ms" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.724018 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.724056 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.724085 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.724129 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.724154 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.724176 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.724199 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.724221 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.724242 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.724263 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.724291 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.724314 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.724334 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.724355 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.724375 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.755530 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.756709 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.756769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.756790 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.756827 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 06:34:17 crc kubenswrapper[4958]: E1008 06:34:17.757503 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.115:6443: connect: connection 
refused" node="crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825317 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825394 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825433 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825466 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825502 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825533 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825569 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825577 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825612 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825633 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825650 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825686 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825696 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825655 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825709 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825726 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 08 
06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825736 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825778 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825783 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825816 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825888 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.825937 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.826022 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.826076 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.826121 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.826171 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.826208 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.826230 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.826232 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.826277 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.958661 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.960150 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.960254 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.960325 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:17 crc kubenswrapper[4958]: I1008 06:34:17.960365 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 06:34:17 crc kubenswrapper[4958]: E1008 06:34:17.961133 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.115:6443: connect: connection 
refused" node="crc" Oct 08 06:34:18 crc kubenswrapper[4958]: I1008 06:34:18.006166 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 08 06:34:18 crc kubenswrapper[4958]: I1008 06:34:18.014417 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 08 06:34:18 crc kubenswrapper[4958]: I1008 06:34:18.020008 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:18 crc kubenswrapper[4958]: I1008 06:34:18.044776 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:18 crc kubenswrapper[4958]: I1008 06:34:18.054014 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 06:34:18 crc kubenswrapper[4958]: E1008 06:34:18.102010 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="800ms" Oct 08 06:34:18 crc kubenswrapper[4958]: W1008 06:34:18.303535 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a9a86b42fdf6d92518641abfee2127ae32d4828b816983b3582adb3b70e87fb0 WatchSource:0}: Error finding container a9a86b42fdf6d92518641abfee2127ae32d4828b816983b3582adb3b70e87fb0: Status 404 returned error can't find the container with id a9a86b42fdf6d92518641abfee2127ae32d4828b816983b3582adb3b70e87fb0 Oct 08 06:34:18 crc kubenswrapper[4958]: I1008 06:34:18.362001 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Oct 08 06:34:18 crc kubenswrapper[4958]: I1008 06:34:18.364468 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:18 crc kubenswrapper[4958]: I1008 06:34:18.364548 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:18 crc kubenswrapper[4958]: I1008 06:34:18.364571 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:18 crc kubenswrapper[4958]: I1008 06:34:18.364625 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 06:34:18 crc kubenswrapper[4958]: E1008 06:34:18.365234 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.115:6443: connect: connection refused" node="crc" Oct 08 06:34:18 crc kubenswrapper[4958]: I1008 06:34:18.491404 4958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Oct 08 06:34:18 crc kubenswrapper[4958]: W1008 06:34:18.535695 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Oct 08 06:34:18 crc kubenswrapper[4958]: E1008 06:34:18.535851 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Oct 08 06:34:18 crc 
kubenswrapper[4958]: I1008 06:34:18.580419 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a9a86b42fdf6d92518641abfee2127ae32d4828b816983b3582adb3b70e87fb0"} Oct 08 06:34:18 crc kubenswrapper[4958]: I1008 06:34:18.582414 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"22e086668d6a0794a88242c7e3081c40b6a2470e0a9e40b5fa8e2efabd7aad4c"} Oct 08 06:34:18 crc kubenswrapper[4958]: I1008 06:34:18.584141 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b96477f1a0e1ac8fc41d4f7d11d8d50ebcb5071e272325cef64c7e40c2f93ba5"} Oct 08 06:34:18 crc kubenswrapper[4958]: I1008 06:34:18.585738 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e1a1a7a9901f226181df681719584518dd3f1ba41d217e70a05454dbfb02cd58"} Oct 08 06:34:18 crc kubenswrapper[4958]: I1008 06:34:18.587921 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f4eb1d83c5f53e6c54d71822208e43d250546b0fe808b6f96aed5e09f6d607aa"} Oct 08 06:34:18 crc kubenswrapper[4958]: W1008 06:34:18.597570 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Oct 08 06:34:18 crc kubenswrapper[4958]: E1008 06:34:18.597665 4958 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Oct 08 06:34:18 crc kubenswrapper[4958]: W1008 06:34:18.614867 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Oct 08 06:34:18 crc kubenswrapper[4958]: E1008 06:34:18.614931 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Oct 08 06:34:18 crc kubenswrapper[4958]: E1008 06:34:18.902639 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="1.6s" Oct 08 06:34:19 crc kubenswrapper[4958]: W1008 06:34:19.074733 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Oct 08 06:34:19 crc kubenswrapper[4958]: E1008 06:34:19.075247 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.166206 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.168001 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.168056 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.168076 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.168109 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 06:34:19 crc kubenswrapper[4958]: E1008 06:34:19.168599 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.115:6443: connect: connection refused" node="crc" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.490679 4958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.594025 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862"} Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.594102 4958 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb"} Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.596811 4958 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c" exitCode=0 Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.596899 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c"} Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.597117 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.598879 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.599012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.599030 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.599519 4958 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3" exitCode=0 Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.599633 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3"} Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.599819 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.601472 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.601532 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.601556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.602605 4958 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="02b2c5ba960872b51e64be0592f3eda32f1badee098d560a44bd55b03160df2a" exitCode=0 Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.602724 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"02b2c5ba960872b51e64be0592f3eda32f1badee098d560a44bd55b03160df2a"} Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.602749 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.603911 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.604330 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.604380 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.604398 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.605238 4958 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756" exitCode=0 Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.605287 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756"} Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.605325 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.605541 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.605585 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.605602 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.608209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.608278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:19 crc kubenswrapper[4958]: I1008 06:34:19.608309 4958 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:20 crc kubenswrapper[4958]: E1008 06:34:20.011514 4958 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.115:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186c7081dea98391 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-08 06:34:17.483764625 +0000 UTC m=+0.613457256,LastTimestamp:2025-10-08 06:34:17.483764625 +0000 UTC m=+0.613457256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.490342 4958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Oct 08 06:34:20 crc kubenswrapper[4958]: E1008 06:34:20.503450 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="3.2s" Oct 08 06:34:20 crc kubenswrapper[4958]: W1008 06:34:20.507927 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Oct 08 06:34:20 crc kubenswrapper[4958]: E1008 06:34:20.508024 4958 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.612316 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de"} Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.612371 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728"} Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.612387 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f"} Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.612400 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f"} Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.614282 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4f9d29d167616433e4a0ae16e679fe5f93cbb3e05a7d49c9048b17d124133896"} Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.614406 4958 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.615402 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.615436 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.615447 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.618937 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a820b0e0e391451e24b2442c9a3402574d7c4f7b105eb7a993b4cf243cecb46f"} Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.619080 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d5f0eedcc5121ad6845bb3b2a014ded0bfb69bee54a478f92e12932261bd7db9"} Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.619112 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d2979041e4b3d16f257fced2d640df556490010fc445c5a9a3dc55f85140e857"} Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.619349 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.620695 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.620730 4958 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.620741 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.630988 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4"} Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.631044 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd"} Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.631154 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.632355 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.632406 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.632424 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.634804 4958 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab" exitCode=0 Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.634851 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab"} Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.635001 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.636272 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.636302 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.636314 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.647023 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.656522 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:20 crc kubenswrapper[4958]: W1008 06:34:20.753021 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Oct 08 06:34:20 crc kubenswrapper[4958]: E1008 06:34:20.753129 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Oct 08 06:34:20 crc 
kubenswrapper[4958]: I1008 06:34:20.769727 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.771134 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.771175 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.771188 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.771218 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 06:34:20 crc kubenswrapper[4958]: E1008 06:34:20.771785 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.115:6443: connect: connection refused" node="crc" Oct 08 06:34:20 crc kubenswrapper[4958]: I1008 06:34:20.881872 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.643024 4958 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326" exitCode=0 Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.643178 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326"} Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.643214 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:21 crc 
kubenswrapper[4958]: I1008 06:34:21.644472 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.644522 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.644539 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.649109 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9"} Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.649166 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.649269 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.649376 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.649376 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.650494 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.650542 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.650559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 
06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.651176 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.651206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.651235 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.651254 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.651266 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.651278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.651343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.651435 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:21 crc kubenswrapper[4958]: I1008 06:34:21.651471 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.159271 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.657622 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b"} Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.657699 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.657771 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.657703 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92"} Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.657863 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.657935 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142"} Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.658054 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.658322 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.659234 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.659280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.659298 4958 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.660177 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.660245 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.660272 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.660241 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.660334 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:22 crc kubenswrapper[4958]: I1008 06:34:22.660360 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.371078 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.667138 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7"} Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.667184 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4"} Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.667242 4958 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.667342 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.668656 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.668687 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.668698 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.669711 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.669741 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.669751 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.971892 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.973874 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.973940 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.973989 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:23 crc kubenswrapper[4958]: I1008 06:34:23.974030 
4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 06:34:24 crc kubenswrapper[4958]: I1008 06:34:24.669500 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:24 crc kubenswrapper[4958]: I1008 06:34:24.669514 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:24 crc kubenswrapper[4958]: I1008 06:34:24.671028 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:24 crc kubenswrapper[4958]: I1008 06:34:24.671089 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:24 crc kubenswrapper[4958]: I1008 06:34:24.671108 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:24 crc kubenswrapper[4958]: I1008 06:34:24.671609 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:24 crc kubenswrapper[4958]: I1008 06:34:24.671659 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:24 crc kubenswrapper[4958]: I1008 06:34:24.671677 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:26 crc kubenswrapper[4958]: I1008 06:34:26.185408 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:26 crc kubenswrapper[4958]: I1008 06:34:26.185555 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 06:34:26 crc kubenswrapper[4958]: I1008 06:34:26.185603 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:26 crc kubenswrapper[4958]: I1008 
06:34:26.187862 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:26 crc kubenswrapper[4958]: I1008 06:34:26.187933 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:26 crc kubenswrapper[4958]: I1008 06:34:26.187996 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:27 crc kubenswrapper[4958]: I1008 06:34:27.258668 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:27 crc kubenswrapper[4958]: I1008 06:34:27.258936 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:27 crc kubenswrapper[4958]: I1008 06:34:27.260489 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:27 crc kubenswrapper[4958]: I1008 06:34:27.260531 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:27 crc kubenswrapper[4958]: I1008 06:34:27.260543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:27 crc kubenswrapper[4958]: E1008 06:34:27.666092 4958 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 08 06:34:27 crc kubenswrapper[4958]: I1008 06:34:27.823258 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:27 crc kubenswrapper[4958]: I1008 06:34:27.823470 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:27 crc kubenswrapper[4958]: I1008 06:34:27.825074 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:27 crc kubenswrapper[4958]: I1008 06:34:27.825122 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:27 crc kubenswrapper[4958]: I1008 06:34:27.825140 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:28 crc kubenswrapper[4958]: I1008 06:34:28.081074 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Oct 08 06:34:28 crc kubenswrapper[4958]: I1008 06:34:28.081301 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:28 crc kubenswrapper[4958]: I1008 06:34:28.083096 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:28 crc kubenswrapper[4958]: I1008 06:34:28.083162 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:28 crc kubenswrapper[4958]: I1008 06:34:28.083181 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:29 crc kubenswrapper[4958]: I1008 06:34:29.185943 4958 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 08 06:34:29 crc kubenswrapper[4958]: I1008 06:34:29.186101 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" 
probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 08 06:34:31 crc kubenswrapper[4958]: W1008 06:34:31.268989 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 08 06:34:31 crc kubenswrapper[4958]: I1008 06:34:31.269091 4958 trace.go:236] Trace[116387521]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Oct-2025 06:34:21.267) (total time: 10001ms): Oct 08 06:34:31 crc kubenswrapper[4958]: Trace[116387521]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:34:31.268) Oct 08 06:34:31 crc kubenswrapper[4958]: Trace[116387521]: [10.001942614s] [10.001942614s] END Oct 08 06:34:31 crc kubenswrapper[4958]: E1008 06:34:31.269114 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 08 06:34:31 crc kubenswrapper[4958]: W1008 06:34:31.382376 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 08 06:34:31 crc kubenswrapper[4958]: I1008 06:34:31.382512 4958 trace.go:236] Trace[370313254]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Oct-2025 06:34:21.381) (total time: 10001ms): Oct 08 06:34:31 crc kubenswrapper[4958]: 
Trace[370313254]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:34:31.382) Oct 08 06:34:31 crc kubenswrapper[4958]: Trace[370313254]: [10.001229776s] [10.001229776s] END Oct 08 06:34:31 crc kubenswrapper[4958]: E1008 06:34:31.382541 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 08 06:34:31 crc kubenswrapper[4958]: I1008 06:34:31.492303 4958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Oct 08 06:34:31 crc kubenswrapper[4958]: I1008 06:34:31.511923 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Oct 08 06:34:31 crc kubenswrapper[4958]: I1008 06:34:31.512178 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:31 crc kubenswrapper[4958]: I1008 06:34:31.513437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:31 crc kubenswrapper[4958]: I1008 06:34:31.513502 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:31 crc kubenswrapper[4958]: I1008 06:34:31.513521 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:31 crc kubenswrapper[4958]: I1008 06:34:31.640210 4958 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver 
namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 08 06:34:31 crc kubenswrapper[4958]: I1008 06:34:31.640307 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 08 06:34:31 crc kubenswrapper[4958]: I1008 06:34:31.647699 4958 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 08 06:34:31 crc kubenswrapper[4958]: I1008 06:34:31.647791 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 08 06:34:32 crc kubenswrapper[4958]: I1008 06:34:32.171060 4958 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]log ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]etcd ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok 
Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/generic-apiserver-start-informers ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/priority-and-fairness-filter ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/start-apiextensions-informers ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/start-apiextensions-controllers ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/crd-informer-synced ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/start-system-namespaces-controller ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 08 06:34:32 crc kubenswrapper[4958]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Oct 08 06:34:32 crc kubenswrapper[4958]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 08 
06:34:32 crc kubenswrapper[4958]: [+]poststarthook/bootstrap-controller ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/start-kube-aggregator-informers ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/apiservice-registration-controller ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/apiservice-discovery-controller ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]autoregister-completion ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/apiservice-openapi-controller ok Oct 08 06:34:32 crc kubenswrapper[4958]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 08 06:34:32 crc kubenswrapper[4958]: livez check failed Oct 08 06:34:32 crc kubenswrapper[4958]: I1008 06:34:32.171163 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 06:34:35 crc kubenswrapper[4958]: I1008 06:34:35.772902 4958 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 08 06:34:36 crc kubenswrapper[4958]: E1008 06:34:36.627191 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Oct 08 06:34:36 crc kubenswrapper[4958]: I1008 
06:34:36.627644 4958 trace.go:236] Trace[1940540753]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Oct-2025 06:34:25.390) (total time: 11237ms): Oct 08 06:34:36 crc kubenswrapper[4958]: Trace[1940540753]: ---"Objects listed" error: 11237ms (06:34:36.627) Oct 08 06:34:36 crc kubenswrapper[4958]: Trace[1940540753]: [11.237271577s] [11.237271577s] END Oct 08 06:34:36 crc kubenswrapper[4958]: I1008 06:34:36.627689 4958 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 08 06:34:36 crc kubenswrapper[4958]: I1008 06:34:36.628516 4958 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Oct 08 06:34:36 crc kubenswrapper[4958]: I1008 06:34:36.632920 4958 trace.go:236] Trace[920281698]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Oct-2025 06:34:24.085) (total time: 12547ms): Oct 08 06:34:36 crc kubenswrapper[4958]: Trace[920281698]: ---"Objects listed" error: 12547ms (06:34:36.632) Oct 08 06:34:36 crc kubenswrapper[4958]: Trace[920281698]: [12.547377893s] [12.547377893s] END Oct 08 06:34:36 crc kubenswrapper[4958]: I1008 06:34:36.632989 4958 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 08 06:34:36 crc kubenswrapper[4958]: E1008 06:34:36.636899 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Oct 08 06:34:36 crc kubenswrapper[4958]: I1008 06:34:36.697236 4958 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Oct 08 06:34:36 crc kubenswrapper[4958]: I1008 06:34:36.697309 4958 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Oct 08 06:34:36 crc kubenswrapper[4958]: I1008 06:34:36.697250 4958 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59940->192.168.126.11:17697: read: connection reset by peer" start-of-body= Oct 08 06:34:36 crc kubenswrapper[4958]: I1008 06:34:36.697474 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59940->192.168.126.11:17697: read: connection reset by peer" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.167372 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.168274 4958 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.168348 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Oct 08 06:34:37 crc 
kubenswrapper[4958]: I1008 06:34:37.172054 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.263398 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.443794 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.483497 4958 apiserver.go:52] "Watching apiserver" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.487526 4958 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.488135 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-h5npb","openshift-kube-apiserver/kube-apiserver-crc","openshift-machine-config-operator/machine-config-daemon-qd84r","openshift-multus/multus-additional-cni-plugins-62gnw","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-multus/multus-hfzs9","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.488703 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.488755 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.488826 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.489004 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.489019 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.489018 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.489365 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.489464 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.489633 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.490507 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-h5npb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.490680 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.495287 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.498546 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.498560 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.498626 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.498644 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.498671 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.498551 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.498854 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.499028 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.499063 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.499293 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 
06:34:37.499301 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.499717 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.499804 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.499881 4958 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.499987 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.500324 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.501654 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.502139 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.504167 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.504255 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.504470 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.504494 
4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.504518 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.504694 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.504805 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.504839 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.520644 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.531647 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.532856 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" 
(UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.532893 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.532919 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.532810 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.533359 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.533451 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.533511 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.533877 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.533933 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.533976 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.533999 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534021 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534043 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534065 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534084 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534100 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534117 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534136 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534162 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 08 06:34:37 
crc kubenswrapper[4958]: I1008 06:34:37.534182 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534205 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534224 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534244 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534263 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.536954 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.536984 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537008 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537025 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537046 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537070 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 08 06:34:37 crc 
kubenswrapper[4958]: I1008 06:34:37.537089 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537107 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537126 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537142 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537159 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537178 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") 
pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537200 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537220 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537238 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537255 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537274 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537301 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537336 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537366 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537388 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537410 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537430 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537449 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537467 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537483 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537501 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537520 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537537 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537555 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537576 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537592 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537609 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537625 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") 
" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537702 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537723 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537739 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537757 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537778 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537797 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537814 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537863 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537894 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537913 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.537930 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534904 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534330 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534523 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534711 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.534785 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.535017 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.535329 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.535478 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.538763 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.535881 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.538768 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.536351 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.536417 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.536513 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.536654 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.536675 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.538862 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.536889 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.538497 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.538565 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.536083 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.539115 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.539274 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.539470 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.539480 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.539532 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.539525 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.539773 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.539816 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540024 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540069 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540109 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540140 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540170 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540991 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.541054 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.541578 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.541631 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.541694 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.541803 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.542029 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.542103 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.542150 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.542197 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.542239 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.542276 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.542345 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540139 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540143 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540404 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540432 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540450 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540559 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540699 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540709 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540800 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540815 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.540847 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.541087 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.541102 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.541223 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.541428 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.541465 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.541812 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.542145 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.542205 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.542606 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.542621 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.542704 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.542989 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.543023 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.543135 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.543649 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.541646 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.542329 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.544379 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.544612 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.544803 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.544892 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545040 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545161 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545262 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545315 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545322 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545387 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.541724 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545646 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545374 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545686 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545744 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545772 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545795 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545816 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545835 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545853 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545877 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545898 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545924 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545963 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.545985 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546003 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546023 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546043 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546060 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546080 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546097 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546120 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546142 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546160 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546165 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546178 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546256 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546293 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546321 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546573 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546603 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546627 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546654 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546679 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546754 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546784 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546809 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546837 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546863 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546865 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546888 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546917 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546942 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.546987 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547031 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547055 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547086 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547113 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547141 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547165 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547191 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547215 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547240 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547265 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547289 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547312 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547335 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547361 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547388 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547445 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547472 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547498 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " 
Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547526 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547554 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547583 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547609 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547635 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547662 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547688 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547717 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547746 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547770 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547265 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547354 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547358 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.547499 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.548147 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:34:38.048122262 +0000 UTC m=+21.177814873 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558097 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558143 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558175 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558211 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558231 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558251 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558305 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558354 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558403 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558472 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558480 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558508 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558551 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558655 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558708 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558713 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558739 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558767 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558792 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558817 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558841 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558863 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558885 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558912 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.558918 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.559019 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.559049 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.559075 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.559098 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.559107 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.559178 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.549492 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.549708 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.549829 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.559260 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.552127 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.552256 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.553881 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.554211 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.554189 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.554630 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.554652 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.554836 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.554906 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.559480 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.555060 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.555177 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.555657 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.555907 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.555980 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.556163 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.556198 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.556252 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.557328 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.557414 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.557431 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.557498 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.557804 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.550149 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.559274 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.559302 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.559120 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560004 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560049 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560091 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560125 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560153 4958 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560183 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560215 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560241 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560269 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560296 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560325 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560350 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560376 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560408 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560482 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-cnibin\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560497 4958 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560527 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560578 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560667 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560673 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560763 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560885 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/88f9eac3-839a-4b10-9668-a63915d5fe90-os-release\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.560933 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.561015 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-run-multus-certs\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.561068 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/88f9eac3-839a-4b10-9668-a63915d5fe90-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.561098 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtddh\" (UniqueName: \"kubernetes.io/projected/88f9eac3-839a-4b10-9668-a63915d5fe90-kube-api-access-wtddh\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.561122 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.561143 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-os-release\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.561167 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.562307 4958 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.562983 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.563164 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.563196 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.563207 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.563142 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.563526 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.563724 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.563736 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.563711 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.564190 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.564354 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.564409 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.564599 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.564660 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c9e6284b-565d-4277-9ebf-62d3623b249b-mcd-auth-proxy-config\") pod \"machine-config-daemon-qd84r\" (UID: \"c9e6284b-565d-4277-9ebf-62d3623b249b\") " pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.564702 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-var-lib-cni-multus\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.564801 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.564852 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3-hosts-file\") pod \"node-resolver-h5npb\" (UID: \"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\") " pod="openshift-dns/node-resolver-h5npb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.564889 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c9e6284b-565d-4277-9ebf-62d3623b249b-proxy-tls\") pod \"machine-config-daemon-qd84r\" (UID: \"c9e6284b-565d-4277-9ebf-62d3623b249b\") " pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.564928 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96q8j\" (UniqueName: \"kubernetes.io/projected/c9e6284b-565d-4277-9ebf-62d3623b249b-kube-api-access-96q8j\") pod \"machine-config-daemon-qd84r\" (UID: \"c9e6284b-565d-4277-9ebf-62d3623b249b\") " pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.564989 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-multus-cni-dir\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.565033 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.565072 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-run-netns\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.565110 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.565144 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88f9eac3-839a-4b10-9668-a63915d5fe90-system-cni-dir\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.565177 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnlnp\" (UniqueName: \"kubernetes.io/projected/ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3-kube-api-access-cnlnp\") pod \"node-resolver-h5npb\" (UID: \"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\") " pod="openshift-dns/node-resolver-h5npb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.565323 4958 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.565865 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.566640 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.548901 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.567111 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.567167 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.567688 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.567738 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.567833 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.567871 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.568135 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.568191 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.567966 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.568354 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.568678 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.568709 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.568929 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.567216 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569104 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569136 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569231 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-system-cni-dir\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569310 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0718b244-4835-4551-9013-6b3741845bb4-cni-binary-copy\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569335 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-etc-kubernetes\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569354 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/88f9eac3-839a-4b10-9668-a63915d5fe90-tuning-conf-dir\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569379 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-var-lib-kubelet\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " 
pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569412 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569437 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-hostroot\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569462 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/88f9eac3-839a-4b10-9668-a63915d5fe90-cnibin\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569489 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569437 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: 
"6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569591 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-run-k8s-cni-cncf-io\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569616 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-multus-conf-dir\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569641 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqskl\" (UniqueName: \"kubernetes.io/projected/0718b244-4835-4551-9013-6b3741845bb4-kube-api-access-fqskl\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569658 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569735 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569902 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.570129 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.570276 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.569436 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.570281 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.570059 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.570618 4958 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.570645 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). 
InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.570678 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.570685 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.570742 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-multus-socket-dir-parent\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.570770 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.570919 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:38.070840109 +0000 UTC m=+21.200532720 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.570980 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.571341 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.571693 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.571738 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.572119 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.572182 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-var-lib-cni-bin\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.572254 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.572463 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.572613 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.572714 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.572772 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.572878 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/c9e6284b-565d-4277-9ebf-62d3623b249b-rootfs\") pod \"machine-config-daemon-qd84r\" (UID: \"c9e6284b-565d-4277-9ebf-62d3623b249b\") " pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.573284 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.573379 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:38.073352975 +0000 UTC m=+21.203045586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.573410 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0718b244-4835-4551-9013-6b3741845bb4-multus-daemon-config\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.573463 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.573482 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/88f9eac3-839a-4b10-9668-a63915d5fe90-cni-binary-copy\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.573831 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.574200 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.574692 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.575998 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577182 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577218 4958 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577236 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577254 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577270 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577284 4958 reconciler_common.go:293] "Volume 
detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577298 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577311 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577327 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577340 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577353 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577367 4958 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577382 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577395 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577407 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577419 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577432 4958 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577444 4958 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577457 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577472 4958 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" 
DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577487 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577503 4958 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577518 4958 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577533 4958 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577548 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577565 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577580 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc 
kubenswrapper[4958]: I1008 06:34:37.577595 4958 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577610 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577625 4958 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577640 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577655 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577668 4958 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577682 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577698 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577712 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577727 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577741 4958 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577757 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577771 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.579043 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.578329 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd
5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.579470 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.577787 4958 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580640 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580656 4958 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580672 4958 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580685 4958 reconciler_common.go:293] "Volume detached for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580697 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580709 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580725 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580746 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580765 4958 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580780 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580791 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on 
node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580804 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580826 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580838 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580849 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580861 4958 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580874 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580888 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" 
DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580900 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580913 4958 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580925 4958 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580937 4958 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580970 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580982 4958 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.580993 4958 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.581007 4958 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.581018 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.581032 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.581043 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.581057 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.581074 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.581088 4958 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.581104 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.581119 4958 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.581135 4958 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582141 4958 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582166 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582225 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582251 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582269 4958 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 
06:34:37.582288 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582305 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582321 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582341 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582359 4958 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582375 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582393 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582409 4958 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582427 4958 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582444 4958 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582459 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582476 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582518 4958 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582533 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582547 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath 
\"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582562 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582577 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582592 4958 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582716 4958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582735 4958 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582751 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582766 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582783 4958 reconciler_common.go:293] "Volume 
detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582799 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582814 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582829 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582860 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582877 4958 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582893 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582908 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582923 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582939 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.582988 4958 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583006 4958 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583029 4958 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583047 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583063 4958 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583082 4958 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583099 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583118 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583135 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583152 4958 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583389 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583453 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583470 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583482 4958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583505 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583521 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583534 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583546 4958 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" 
DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583560 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583573 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583585 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583596 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583610 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583622 4958 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583635 4958 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583645 4958 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583657 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583669 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583683 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583695 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583707 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583720 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583731 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583742 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583754 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583766 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.583892 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.585318 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.585350 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.585364 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.585406 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.585430 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.585447 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.585440 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:38.085416491 +0000 UTC m=+21.215109102 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.586485 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.586710 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:38.086662494 +0000 UTC m=+21.216355085 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.587416 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.587729 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.588145 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.588349 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.588539 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.589605 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.590001 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.591419 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.592026 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.592312 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.592783 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.593833 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.594791 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.595029 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.595384 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.595556 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.595982 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.595987 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.596147 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.596636 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.597002 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.598071 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.600551 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.602083 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.603378 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.604678 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.608078 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.609281 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.618213 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.622963 4958 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.627628 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.628663 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.629528 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.632023 4958 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.636904 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.637080 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.637203 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.637335 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.639076 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.640421 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.641019 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.642288 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.642815 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.643407 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.644552 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.645143 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.646154 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.646544 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.647232 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.647959 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.654570 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.655084 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.656291 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.656467 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.656931 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.658483 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.658916 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.664582 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.672056 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.672569 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.673143 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" 
path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.674017 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.674484 4958 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.674586 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.675288 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.677083 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.677567 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.677968 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.679512 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" 
path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.680867 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.681428 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.683340 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684030 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684269 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c9e6284b-565d-4277-9ebf-62d3623b249b-mcd-auth-proxy-config\") pod \"machine-config-daemon-qd84r\" (UID: \"c9e6284b-565d-4277-9ebf-62d3623b249b\") " pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684309 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-var-lib-cni-multus\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684338 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3-hosts-file\") pod \"node-resolver-h5npb\" (UID: \"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\") " pod="openshift-dns/node-resolver-h5npb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684360 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c9e6284b-565d-4277-9ebf-62d3623b249b-proxy-tls\") pod \"machine-config-daemon-qd84r\" (UID: \"c9e6284b-565d-4277-9ebf-62d3623b249b\") " pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684376 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96q8j\" (UniqueName: \"kubernetes.io/projected/c9e6284b-565d-4277-9ebf-62d3623b249b-kube-api-access-96q8j\") pod \"machine-config-daemon-qd84r\" (UID: \"c9e6284b-565d-4277-9ebf-62d3623b249b\") " pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684391 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-multus-cni-dir\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684415 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-run-netns\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684439 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/88f9eac3-839a-4b10-9668-a63915d5fe90-system-cni-dir\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684457 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-system-cni-dir\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684473 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0718b244-4835-4551-9013-6b3741845bb4-cni-binary-copy\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684489 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-etc-kubernetes\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684508 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/88f9eac3-839a-4b10-9668-a63915d5fe90-tuning-conf-dir\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684524 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnlnp\" (UniqueName: 
\"kubernetes.io/projected/ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3-kube-api-access-cnlnp\") pod \"node-resolver-h5npb\" (UID: \"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\") " pod="openshift-dns/node-resolver-h5npb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684543 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-var-lib-kubelet\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684566 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-hostroot\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684584 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/88f9eac3-839a-4b10-9668-a63915d5fe90-cnibin\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.684888 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-run-netns\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.685578 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.685823 
4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c9e6284b-565d-4277-9ebf-62d3623b249b-mcd-auth-proxy-config\") pod \"machine-config-daemon-qd84r\" (UID: \"c9e6284b-565d-4277-9ebf-62d3623b249b\") " pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.685935 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-var-lib-cni-multus\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.686171 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-etc-kubernetes\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.686248 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.686263 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88f9eac3-839a-4b10-9668-a63915d5fe90-system-cni-dir\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.686317 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-system-cni-dir\") pod 
\"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.686524 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3-hosts-file\") pod \"node-resolver-h5npb\" (UID: \"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\") " pod="openshift-dns/node-resolver-h5npb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.687291 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0718b244-4835-4551-9013-6b3741845bb4-cni-binary-copy\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.687379 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/88f9eac3-839a-4b10-9668-a63915d5fe90-tuning-conf-dir\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.687430 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-hostroot\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.687451 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.687457 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-var-lib-kubelet\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.687675 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-multus-cni-dir\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.687674 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/88f9eac3-839a-4b10-9668-a63915d5fe90-cnibin\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.687840 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-run-k8s-cni-cncf-io\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.687874 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-multus-conf-dir\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.687892 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqskl\" (UniqueName: \"kubernetes.io/projected/0718b244-4835-4551-9013-6b3741845bb4-kube-api-access-fqskl\") pod \"multus-hfzs9\" (UID: 
\"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.687933 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-multus-socket-dir-parent\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.687967 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-var-lib-cni-bin\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688008 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c9e6284b-565d-4277-9ebf-62d3623b249b-rootfs\") pod \"machine-config-daemon-qd84r\" (UID: \"c9e6284b-565d-4277-9ebf-62d3623b249b\") " pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688026 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0718b244-4835-4551-9013-6b3741845bb4-multus-daemon-config\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688042 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/88f9eac3-839a-4b10-9668-a63915d5fe90-cni-binary-copy\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " 
pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688063 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-cnibin\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688088 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688108 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/88f9eac3-839a-4b10-9668-a63915d5fe90-os-release\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688136 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688170 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-run-multus-certs\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688197 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" 
(UniqueName: \"kubernetes.io/configmap/88f9eac3-839a-4b10-9668-a63915d5fe90-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688221 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtddh\" (UniqueName: \"kubernetes.io/projected/88f9eac3-839a-4b10-9668-a63915d5fe90-kube-api-access-wtddh\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688256 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-os-release\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688291 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688368 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688381 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc 
kubenswrapper[4958]: I1008 06:34:37.688395 4958 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688407 4958 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688417 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688428 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688427 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-run-k8s-cni-cncf-io\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688441 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688454 4958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node 
\"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688466 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688466 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-multus-conf-dir\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688479 4958 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688506 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688532 4958 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688543 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-run-multus-certs\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 
06:34:37.688545 4958 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688573 4958 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688584 4958 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688589 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-multus-socket-dir-parent\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688926 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-os-release\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689200 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/88f9eac3-839a-4b10-9668-a63915d5fe90-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689251 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-cnibin\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689279 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689351 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/88f9eac3-839a-4b10-9668-a63915d5fe90-os-release\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.688594 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689379 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689390 4958 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689405 4958 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689416 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689425 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689434 4958 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689443 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689452 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689461 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689471 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node 
\"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689480 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689488 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689497 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689507 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689517 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689533 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689545 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: 
I1008 06:34:37.689554 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689564 4958 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689576 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689586 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689595 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689606 4958 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689615 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689625 4958 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689636 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689645 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689655 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689664 4958 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689673 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.689686 4958 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.698501 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/c9e6284b-565d-4277-9ebf-62d3623b249b-proxy-tls\") pod \"machine-config-daemon-qd84r\" (UID: \"c9e6284b-565d-4277-9ebf-62d3623b249b\") " pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.698678 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0718b244-4835-4551-9013-6b3741845bb4-host-var-lib-cni-bin\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.698825 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c9e6284b-565d-4277-9ebf-62d3623b249b-rootfs\") pod \"machine-config-daemon-qd84r\" (UID: \"c9e6284b-565d-4277-9ebf-62d3623b249b\") " pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.699372 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0718b244-4835-4551-9013-6b3741845bb4-multus-daemon-config\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.701521 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.702143 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.702661 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.703379 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/88f9eac3-839a-4b10-9668-a63915d5fe90-cni-binary-copy\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.703711 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.704226 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.704300 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.704694 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.706255 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.706475 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.706615 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96q8j\" (UniqueName: \"kubernetes.io/projected/c9e6284b-565d-4277-9ebf-62d3623b249b-kube-api-access-96q8j\") pod \"machine-config-daemon-qd84r\" (UID: \"c9e6284b-565d-4277-9ebf-62d3623b249b\") " pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.707031 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.707600 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.708072 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" 
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.709106 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqskl\" (UniqueName: \"kubernetes.io/projected/0718b244-4835-4551-9013-6b3741845bb4-kube-api-access-fqskl\") pod \"multus-hfzs9\" (UID: \"0718b244-4835-4551-9013-6b3741845bb4\") " pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.711466 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-89qtf"] Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.712398 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.712918 4958 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9" exitCode=255 Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.713069 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9"} Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.718980 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.719238 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.719260 4958 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:37 crc 
kubenswrapper[4958]: I1008 06:34:37.719391 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnlnp\" (UniqueName: \"kubernetes.io/projected/ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3-kube-api-access-cnlnp\") pod \"node-resolver-h5npb\" (UID: \"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\") " pod="openshift-dns/node-resolver-h5npb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.719415 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.719507 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.719550 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.719747 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.719987 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.720830 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.725442 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtddh\" (UniqueName: \"kubernetes.io/projected/88f9eac3-839a-4b10-9668-a63915d5fe90-kube-api-access-wtddh\") pod \"multus-additional-cni-plugins-62gnw\" (UID: \"88f9eac3-839a-4b10-9668-a63915d5fe90\") " pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: E1008 06:34:37.725593 4958 kubelet.go:1929] "Failed 
creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.725700 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.725854 4958 scope.go:117] "RemoveContainer" containerID="6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.749585 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.760382 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.772469 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.781313 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.791682 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.806760 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.809716 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.815701 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.820583 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.824125 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.832752 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: W1008 06:34:37.835560 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-4002a63fa0ee6cf25a5e20a60038f6d8f5a487df7c079f525fb52fafe0ced742 WatchSource:0}: Error finding container 4002a63fa0ee6cf25a5e20a60038f6d8f5a487df7c079f525fb52fafe0ced742: Status 404 returned error can't find the container with id 4002a63fa0ee6cf25a5e20a60038f6d8f5a487df7c079f525fb52fafe0ced742 Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.839024 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-h5npb" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.845819 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.848918 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.855239 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.882634 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.891245 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29trw\" (UniqueName: \"kubernetes.io/projected/272f74a5-c381-4909-b8a9-da60cbd17ddf-kube-api-access-29trw\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.891718 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-run-ovn-kubernetes\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.891766 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.891832 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovnkube-config\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.891852 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovn-node-metrics-cert\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.891877 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-systemd-units\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.891894 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovnkube-script-lib\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.891924 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-slash\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.891958 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-env-overrides\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.891992 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-openvswitch\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.892027 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-cni-netd\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.892054 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-etc-openvswitch\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.892081 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-cni-bin\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.892111 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-log-socket\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.892250 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-run-netns\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.892308 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-kubelet\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.892350 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-systemd\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.892407 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-ovn\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.892975 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-var-lib-openvswitch\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.893049 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-node-log\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: W1008 06:34:37.896214 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9e6284b_565d_4277_9ebf_62d3623b249b.slice/crio-e838ec982ea212c1c83e5f013262456c4e6a00ba1a06f600a0e0cc1d400a048c WatchSource:0}: Error finding container e838ec982ea212c1c83e5f013262456c4e6a00ba1a06f600a0e0cc1d400a048c: Status 404 returned error can't find the container with id e838ec982ea212c1c83e5f013262456c4e6a00ba1a06f600a0e0cc1d400a048c Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.899118 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.917457 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hfzs9" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.924527 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-62gnw" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.926544 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.938921 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.953742 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.963619 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.978842 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994124 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-run-ovn-kubernetes\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994178 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994201 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovnkube-config\") pod 
\"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994219 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovn-node-metrics-cert\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994238 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-systemd-units\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994254 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovnkube-script-lib\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994269 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-env-overrides\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994278 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-run-ovn-kubernetes\") pod \"ovnkube-node-89qtf\" (UID: 
\"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994338 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-slash\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994290 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-slash\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994380 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994421 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-openvswitch\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994493 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-cni-netd\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994518 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-etc-openvswitch\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994539 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-cni-bin\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994579 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-log-socket\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994603 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-run-netns\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994642 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-kubelet\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: 
I1008 06:34:37.994660 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-systemd\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994679 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-ovn\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.994750 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-var-lib-openvswitch\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.995003 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-node-log\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.995078 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29trw\" (UniqueName: \"kubernetes.io/projected/272f74a5-c381-4909-b8a9-da60cbd17ddf-kube-api-access-29trw\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.995526 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-openvswitch\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.995592 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-cni-netd\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.995639 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-etc-openvswitch\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.995665 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-cni-bin\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.995694 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-log-socket\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.995736 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-ovn\") pod 
\"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.995744 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-systemd\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.995765 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-var-lib-openvswitch\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.995786 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-run-netns\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.995808 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-node-log\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.995811 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-kubelet\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 
06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.995841 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-systemd-units\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.995984 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovnkube-config\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.996394 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-env-overrides\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.996415 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovnkube-script-lib\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:37 crc kubenswrapper[4958]: I1008 06:34:37.999674 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.002770 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovn-node-metrics-cert\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.009638 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.012901 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29trw\" (UniqueName: \"kubernetes.io/projected/272f74a5-c381-4909-b8a9-da60cbd17ddf-kube-api-access-29trw\") pod \"ovnkube-node-89qtf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:38 
crc kubenswrapper[4958]: I1008 06:34:38.019280 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.043005 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.045820 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.076089 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.096145 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:34:38 crc 
kubenswrapper[4958]: I1008 06:34:38.096331 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.096363 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.096391 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.096423 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:38 crc kubenswrapper[4958]: E1008 06:34:38.096557 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:34:38 crc 
kubenswrapper[4958]: E1008 06:34:38.096624 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:39.096606167 +0000 UTC m=+22.226298768 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:34:38 crc kubenswrapper[4958]: E1008 06:34:38.096687 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:34:39.096681369 +0000 UTC m=+22.226373970 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:34:38 crc kubenswrapper[4958]: E1008 06:34:38.096762 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 06:34:38 crc kubenswrapper[4958]: E1008 06:34:38.096781 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:34:38 crc kubenswrapper[4958]: E1008 06:34:38.096795 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:38 crc kubenswrapper[4958]: E1008 06:34:38.096822 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:39.096816142 +0000 UTC m=+22.226508743 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:38 crc kubenswrapper[4958]: E1008 06:34:38.096872 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 06:34:38 crc kubenswrapper[4958]: E1008 06:34:38.096883 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:34:38 crc kubenswrapper[4958]: E1008 06:34:38.096890 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:38 crc kubenswrapper[4958]: E1008 06:34:38.096910 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:39.096904415 +0000 UTC m=+22.226597006 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:38 crc kubenswrapper[4958]: E1008 06:34:38.097038 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:34:38 crc kubenswrapper[4958]: E1008 06:34:38.097181 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:39.097136791 +0000 UTC m=+22.226829392 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.100783 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.116487 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:38 crc kubenswrapper[4958]: W1008 06:34:38.134686 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod272f74a5_c381_4909_b8a9_da60cbd17ddf.slice/crio-4f0203b1da4fcd68233969c68fd9f1006afce0f8b9153a0df18ef6dc6956d7b0 WatchSource:0}: Error finding container 4f0203b1da4fcd68233969c68fd9f1006afce0f8b9153a0df18ef6dc6956d7b0: Status 404 returned error can't find the container with id 4f0203b1da4fcd68233969c68fd9f1006afce0f8b9153a0df18ef6dc6956d7b0 Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.140191 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.155845 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.176162 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.717629 4958 generic.go:334] "Generic (PLEG): container finished" podID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerID="9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3" exitCode=0 Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.717733 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerDied","Data":"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.718164 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerStarted","Data":"4f0203b1da4fcd68233969c68fd9f1006afce0f8b9153a0df18ef6dc6956d7b0"} 
Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.723172 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-h5npb" event={"ID":"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3","Type":"ContainerStarted","Data":"5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.723234 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-h5npb" event={"ID":"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3","Type":"ContainerStarted","Data":"d89b28fa8ef4e4d052bd17d75d7df222a3c52b3d521b03e08bb4341e2c78f559"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.725134 4958 generic.go:334] "Generic (PLEG): container finished" podID="88f9eac3-839a-4b10-9668-a63915d5fe90" containerID="86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5" exitCode=0 Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.725228 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" event={"ID":"88f9eac3-839a-4b10-9668-a63915d5fe90","Type":"ContainerDied","Data":"86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.725301 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" event={"ID":"88f9eac3-839a-4b10-9668-a63915d5fe90","Type":"ContainerStarted","Data":"9ecfaaae243d27873932cf928841d676b49d0fb155f0b88f2e30c0f38a20f03e"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.728357 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hfzs9" event={"ID":"0718b244-4835-4551-9013-6b3741845bb4","Type":"ContainerStarted","Data":"de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.728825 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hfzs9" 
event={"ID":"0718b244-4835-4551-9013-6b3741845bb4","Type":"ContainerStarted","Data":"2687ed85765014b97a2444b5ca92682a1f32ef3276fe1e9c54df440df9dfaa20"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.731143 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.731180 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.731195 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"e838ec982ea212c1c83e5f013262456c4e6a00ba1a06f600a0e0cc1d400a048c"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.732487 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d4e082f2ff414603776660758ba2f36666e42720b299981e8d49f962ed1457f9"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.734919 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.737064 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.737425 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.738663 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.738715 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"672a6a9e288c6f63e6527b2b8ee4ebc715333bc6829c9cd3d23e048ac0814cfe"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.739344 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.742364 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.742421 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.742438 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4002a63fa0ee6cf25a5e20a60038f6d8f5a487df7c079f525fb52fafe0ced742"} Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.757297 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.776272 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.792694 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.808992 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.823062 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.841521 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.859058 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.871776 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.885151 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.896506 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.908510 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.928427 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.942110 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.956746 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.972590 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:38 crc kubenswrapper[4958]: I1008 06:34:38.985490 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:38Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.007519 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.024863 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.039886 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.052731 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.072239 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.086372 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.105771 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.116499 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.116602 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.116628 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.116650 
4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.116673 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.116772 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.116822 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:41.116808021 +0000 UTC m=+24.246500622 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.117174 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:34:41.11716384 +0000 UTC m=+24.246856451 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.117247 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.117263 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.117282 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.117307 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:41.117300664 +0000 UTC m=+24.246993265 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.117337 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.117356 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:41.117351435 +0000 UTC m=+24.247044026 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.117415 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.117427 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.117435 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.117455 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:41.117449537 +0000 UTC m=+24.247142128 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.121902 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded 
serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.134056 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.575682 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.575748 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.576450 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.576183 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.577143 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:34:39 crc kubenswrapper[4958]: E1008 06:34:39.577186 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.581202 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lvkj9"] Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.581538 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lvkj9" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.586110 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.586396 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.586585 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.586701 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.605819 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.620467 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.634813 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.648576 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.662074 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.676028 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.698200 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.721075 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.721536 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9twqg\" (UniqueName: \"kubernetes.io/projected/db7be8dd-af57-4e4c-bd7a-333a42b796bf-kube-api-access-9twqg\") pod \"node-ca-lvkj9\" (UID: \"db7be8dd-af57-4e4c-bd7a-333a42b796bf\") " pod="openshift-image-registry/node-ca-lvkj9" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.721595 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/db7be8dd-af57-4e4c-bd7a-333a42b796bf-serviceca\") pod \"node-ca-lvkj9\" (UID: \"db7be8dd-af57-4e4c-bd7a-333a42b796bf\") " pod="openshift-image-registry/node-ca-lvkj9" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.721623 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db7be8dd-af57-4e4c-bd7a-333a42b796bf-host\") pod \"node-ca-lvkj9\" (UID: \"db7be8dd-af57-4e4c-bd7a-333a42b796bf\") " pod="openshift-image-registry/node-ca-lvkj9" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.744674 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded 
serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.748282 4958 generic.go:334] "Generic (PLEG): container finished" podID="88f9eac3-839a-4b10-9668-a63915d5fe90" containerID="2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462" exitCode=0 Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.748353 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" event={"ID":"88f9eac3-839a-4b10-9668-a63915d5fe90","Type":"ContainerDied","Data":"2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462"} Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.752825 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerStarted","Data":"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a"} Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.752852 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerStarted","Data":"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947"} Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.752862 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerStarted","Data":"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3"} Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.752872 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerStarted","Data":"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007"} Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.784272 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.804551 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.822963 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9twqg\" (UniqueName: \"kubernetes.io/projected/db7be8dd-af57-4e4c-bd7a-333a42b796bf-kube-api-access-9twqg\") pod \"node-ca-lvkj9\" (UID: \"db7be8dd-af57-4e4c-bd7a-333a42b796bf\") " pod="openshift-image-registry/node-ca-lvkj9" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.823010 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/db7be8dd-af57-4e4c-bd7a-333a42b796bf-serviceca\") pod \"node-ca-lvkj9\" (UID: \"db7be8dd-af57-4e4c-bd7a-333a42b796bf\") " pod="openshift-image-registry/node-ca-lvkj9" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.823032 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db7be8dd-af57-4e4c-bd7a-333a42b796bf-host\") pod \"node-ca-lvkj9\" (UID: \"db7be8dd-af57-4e4c-bd7a-333a42b796bf\") " pod="openshift-image-registry/node-ca-lvkj9" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.823098 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db7be8dd-af57-4e4c-bd7a-333a42b796bf-host\") pod \"node-ca-lvkj9\" (UID: \"db7be8dd-af57-4e4c-bd7a-333a42b796bf\") " pod="openshift-image-registry/node-ca-lvkj9" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.823994 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/db7be8dd-af57-4e4c-bd7a-333a42b796bf-serviceca\") pod \"node-ca-lvkj9\" (UID: \"db7be8dd-af57-4e4c-bd7a-333a42b796bf\") " pod="openshift-image-registry/node-ca-lvkj9" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.843230 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: 
I1008 06:34:39.854655 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9twqg\" (UniqueName: \"kubernetes.io/projected/db7be8dd-af57-4e4c-bd7a-333a42b796bf-kube-api-access-9twqg\") pod \"node-ca-lvkj9\" (UID: \"db7be8dd-af57-4e4c-bd7a-333a42b796bf\") " pod="openshift-image-registry/node-ca-lvkj9" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.870462 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76
822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.893294 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.920728 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.933938 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.953530 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.971082 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:39 crc kubenswrapper[4958]: I1008 06:34:39.985142 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.001763 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:39Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.013109 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.026820 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.057361 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.093766 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.110674 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lvkj9" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.133627 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: W1008 06:34:40.158055 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb7be8dd_af57_4e4c_bd7a_333a42b796bf.slice/crio-5c297b35ce645d2972bde30e87f8d4d7031ff00e88f729be4b0845524b374e31 WatchSource:0}: Error finding container 
5c297b35ce645d2972bde30e87f8d4d7031ff00e88f729be4b0845524b374e31: Status 404 returned error can't find the container with id 5c297b35ce645d2972bde30e87f8d4d7031ff00e88f729be4b0845524b374e31 Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.177369 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.222264 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.257102 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.759180 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lvkj9" event={"ID":"db7be8dd-af57-4e4c-bd7a-333a42b796bf","Type":"ContainerStarted","Data":"173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db"} Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.760450 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lvkj9" event={"ID":"db7be8dd-af57-4e4c-bd7a-333a42b796bf","Type":"ContainerStarted","Data":"5c297b35ce645d2972bde30e87f8d4d7031ff00e88f729be4b0845524b374e31"} Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.763460 4958 generic.go:334] "Generic (PLEG): container finished" podID="88f9eac3-839a-4b10-9668-a63915d5fe90" containerID="f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af" exitCode=0 Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.763610 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" 
event={"ID":"88f9eac3-839a-4b10-9668-a63915d5fe90","Type":"ContainerDied","Data":"f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af"} Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.768737 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914"} Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.779918 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerStarted","Data":"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64"} Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.780282 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerStarted","Data":"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714"} Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.781169 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"
/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.815361 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.837016 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.861361 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.877827 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.902378 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.921525 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.938676 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.953521 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.965133 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.980893 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:40 crc kubenswrapper[4958]: I1008 06:34:40.994240 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:40Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.007985 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.019024 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.033863 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.052233 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.071716 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.084340 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.097429 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.112358 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.128311 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.139686 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.141031 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.141113 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.141157 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.141183 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.141212 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.141256 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:34:45.141222687 +0000 UTC m=+28.270915308 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.141339 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.141360 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.141375 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.141376 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.141403 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.141339 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.141462 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.141478 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.141434 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:45.141412862 +0000 UTC m=+28.271105473 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.141541 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:45.141527935 +0000 UTC m=+28.271220546 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.141558 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:45.141549076 +0000 UTC m=+28.271241687 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.141572 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:45.141565236 +0000 UTC m=+28.271257847 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.177567 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded 
serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.214067 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.260117 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.295546 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.335395 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.379825 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.546376 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.563140 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.563566 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff386
22943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10
-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.565769 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.575458 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.575475 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.575558 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.575462 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.575632 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:34:41 crc kubenswrapper[4958]: E1008 06:34:41.575759 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.582812 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34
:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.598750 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.615193 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.627074 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.647509 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.680135 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.718919 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.755719 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.788282 4958 generic.go:334] "Generic (PLEG): container finished" podID="88f9eac3-839a-4b10-9668-a63915d5fe90" 
containerID="daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec" exitCode=0 Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.788613 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" event={"ID":"88f9eac3-839a-4b10-9668-a63915d5fe90","Type":"ContainerDied","Data":"daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec"} Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.801524 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-m
ultus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.855970 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.879248 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.919050 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.959249 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:41 crc kubenswrapper[4958]: I1008 06:34:41.996328 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:41Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.036924 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.076552 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.116347 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.153416 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.199474 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.238489 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.282878 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.321223 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.362606 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.411354 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.436116 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.489196 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.516290 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded 
serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.556224 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.798425 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerStarted","Data":"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba"} Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.803068 4958 generic.go:334] "Generic (PLEG): container finished" podID="88f9eac3-839a-4b10-9668-a63915d5fe90" 
containerID="e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292" exitCode=0 Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.803118 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" event={"ID":"88f9eac3-839a-4b10-9668-a63915d5fe90","Type":"ContainerDied","Data":"e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292"} Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.824212 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.847094 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.868179 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.892391 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.927045 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{
\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.949645 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.967629 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:42 crc kubenswrapper[4958]: I1008 06:34:42.985923 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:42Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.015518 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.029790 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.037926 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.041052 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.041116 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 
06:34:43.041136 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.041290 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.048133 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.052420 4958 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.052915 4958 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.054592 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.054669 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.054690 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.054718 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.054738 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:43Z","lastTransitionTime":"2025-10-08T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:43 crc kubenswrapper[4958]: E1008 06:34:43.072745 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.077830 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.077879 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.077891 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.077915 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.077930 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:43Z","lastTransitionTime":"2025-10-08T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.080703 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: E1008 06:34:43.098590 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.104306 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.104343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.104354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.104373 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.104386 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:43Z","lastTransitionTime":"2025-10-08T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.119481 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: E1008 06:34:43.123813 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.128875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.128909 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.128923 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.128967 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.128985 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:43Z","lastTransitionTime":"2025-10-08T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:43 crc kubenswrapper[4958]: E1008 06:34:43.146669 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.161014 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.161113 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.161137 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.161166 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.161188 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:43Z","lastTransitionTime":"2025-10-08T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.164668 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: E1008 06:34:43.179073 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: E1008 06:34:43.179324 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.181640 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.181692 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.181710 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.181738 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.181757 4958 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:43Z","lastTransitionTime":"2025-10-08T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.199272 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.285319 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.285375 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.285393 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.285421 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.285437 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:43Z","lastTransitionTime":"2025-10-08T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.389121 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.389184 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.389206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.389233 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.389251 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:43Z","lastTransitionTime":"2025-10-08T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.492508 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.492580 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.492599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.492628 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.492690 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:43Z","lastTransitionTime":"2025-10-08T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.576095 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.576167 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.576121 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:43 crc kubenswrapper[4958]: E1008 06:34:43.576285 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:34:43 crc kubenswrapper[4958]: E1008 06:34:43.576434 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:34:43 crc kubenswrapper[4958]: E1008 06:34:43.576612 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.595526 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.595572 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.595591 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.595617 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.595634 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:43Z","lastTransitionTime":"2025-10-08T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.699085 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.699145 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.699162 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.699186 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.699204 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:43Z","lastTransitionTime":"2025-10-08T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.803110 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.803162 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.803175 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.803195 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.803208 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:43Z","lastTransitionTime":"2025-10-08T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.821817 4958 generic.go:334] "Generic (PLEG): container finished" podID="88f9eac3-839a-4b10-9668-a63915d5fe90" containerID="d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2" exitCode=0 Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.821917 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" event={"ID":"88f9eac3-839a-4b10-9668-a63915d5fe90","Type":"ContainerDied","Data":"d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2"} Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.842479 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.863715 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.881563 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.895613 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.906273 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.906317 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.906330 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.906348 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.906362 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:43Z","lastTransitionTime":"2025-10-08T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.911486 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.931783 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.948435 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.968672 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:43 crc kubenswrapper[4958]: I1008 06:34:43.993598 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:43Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.011349 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.011387 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.011397 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.011411 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.011421 4958 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:44Z","lastTransitionTime":"2025-10-08T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.025695 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:44Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.042249 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:44Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.080829 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:44Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.104218 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded 
serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:44Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.126162 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.126220 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.126238 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.126264 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.126282 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:44Z","lastTransitionTime":"2025-10-08T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.136252 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:44Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.172168 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:44Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.229070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:44 crc 
kubenswrapper[4958]: I1008 06:34:44.229106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.229114 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.229128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.229140 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:44Z","lastTransitionTime":"2025-10-08T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.332140 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.332679 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.332873 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.333118 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.333267 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:44Z","lastTransitionTime":"2025-10-08T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.435712 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.435744 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.435752 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.435766 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.435775 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:44Z","lastTransitionTime":"2025-10-08T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.539047 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.539109 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.539131 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.539161 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.539179 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:44Z","lastTransitionTime":"2025-10-08T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.643479 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.643549 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.643572 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.643599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.643620 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:44Z","lastTransitionTime":"2025-10-08T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.747064 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.747143 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.747169 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.747195 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.747212 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:44Z","lastTransitionTime":"2025-10-08T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.833827 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerStarted","Data":"c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7"} Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.834312 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.836666 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.844991 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" event={"ID":"88f9eac3-839a-4b10-9668-a63915d5fe90","Type":"ContainerStarted","Data":"1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77"} Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.853065 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.853128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.853145 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.853176 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.853196 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:44Z","lastTransitionTime":"2025-10-08T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.860943 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:44Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.887460 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:44Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.908648 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:44Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.911735 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.913609 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.929142 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e31
3bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:44Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.949055 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:44Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.956861 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.956905 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.956923 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:44 crc 
kubenswrapper[4958]: I1008 06:34:44.956974 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.956993 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:44Z","lastTransitionTime":"2025-10-08T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.967672 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:44Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:44 crc kubenswrapper[4958]: I1008 06:34:44.990224 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:44Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.012007 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.038542 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.061746 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.061797 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.061814 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.061842 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.061860 4958 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:45Z","lastTransitionTime":"2025-10-08T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.073855 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-
dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a
450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6
7314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.098767 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.120866 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.142507 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.165428 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:45 crc 
kubenswrapper[4958]: I1008 06:34:45.165484 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.165501 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.165528 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.165547 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:45Z","lastTransitionTime":"2025-10-08T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.178831 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d97
4aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.186702 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:34:53.186674204 +0000 UTC m=+36.316366845 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.186541 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.186894 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.187130 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.187230 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:53.187208438 +0000 UTC m=+36.316901079 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.187814 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.187904 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.188085 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.188117 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.188140 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.188162 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.188199 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:53.188182643 +0000 UTC m=+36.317875274 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.188245 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:53.188221534 +0000 UTC m=+36.317914175 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.188281 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.188457 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.188511 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.188536 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.188646 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-10-08 06:34:53.188613385 +0000 UTC m=+36.318306026 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.197031 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.217853 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.244241 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.267407 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.269094 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.269179 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.269223 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.269252 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.269270 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:45Z","lastTransitionTime":"2025-10-08T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.285262 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.305405 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.327412 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.350342 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.372697 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.372771 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.372795 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.372820 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.372841 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:45Z","lastTransitionTime":"2025-10-08T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.377755 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.401976 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.421103 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.458348 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.476020 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.476115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.476134 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.476164 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.476205 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:45Z","lastTransitionTime":"2025-10-08T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.482479 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded 
serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.503866 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.526467 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.553980 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.576245 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.576303 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.576388 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.576538 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.577282 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:34:45 crc kubenswrapper[4958]: E1008 06:34:45.577487 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.579239 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.579334 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.579359 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.579392 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.579425 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:45Z","lastTransitionTime":"2025-10-08T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.683687 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.683752 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.683772 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.683821 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.683842 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:45Z","lastTransitionTime":"2025-10-08T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.787067 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.787124 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.787134 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.787154 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.787167 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:45Z","lastTransitionTime":"2025-10-08T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.847936 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.890458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.890519 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.890536 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.890560 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.890579 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:45Z","lastTransitionTime":"2025-10-08T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.994358 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.994414 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.994434 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.994460 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:45 crc kubenswrapper[4958]: I1008 06:34:45.994478 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:45Z","lastTransitionTime":"2025-10-08T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.097839 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.097887 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.097907 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.097931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.097975 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:46Z","lastTransitionTime":"2025-10-08T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.200646 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.200693 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.200714 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.200742 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.200760 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:46Z","lastTransitionTime":"2025-10-08T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.303776 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.303840 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.303857 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.303883 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.303901 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:46Z","lastTransitionTime":"2025-10-08T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.407082 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.407130 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.407149 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.407171 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.407188 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:46Z","lastTransitionTime":"2025-10-08T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.509733 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.509777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.509793 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.509817 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.509834 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:46Z","lastTransitionTime":"2025-10-08T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.613238 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.613301 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.613323 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.613350 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.613373 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:46Z","lastTransitionTime":"2025-10-08T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.715792 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.715838 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.715856 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.715878 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.715896 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:46Z","lastTransitionTime":"2025-10-08T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.817668 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.817710 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.817719 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.817735 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.817744 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:46Z","lastTransitionTime":"2025-10-08T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.849723 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.919768 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.919810 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.919819 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.919833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:46 crc kubenswrapper[4958]: I1008 06:34:46.919845 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:46Z","lastTransitionTime":"2025-10-08T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.022025 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.022095 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.022151 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.022178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.022196 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:47Z","lastTransitionTime":"2025-10-08T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.124840 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.124905 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.124927 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.124977 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.124995 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:47Z","lastTransitionTime":"2025-10-08T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.227745 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.227786 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.227797 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.227812 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.227823 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:47Z","lastTransitionTime":"2025-10-08T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.330712 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.330760 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.330768 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.330783 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.330794 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:47Z","lastTransitionTime":"2025-10-08T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.433536 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.433610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.433631 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.433656 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.433673 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:47Z","lastTransitionTime":"2025-10-08T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.536493 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.536555 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.536574 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.536603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.536625 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:47Z","lastTransitionTime":"2025-10-08T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.576278 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.576363 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.576291 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:47 crc kubenswrapper[4958]: E1008 06:34:47.576464 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:34:47 crc kubenswrapper[4958]: E1008 06:34:47.576588 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:34:47 crc kubenswrapper[4958]: E1008 06:34:47.576734 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.602314 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.624506 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.642428 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.642494 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.642518 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.642547 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.642570 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:47Z","lastTransitionTime":"2025-10-08T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.643267 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.661663 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.679374 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.708542 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.721111 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.744666 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.744714 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.744728 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.744748 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 
06:34:47.744764 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:47Z","lastTransitionTime":"2025-10-08T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.755205 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",
\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri
-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b
90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.769233 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.788037 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.802845 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.821124 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.841327 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.851254 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.851304 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.851315 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.851329 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.851340 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:47Z","lastTransitionTime":"2025-10-08T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.852989 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/0.log" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.854874 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.858207 4958 generic.go:334] "Generic (PLEG): container finished" podID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerID="c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7" exitCode=1 Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.858265 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerDied","Data":"c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7"} Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.859328 4958 scope.go:117] "RemoveContainer" containerID="c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.864326 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.882546 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.900766 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.917378 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.931674 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.947400 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.953652 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.953706 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.953723 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.953746 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.953765 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:47Z","lastTransitionTime":"2025-10-08T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.965893 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:47 crc kubenswrapper[4958]: I1008 06:34:47.988859 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:47Z\\\",\\\"message\\\":\\\"essqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405153 6265 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405499 6265 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405608 6265 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405701 6265 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405850 6265 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406045 6265 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406272 6265 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406636 6265 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.407123 6265 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.003071 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:48Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.029184 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06
:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:48Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.043468 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:48Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.057309 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:48Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.063642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.063704 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.063723 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.063748 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.063765 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:48Z","lastTransitionTime":"2025-10-08T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.070753 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:48Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.083750 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:48Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.102109 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:48Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.112625 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:48Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.166560 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.166603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.166616 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.166634 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.166645 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:48Z","lastTransitionTime":"2025-10-08T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.269930 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.270060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.270090 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.270127 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.270157 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:48Z","lastTransitionTime":"2025-10-08T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.374098 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.374167 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.374186 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.374214 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.374233 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:48Z","lastTransitionTime":"2025-10-08T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.477794 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.477829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.477841 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.477860 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.477873 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:48Z","lastTransitionTime":"2025-10-08T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.580817 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.580882 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.580899 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.580927 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.580980 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:48Z","lastTransitionTime":"2025-10-08T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.684062 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.684126 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.684146 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.684170 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.684190 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:48Z","lastTransitionTime":"2025-10-08T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.787092 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.787139 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.787149 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.787167 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.787183 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:48Z","lastTransitionTime":"2025-10-08T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.863576 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/0.log" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.893852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.893901 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.893913 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.893930 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.893965 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:48Z","lastTransitionTime":"2025-10-08T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.894417 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerStarted","Data":"98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043"} Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.894621 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.916261 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:48Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.930028 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:48Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.948864 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:48Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.958395 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:48Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.972674 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:48Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.988050 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:48Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.996878 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.996963 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.996978 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.997007 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:48 crc kubenswrapper[4958]: I1008 06:34:48.997025 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:48Z","lastTransitionTime":"2025-10-08T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.003606 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:49Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.017139 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:49Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.045598 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:49Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.065384 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:49Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.087916 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:49Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.103398 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.103490 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.103519 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.103556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.103592 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:49Z","lastTransitionTime":"2025-10-08T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.116298 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded 
serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:49Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.132098 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:49Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.146883 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:49Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.175351 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:47Z\\\",\\\"message\\\":\\\"essqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405153 6265 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405499 6265 reflector.go:311] Stopping reflector 
*v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405608 6265 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405701 6265 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405850 6265 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406045 6265 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406272 6265 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406636 6265 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.407123 6265 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:49Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.206908 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.207000 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.207020 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.207045 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.207063 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:49Z","lastTransitionTime":"2025-10-08T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.310278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.310394 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.310412 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.310435 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.310453 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:49Z","lastTransitionTime":"2025-10-08T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.413036 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.413249 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.413350 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.413413 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.413476 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:49Z","lastTransitionTime":"2025-10-08T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.516678 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.516826 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.516847 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.517373 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.517994 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:49Z","lastTransitionTime":"2025-10-08T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.576198 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.576316 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:49 crc kubenswrapper[4958]: E1008 06:34:49.576360 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:34:49 crc kubenswrapper[4958]: E1008 06:34:49.576511 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.576761 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:49 crc kubenswrapper[4958]: E1008 06:34:49.576942 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.621294 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.621340 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.621357 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.621380 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.621397 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:49Z","lastTransitionTime":"2025-10-08T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.724381 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.724657 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.724871 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.725084 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.725231 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:49Z","lastTransitionTime":"2025-10-08T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.828611 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.828883 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.829054 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.829469 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.829574 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:49Z","lastTransitionTime":"2025-10-08T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.902200 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/1.log" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.903351 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/0.log" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.907545 4958 generic.go:334] "Generic (PLEG): container finished" podID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerID="98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043" exitCode=1 Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.907603 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerDied","Data":"98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043"} Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.907681 4958 scope.go:117] "RemoveContainer" containerID="c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.908926 4958 scope.go:117] "RemoveContainer" containerID="98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043" Oct 08 06:34:49 crc kubenswrapper[4958]: E1008 06:34:49.909289 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.931583 4958 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:49Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.932975 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.933019 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.933037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.933061 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.933077 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:49Z","lastTransitionTime":"2025-10-08T06:34:49Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.950260 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:49Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:49 crc kubenswrapper[4958]: I1008 06:34:49.984509 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-0
8T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:49Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.039005 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.039067 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.039080 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.039105 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.039117 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:50Z","lastTransitionTime":"2025-10-08T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.044161 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.054724 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.070989 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.086212 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.101576 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.118805 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.142311 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.142380 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.142399 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.142424 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.142442 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:50Z","lastTransitionTime":"2025-10-08T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.145486 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.168939 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded 
serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.187815 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.205612 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.236008 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:47Z\\\",\\\"message\\\":\\\"essqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405153 6265 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405499 6265 reflector.go:311] Stopping reflector 
*v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405608 6265 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405701 6265 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405850 6265 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406045 6265 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406272 6265 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406636 6265 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.407123 6265 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:49Z\\\",\\\"message\\\":\\\"ler 3\\\\nI1008 06:34:49.034471 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 06:34:49.033007 6381 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:49.034485 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 06:34:49.034452 6381 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 06:34:49.033051 6381 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:49.034604 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 06:34:49.034663 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 06:34:49.034672 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 06:34:49.034482 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 06:34:49.034708 6381 factory.go:656] Stopping watch factory\\\\nI1008 06:34:49.034732 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 06:34:49.034747 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 06:34:49.034771 6381 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 06:34:49.034815 6381 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"na
me\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.246103 4958 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.246162 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.246180 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.246203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.246219 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:50Z","lastTransitionTime":"2025-10-08T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.251610 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.349936 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.350017 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.350039 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.350060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.350077 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:50Z","lastTransitionTime":"2025-10-08T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.452726 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.452784 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.452804 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.452828 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.452846 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:50Z","lastTransitionTime":"2025-10-08T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.555766 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.555835 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.555859 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.555887 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.555908 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:50Z","lastTransitionTime":"2025-10-08T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.658862 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.658935 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.658987 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.659021 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.659049 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:50Z","lastTransitionTime":"2025-10-08T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.761209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.761266 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.761283 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.761306 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.761325 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:50Z","lastTransitionTime":"2025-10-08T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.789786 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr"] Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.790643 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.794065 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.794557 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.808718 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66
438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.829277 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.850062 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.852632 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f82c7807-9153-4a51-a7c5-b83991a177e7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xrpdr\" (UID: \"f82c7807-9153-4a51-a7c5-b83991a177e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.852711 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f82c7807-9153-4a51-a7c5-b83991a177e7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xrpdr\" (UID: \"f82c7807-9153-4a51-a7c5-b83991a177e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.852752 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgbvw\" (UniqueName: \"kubernetes.io/projected/f82c7807-9153-4a51-a7c5-b83991a177e7-kube-api-access-fgbvw\") pod \"ovnkube-control-plane-749d76644c-xrpdr\" (UID: \"f82c7807-9153-4a51-a7c5-b83991a177e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.852849 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f82c7807-9153-4a51-a7c5-b83991a177e7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xrpdr\" (UID: \"f82c7807-9153-4a51-a7c5-b83991a177e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.864717 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.864766 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.864784 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.864808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.864829 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:50Z","lastTransitionTime":"2025-10-08T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.868544 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.888892 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.914639 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/1.log" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.918283 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.937713 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.954437 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f82c7807-9153-4a51-a7c5-b83991a177e7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xrpdr\" (UID: \"f82c7807-9153-4a51-a7c5-b83991a177e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.954533 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f82c7807-9153-4a51-a7c5-b83991a177e7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xrpdr\" (UID: \"f82c7807-9153-4a51-a7c5-b83991a177e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.954584 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f82c7807-9153-4a51-a7c5-b83991a177e7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xrpdr\" (UID: \"f82c7807-9153-4a51-a7c5-b83991a177e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.954615 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgbvw\" (UniqueName: \"kubernetes.io/projected/f82c7807-9153-4a51-a7c5-b83991a177e7-kube-api-access-fgbvw\") pod \"ovnkube-control-plane-749d76644c-xrpdr\" (UID: \"f82c7807-9153-4a51-a7c5-b83991a177e7\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.955718 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f82c7807-9153-4a51-a7c5-b83991a177e7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xrpdr\" (UID: \"f82c7807-9153-4a51-a7c5-b83991a177e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.955769 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f82c7807-9153-4a51-a7c5-b83991a177e7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xrpdr\" (UID: \"f82c7807-9153-4a51-a7c5-b83991a177e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.960275 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.968912 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.968968 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.968981 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 
06:34:50.968999 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.969011 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:50Z","lastTransitionTime":"2025-10-08T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.971389 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f82c7807-9153-4a51-a7c5-b83991a177e7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xrpdr\" (UID: \"f82c7807-9153-4a51-a7c5-b83991a177e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.979917 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgbvw\" (UniqueName: \"kubernetes.io/projected/f82c7807-9153-4a51-a7c5-b83991a177e7-kube-api-access-fgbvw\") pod \"ovnkube-control-plane-749d76644c-xrpdr\" (UID: \"f82c7807-9153-4a51-a7c5-b83991a177e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.981396 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:50 crc kubenswrapper[4958]: I1008 06:34:50.998036 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:50Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.015454 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.038684 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:47Z\\\",\\\"message\\\":\\\"essqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405153 6265 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405499 6265 reflector.go:311] Stopping reflector 
*v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405608 6265 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405701 6265 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405850 6265 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406045 6265 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406272 6265 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406636 6265 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.407123 6265 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:49Z\\\",\\\"message\\\":\\\"ler 3\\\\nI1008 06:34:49.034471 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 06:34:49.033007 6381 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:49.034485 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 06:34:49.034452 6381 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 06:34:49.033051 6381 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:49.034604 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 06:34:49.034663 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 06:34:49.034672 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 06:34:49.034482 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 06:34:49.034708 6381 factory.go:656] Stopping watch factory\\\\nI1008 06:34:49.034732 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 06:34:49.034747 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 06:34:49.034771 6381 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 06:34:49.034815 6381 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"na
me\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.056087 4958 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.074416 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.074528 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.074553 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.074585 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.074614 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:51Z","lastTransitionTime":"2025-10-08T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.089280 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.110671 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded 
serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.111197 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.133769 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.177363 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.177407 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.177423 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:51 crc 
kubenswrapper[4958]: I1008 06:34:51.177448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.177465 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:51Z","lastTransitionTime":"2025-10-08T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.279999 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.280044 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.280061 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.280083 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.280099 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:51Z","lastTransitionTime":"2025-10-08T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.383006 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.383057 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.383071 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.383091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.383105 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:51Z","lastTransitionTime":"2025-10-08T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.485731 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.485777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.485788 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.485808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.485822 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:51Z","lastTransitionTime":"2025-10-08T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.542635 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xbfbp"] Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.543378 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:34:51 crc kubenswrapper[4958]: E1008 06:34:51.543525 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.571242 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.572475 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.577303 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:51 crc kubenswrapper[4958]: E1008 06:34:51.577473 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.578021 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:51 crc kubenswrapper[4958]: E1008 06:34:51.578122 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.578352 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:51 crc kubenswrapper[4958]: E1008 06:34:51.578463 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.588909 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.589007 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.589027 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.589055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.589076 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:51Z","lastTransitionTime":"2025-10-08T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.597474 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.614835 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.631001 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.648097 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.663357 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs\") pod \"network-metrics-daemon-xbfbp\" (UID: \"3776a5a1-bd0d-42af-9226-7251ee6b8788\") " pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.663420 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ds25\" (UniqueName: \"kubernetes.io/projected/3776a5a1-bd0d-42af-9226-7251ee6b8788-kube-api-access-6ds25\") pod \"network-metrics-daemon-xbfbp\" (UID: \"3776a5a1-bd0d-42af-9226-7251ee6b8788\") " pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.667636 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc 
kubenswrapper[4958]: I1008 06:34:51.690993 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.692845 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.692895 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.692913 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.692938 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.692994 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:51Z","lastTransitionTime":"2025-10-08T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.709390 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.726081 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.744858 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.757241 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.764223 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs\") pod \"network-metrics-daemon-xbfbp\" (UID: \"3776a5a1-bd0d-42af-9226-7251ee6b8788\") " pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.764293 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ds25\" (UniqueName: \"kubernetes.io/projected/3776a5a1-bd0d-42af-9226-7251ee6b8788-kube-api-access-6ds25\") pod \"network-metrics-daemon-xbfbp\" (UID: \"3776a5a1-bd0d-42af-9226-7251ee6b8788\") " pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:34:51 crc kubenswrapper[4958]: E1008 06:34:51.764439 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:34:51 crc kubenswrapper[4958]: E1008 06:34:51.764545 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs podName:3776a5a1-bd0d-42af-9226-7251ee6b8788 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:52.264521908 +0000 UTC m=+35.394214519 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs") pod "network-metrics-daemon-xbfbp" (UID: "3776a5a1-bd0d-42af-9226-7251ee6b8788") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.771535 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.793028 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ds25\" (UniqueName: \"kubernetes.io/projected/3776a5a1-bd0d-42af-9226-7251ee6b8788-kube-api-access-6ds25\") pod \"network-metrics-daemon-xbfbp\" (UID: \"3776a5a1-bd0d-42af-9226-7251ee6b8788\") " pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.796029 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.796093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.796111 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 
06:34:51.796135 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.796153 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:51Z","lastTransitionTime":"2025-10-08T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.803665 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-
dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ec
d6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.827085 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.846014 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.864212 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.895665 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:47Z\\\",\\\"message\\\":\\\"essqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405153 6265 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405499 6265 reflector.go:311] Stopping reflector 
*v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405608 6265 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405701 6265 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405850 6265 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406045 6265 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406272 6265 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406636 6265 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.407123 6265 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:49Z\\\",\\\"message\\\":\\\"ler 3\\\\nI1008 06:34:49.034471 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 06:34:49.033007 6381 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:49.034485 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 06:34:49.034452 6381 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 06:34:49.033051 6381 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:49.034604 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 06:34:49.034663 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 06:34:49.034672 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 06:34:49.034482 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 06:34:49.034708 6381 factory.go:656] Stopping watch factory\\\\nI1008 06:34:49.034732 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 06:34:49.034747 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 06:34:49.034771 6381 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 06:34:49.034815 6381 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"na
me\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.898783 4958 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.898851 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.898869 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.898898 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.898917 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:51Z","lastTransitionTime":"2025-10-08T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.917108 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.926685 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" event={"ID":"f82c7807-9153-4a51-a7c5-b83991a177e7","Type":"ContainerStarted","Data":"e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50"} Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.926750 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" event={"ID":"f82c7807-9153-4a51-a7c5-b83991a177e7","Type":"ContainerStarted","Data":"65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce"} Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.926771 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" event={"ID":"f82c7807-9153-4a51-a7c5-b83991a177e7","Type":"ContainerStarted","Data":"3206bdc28a1e6eaccbef6cccc03dd513f19dc08954d2078778d8b8454921baf7"} Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.936922 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.957189 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:51 crc kubenswrapper[4958]: I1008 06:34:51.989618 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:47Z\\\",\\\"message\\\":\\\"essqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405153 6265 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405499 6265 reflector.go:311] Stopping reflector 
*v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405608 6265 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405701 6265 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405850 6265 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406045 6265 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406272 6265 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406636 6265 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.407123 6265 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:49Z\\\",\\\"message\\\":\\\"ler 3\\\\nI1008 06:34:49.034471 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 06:34:49.033007 6381 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:49.034485 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 06:34:49.034452 6381 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 06:34:49.033051 6381 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:49.034604 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 06:34:49.034663 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 06:34:49.034672 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 06:34:49.034482 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 06:34:49.034708 6381 factory.go:656] Stopping watch factory\\\\nI1008 06:34:49.034732 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 06:34:49.034747 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 06:34:49.034771 6381 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 06:34:49.034815 6381 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"na
me\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:51Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.002063 4958 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.002126 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.002150 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.002183 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.002206 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:52Z","lastTransitionTime":"2025-10-08T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.009763 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.047181 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54
f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.068053 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.086107 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.102629 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.104399 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.104449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.104465 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.104497 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.104510 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:52Z","lastTransitionTime":"2025-10-08T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.122191 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.142028 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.158803 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.179360 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.198442 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.207508 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 
06:34:52.207538 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.207549 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.207564 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.207577 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:52Z","lastTransitionTime":"2025-10-08T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.232415 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.248417 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.261364 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc 
kubenswrapper[4958]: I1008 06:34:52.275859 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs\") pod \"network-metrics-daemon-xbfbp\" (UID: \"3776a5a1-bd0d-42af-9226-7251ee6b8788\") " pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:34:52 crc kubenswrapper[4958]: E1008 06:34:52.276077 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:34:52 crc kubenswrapper[4958]: E1008 06:34:52.276176 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs podName:3776a5a1-bd0d-42af-9226-7251ee6b8788 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:53.276152745 +0000 UTC m=+36.405845516 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs") pod "network-metrics-daemon-xbfbp" (UID: "3776a5a1-bd0d-42af-9226-7251ee6b8788") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.284560 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.299393 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.310130 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.310250 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.310315 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.310389 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.310468 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:52Z","lastTransitionTime":"2025-10-08T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.311237 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.322775 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.341162 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:47Z\\\",\\\"message\\\":\\\"essqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405153 6265 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405499 6265 reflector.go:311] Stopping reflector 
*v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405608 6265 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405701 6265 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405850 6265 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406045 6265 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406272 6265 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406636 6265 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.407123 6265 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:49Z\\\",\\\"message\\\":\\\"ler 3\\\\nI1008 06:34:49.034471 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 06:34:49.033007 6381 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:49.034485 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 06:34:49.034452 6381 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 06:34:49.033051 6381 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:49.034604 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 06:34:49.034663 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 06:34:49.034672 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 06:34:49.034482 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 06:34:49.034708 6381 factory.go:656] Stopping watch factory\\\\nI1008 06:34:49.034732 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 06:34:49.034747 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 06:34:49.034771 6381 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 06:34:49.034815 6381 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"na
me\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.352315 4958 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.368423 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.383357 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.397045 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.409882 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.413577 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.413639 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.413659 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.413684 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.413702 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:52Z","lastTransitionTime":"2025-10-08T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.424531 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.441653 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.461435 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.480675 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.497914 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.516472 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.516751 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.516874 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.517044 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.517181 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:52Z","lastTransitionTime":"2025-10-08T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.520303 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.538249 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:52Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:52 crc 
kubenswrapper[4958]: I1008 06:34:52.621719 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.621774 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.621791 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.621818 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.621837 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:52Z","lastTransitionTime":"2025-10-08T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.725097 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.725178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.725202 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.725236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.725262 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:52Z","lastTransitionTime":"2025-10-08T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.828563 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.828918 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.829146 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.829330 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.829532 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:52Z","lastTransitionTime":"2025-10-08T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.932178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.932238 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.932255 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.932278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:52 crc kubenswrapper[4958]: I1008 06:34:52.932296 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:52Z","lastTransitionTime":"2025-10-08T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.034998 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.035431 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.035604 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.035734 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.035851 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:53Z","lastTransitionTime":"2025-10-08T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.139217 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.139321 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.139340 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.139365 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.139416 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:53Z","lastTransitionTime":"2025-10-08T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.243518 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.243590 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.243608 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.243634 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.243653 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:53Z","lastTransitionTime":"2025-10-08T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.287404 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.287564 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs\") pod \"network-metrics-daemon-xbfbp\" (UID: \"3776a5a1-bd0d-42af-9226-7251ee6b8788\") " pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.287620 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.287660 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.287704 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.287763 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:35:09.287713192 +0000 UTC m=+52.417405833 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.287881 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.287911 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.287930 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.288031 4958 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 06:35:09.28800862 +0000 UTC m=+52.417701261 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.288153 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.288249 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 06:35:09.288226205 +0000 UTC m=+52.417919026 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.288250 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.287863 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.288327 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 06:35:09.288307067 +0000 UTC m=+52.417999988 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.288360 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.288373 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.288426 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.288448 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.288449 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs podName:3776a5a1-bd0d-42af-9226-7251ee6b8788 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:55.28842583 +0000 UTC m=+38.418118471 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs") pod "network-metrics-daemon-xbfbp" (UID: "3776a5a1-bd0d-42af-9226-7251ee6b8788") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.288580 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 06:35:09.288522803 +0000 UTC m=+52.418215434 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.347274 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.347334 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.347351 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.347379 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.347397 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:53Z","lastTransitionTime":"2025-10-08T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.450685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.451050 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.451199 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.451343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.451561 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:53Z","lastTransitionTime":"2025-10-08T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.491252 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.491343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.491370 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.491412 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.491435 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:53Z","lastTransitionTime":"2025-10-08T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.514791 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:53Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.520469 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.520671 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.520829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.520999 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.521178 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:53Z","lastTransitionTime":"2025-10-08T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.541060 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:53Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.545536 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.545741 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.545866 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.546048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.546177 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:53Z","lastTransitionTime":"2025-10-08T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.568515 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:53Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.573593 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.573647 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.573664 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.573688 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.573706 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:53Z","lastTransitionTime":"2025-10-08T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.576238 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.576271 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.576238 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.576769 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.576505 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.577186 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.576441 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.576805 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.595861 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:53Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.601256 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.601334 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.601359 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.601390 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.601412 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:53Z","lastTransitionTime":"2025-10-08T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.627915 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:53Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:53 crc kubenswrapper[4958]: E1008 06:34:53.628234 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.631276 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.631335 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.631354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.631381 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.631399 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:53Z","lastTransitionTime":"2025-10-08T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.734348 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.734419 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.734458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.734499 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.734523 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:53Z","lastTransitionTime":"2025-10-08T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.838355 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.838429 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.838452 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.838481 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.838503 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:53Z","lastTransitionTime":"2025-10-08T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.941859 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.941920 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.941936 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.941998 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:53 crc kubenswrapper[4958]: I1008 06:34:53.942017 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:53Z","lastTransitionTime":"2025-10-08T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.045526 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.045588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.045615 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.045647 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.045672 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:54Z","lastTransitionTime":"2025-10-08T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.149283 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.149385 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.149404 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.149429 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.149451 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:54Z","lastTransitionTime":"2025-10-08T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.255772 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.255845 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.255898 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.255931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.255981 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:54Z","lastTransitionTime":"2025-10-08T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.358553 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.358611 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.358630 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.358651 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.358669 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:54Z","lastTransitionTime":"2025-10-08T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.462237 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.462308 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.462328 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.462354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.462377 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:54Z","lastTransitionTime":"2025-10-08T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.566395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.566464 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.566482 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.566509 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.566529 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:54Z","lastTransitionTime":"2025-10-08T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.670437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.670835 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.671035 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.671196 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.671360 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:54Z","lastTransitionTime":"2025-10-08T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.774985 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.775036 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.775056 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.775081 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.775097 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:54Z","lastTransitionTime":"2025-10-08T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.878585 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.878648 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.878666 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.878688 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.878709 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:54Z","lastTransitionTime":"2025-10-08T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.981816 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.981883 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.981900 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.981926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:54 crc kubenswrapper[4958]: I1008 06:34:54.981942 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:54Z","lastTransitionTime":"2025-10-08T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.085210 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.085275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.085301 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.085330 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.085348 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:55Z","lastTransitionTime":"2025-10-08T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.188300 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.188357 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.188376 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.188402 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.188419 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:55Z","lastTransitionTime":"2025-10-08T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.291096 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.291163 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.291181 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.291206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.291222 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:55Z","lastTransitionTime":"2025-10-08T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.311601 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs\") pod \"network-metrics-daemon-xbfbp\" (UID: \"3776a5a1-bd0d-42af-9226-7251ee6b8788\") " pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:34:55 crc kubenswrapper[4958]: E1008 06:34:55.311781 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:34:55 crc kubenswrapper[4958]: E1008 06:34:55.311868 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs podName:3776a5a1-bd0d-42af-9226-7251ee6b8788 nodeName:}" failed. No retries permitted until 2025-10-08 06:34:59.311843001 +0000 UTC m=+42.441535642 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs") pod "network-metrics-daemon-xbfbp" (UID: "3776a5a1-bd0d-42af-9226-7251ee6b8788") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.393916 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.394040 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.394060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.394093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.394110 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:55Z","lastTransitionTime":"2025-10-08T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.497178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.497244 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.497263 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.497288 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.497305 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:55Z","lastTransitionTime":"2025-10-08T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.575696 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.575760 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:34:55 crc kubenswrapper[4958]: E1008 06:34:55.575846 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.575783 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:55 crc kubenswrapper[4958]: E1008 06:34:55.575984 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.576057 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:55 crc kubenswrapper[4958]: E1008 06:34:55.576159 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:34:55 crc kubenswrapper[4958]: E1008 06:34:55.577511 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.599680 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.599730 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.599759 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.599777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.599791 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:55Z","lastTransitionTime":"2025-10-08T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.702418 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.702489 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.702535 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.702561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.702578 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:55Z","lastTransitionTime":"2025-10-08T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.808332 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.809293 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.809333 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.809381 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.809406 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:55Z","lastTransitionTime":"2025-10-08T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.913106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.913164 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.913187 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.913216 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:55 crc kubenswrapper[4958]: I1008 06:34:55.913238 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:55Z","lastTransitionTime":"2025-10-08T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.015548 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.015618 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.015669 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.015695 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.015715 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:56Z","lastTransitionTime":"2025-10-08T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.118865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.118935 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.119000 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.119036 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.119059 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:56Z","lastTransitionTime":"2025-10-08T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.222117 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.222205 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.222230 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.222262 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.222284 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:56Z","lastTransitionTime":"2025-10-08T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.325640 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.325710 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.325727 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.325756 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.325774 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:56Z","lastTransitionTime":"2025-10-08T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.428793 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.428884 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.428908 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.428940 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.428999 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:56Z","lastTransitionTime":"2025-10-08T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.532722 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.532778 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.532795 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.532822 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.532840 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:56Z","lastTransitionTime":"2025-10-08T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.635695 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.635783 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.635808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.635844 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.635866 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:56Z","lastTransitionTime":"2025-10-08T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.739016 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.739157 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.739180 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.739203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.739220 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:56Z","lastTransitionTime":"2025-10-08T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.842274 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.842335 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.842351 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.842375 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.842393 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:56Z","lastTransitionTime":"2025-10-08T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.946044 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.946083 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.946093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.946106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:56 crc kubenswrapper[4958]: I1008 06:34:56.946115 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:56Z","lastTransitionTime":"2025-10-08T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.049394 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.049473 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.049493 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.049518 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.049535 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:57Z","lastTransitionTime":"2025-10-08T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.152829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.152906 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.152933 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.152993 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.153018 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:57Z","lastTransitionTime":"2025-10-08T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.257062 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.257159 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.257183 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.257209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.257229 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:57Z","lastTransitionTime":"2025-10-08T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.359432 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.359470 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.359481 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.359496 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.359507 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:57Z","lastTransitionTime":"2025-10-08T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.462656 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.462692 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.462706 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.462722 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.462733 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:57Z","lastTransitionTime":"2025-10-08T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.566473 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.566544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.566568 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.566598 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.566620 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:57Z","lastTransitionTime":"2025-10-08T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.576056 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.576156 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:57 crc kubenswrapper[4958]: E1008 06:34:57.576245 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.576326 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.576765 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:57 crc kubenswrapper[4958]: E1008 06:34:57.576716 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:34:57 crc kubenswrapper[4958]: E1008 06:34:57.577000 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:34:57 crc kubenswrapper[4958]: E1008 06:34:57.577205 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.595062 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.615985 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.645230 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c87d884e4f0a0105c17a0fee0a197aac2e8ae88f703310f920b6e1f35c539df7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:47Z\\\",\\\"message\\\":\\\"essqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405153 6265 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405499 6265 reflector.go:311] Stopping reflector 
*v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:47.405608 6265 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405701 6265 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.405850 6265 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406045 6265 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406272 6265 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.406636 6265 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:47.407123 6265 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:49Z\\\",\\\"message\\\":\\\"ler 3\\\\nI1008 06:34:49.034471 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 06:34:49.033007 6381 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:49.034485 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 06:34:49.034452 6381 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 06:34:49.033051 6381 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:49.034604 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 06:34:49.034663 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 06:34:49.034672 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 06:34:49.034482 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 06:34:49.034708 6381 factory.go:656] Stopping watch factory\\\\nI1008 06:34:49.034732 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 06:34:49.034747 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 06:34:49.034771 6381 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 06:34:49.034815 6381 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"na
me\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.661403 4958 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.671037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.671133 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.671159 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.671190 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.671212 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:57Z","lastTransitionTime":"2025-10-08T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.696883 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.719798 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\"
,\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.737192 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.758462 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.775412 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.775478 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.775495 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.775524 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.775544 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:57Z","lastTransitionTime":"2025-10-08T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.780794 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c28
3cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.797281 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.819793 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.838640 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.865543 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.880367 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.880395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.880405 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.880418 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.880427 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:57Z","lastTransitionTime":"2025-10-08T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.882577 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.911697 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc 
kubenswrapper[4958]: I1008 06:34:57.933074 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.949915 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:34:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.983304 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.983354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.983371 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.983393 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:57 crc kubenswrapper[4958]: I1008 06:34:57.983411 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:57Z","lastTransitionTime":"2025-10-08T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.086070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.086121 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.086138 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.086161 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.086179 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:58Z","lastTransitionTime":"2025-10-08T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.189434 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.189509 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.189526 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.189551 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.189566 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:58Z","lastTransitionTime":"2025-10-08T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.292243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.292419 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.292447 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.292482 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.292504 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:58Z","lastTransitionTime":"2025-10-08T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.395314 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.395408 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.395426 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.395450 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.395467 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:58Z","lastTransitionTime":"2025-10-08T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.498221 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.498274 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.498294 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.498318 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.498335 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:58Z","lastTransitionTime":"2025-10-08T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.602011 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.602093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.602111 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.602140 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.602182 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:58Z","lastTransitionTime":"2025-10-08T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.705381 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.705452 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.705462 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.705480 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.705495 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:58Z","lastTransitionTime":"2025-10-08T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.808246 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.808318 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.808336 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.808362 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.808379 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:58Z","lastTransitionTime":"2025-10-08T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.911816 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.911889 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.911917 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.911988 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:58 crc kubenswrapper[4958]: I1008 06:34:58.912015 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:58Z","lastTransitionTime":"2025-10-08T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.015263 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.015313 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.015324 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.015342 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.015355 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:59Z","lastTransitionTime":"2025-10-08T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.118216 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.118282 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.118299 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.118325 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.118342 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:59Z","lastTransitionTime":"2025-10-08T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.221457 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.221519 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.221538 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.221560 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.221578 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:59Z","lastTransitionTime":"2025-10-08T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.324701 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.324778 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.324795 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.324820 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.324837 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:59Z","lastTransitionTime":"2025-10-08T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.361768 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs\") pod \"network-metrics-daemon-xbfbp\" (UID: \"3776a5a1-bd0d-42af-9226-7251ee6b8788\") " pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:34:59 crc kubenswrapper[4958]: E1008 06:34:59.362023 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:34:59 crc kubenswrapper[4958]: E1008 06:34:59.362129 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs podName:3776a5a1-bd0d-42af-9226-7251ee6b8788 nodeName:}" failed. No retries permitted until 2025-10-08 06:35:07.362100274 +0000 UTC m=+50.491792915 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs") pod "network-metrics-daemon-xbfbp" (UID: "3776a5a1-bd0d-42af-9226-7251ee6b8788") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.428059 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.428130 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.428152 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.428181 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.428205 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:59Z","lastTransitionTime":"2025-10-08T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.530746 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.530897 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.530922 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.531001 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.531028 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:59Z","lastTransitionTime":"2025-10-08T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.575906 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.576027 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.576030 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.576179 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:34:59 crc kubenswrapper[4958]: E1008 06:34:59.576593 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:34:59 crc kubenswrapper[4958]: E1008 06:34:59.576732 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:34:59 crc kubenswrapper[4958]: E1008 06:34:59.576895 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:34:59 crc kubenswrapper[4958]: E1008 06:34:59.577061 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.633812 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.633860 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.633877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.633900 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.633917 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:59Z","lastTransitionTime":"2025-10-08T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.737813 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.737887 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.737910 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.737939 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.738016 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:59Z","lastTransitionTime":"2025-10-08T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.841877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.841935 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.841985 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.842008 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.842026 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:59Z","lastTransitionTime":"2025-10-08T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.945146 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.945212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.945229 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.945255 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:34:59 crc kubenswrapper[4958]: I1008 06:34:59.945274 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:34:59Z","lastTransitionTime":"2025-10-08T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.048437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.048506 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.048534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.048564 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.048587 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:00Z","lastTransitionTime":"2025-10-08T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.151613 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.151700 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.151723 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.151754 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.151778 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:00Z","lastTransitionTime":"2025-10-08T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.254351 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.254417 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.254442 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.254468 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.254490 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:00Z","lastTransitionTime":"2025-10-08T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.357430 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.357502 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.357525 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.357552 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.357575 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:00Z","lastTransitionTime":"2025-10-08T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.461092 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.461160 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.461181 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.461214 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.461237 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:00Z","lastTransitionTime":"2025-10-08T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.563240 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.563301 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.563317 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.563341 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.563357 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:00Z","lastTransitionTime":"2025-10-08T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.665730 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.665793 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.665809 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.665834 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.665851 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:00Z","lastTransitionTime":"2025-10-08T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.769978 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.770039 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.770057 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.770084 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.770100 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:00Z","lastTransitionTime":"2025-10-08T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.873487 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.873556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.873575 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.873600 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.873619 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:00Z","lastTransitionTime":"2025-10-08T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.976350 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.976418 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.976441 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.976467 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:00 crc kubenswrapper[4958]: I1008 06:35:00.976489 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:00Z","lastTransitionTime":"2025-10-08T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.080579 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.080644 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.080707 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.080732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.080750 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:01Z","lastTransitionTime":"2025-10-08T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.184046 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.184114 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.184128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.184150 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.184165 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:01Z","lastTransitionTime":"2025-10-08T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.293202 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.293359 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.293468 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.293637 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.293671 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:01Z","lastTransitionTime":"2025-10-08T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.398562 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.398647 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.398667 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.398702 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.398728 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:01Z","lastTransitionTime":"2025-10-08T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.501629 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.501679 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.501694 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.501719 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.501735 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:01Z","lastTransitionTime":"2025-10-08T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.576074 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.576162 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:01 crc kubenswrapper[4958]: E1008 06:35:01.576245 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.576176 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.576394 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:01 crc kubenswrapper[4958]: E1008 06:35:01.576548 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:01 crc kubenswrapper[4958]: E1008 06:35:01.576899 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:01 crc kubenswrapper[4958]: E1008 06:35:01.577852 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.604819 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.604906 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.604926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.604981 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.604999 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:01Z","lastTransitionTime":"2025-10-08T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.708193 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.708269 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.708287 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.708314 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.708334 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:01Z","lastTransitionTime":"2025-10-08T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.811425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.811847 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.811892 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.811917 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.811936 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:01Z","lastTransitionTime":"2025-10-08T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.930379 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.930455 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.930477 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.930505 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:01 crc kubenswrapper[4958]: I1008 06:35:01.930525 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:01Z","lastTransitionTime":"2025-10-08T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.033544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.033597 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.033619 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.033644 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.033660 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:02Z","lastTransitionTime":"2025-10-08T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.137016 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.137064 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.137107 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.137129 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.137147 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:02Z","lastTransitionTime":"2025-10-08T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.240261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.240335 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.240355 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.240388 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.240405 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:02Z","lastTransitionTime":"2025-10-08T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.310453 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.311370 4958 scope.go:117] "RemoveContainer" containerID="98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.335060 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2
025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.346625 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.346675 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.346689 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.346711 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.346726 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:02Z","lastTransitionTime":"2025-10-08T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.371160 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:49Z\\\",\\\"message\\\":\\\"ler 3\\\\nI1008 06:34:49.034471 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 06:34:49.033007 6381 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:49.034485 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 06:34:49.034452 6381 handler.go:190] 
Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 06:34:49.033051 6381 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:49.034604 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 06:34:49.034663 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 06:34:49.034672 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 06:34:49.034482 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 06:34:49.034708 6381 factory.go:656] Stopping watch factory\\\\nI1008 06:34:49.034732 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 06:34:49.034747 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 06:34:49.034771 6381 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 06:34:49.034815 6381 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52
b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.390617 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.424532 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06
:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.442713 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\"
,\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.449500 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.449555 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.449570 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.449593 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.449609 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:02Z","lastTransitionTime":"2025-10-08T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.461244 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.478479 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.497122 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.513495 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.529273 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.542847 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.552384 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.552435 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.552450 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.552472 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.552487 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:02Z","lastTransitionTime":"2025-10-08T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.562609 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.578538 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a68
9f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.592463 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc 
kubenswrapper[4958]: I1008 06:35:02.611925 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.632846 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.651593 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.655642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 
06:35:02.655691 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.655708 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.655733 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.655750 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:02Z","lastTransitionTime":"2025-10-08T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.758512 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.758590 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.758603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.758619 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.758630 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:02Z","lastTransitionTime":"2025-10-08T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.861128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.861178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.861192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.861209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.861222 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:02Z","lastTransitionTime":"2025-10-08T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.964325 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.964380 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.964395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.964416 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.964430 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:02Z","lastTransitionTime":"2025-10-08T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.968582 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/1.log" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.971559 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerStarted","Data":"5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec"} Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.972224 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:35:02 crc kubenswrapper[4958]: I1008 06:35:02.991164 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:02Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.006933 4958 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.021579 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.034512 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.045593 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.063077 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a68
9f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.066988 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.067059 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.067077 4958 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.067101 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.067120 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:03Z","lastTransitionTime":"2025-10-08T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.078816 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 
08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.097252 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.118158 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.135796 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.157477 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.187209 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:49Z\\\",\\\"message\\\":\\\"ler 3\\\\nI1008 06:34:49.034471 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 06:34:49.033007 6381 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:49.034485 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 06:34:49.034452 6381 handler.go:190] 
Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 06:34:49.033051 6381 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:49.034604 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 06:34:49.034663 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 06:34:49.034672 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 06:34:49.034482 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 06:34:49.034708 6381 factory.go:656] Stopping watch factory\\\\nI1008 06:34:49.034732 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 06:34:49.034747 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 06:34:49.034771 6381 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 06:34:49.034815 6381 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.201219 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.230840 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06
:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.249339 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.249396 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.249414 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.249437 4958 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.249458 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:03Z","lastTransitionTime":"2025-10-08T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.249647 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.268352 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.289564 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.352152 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:03 crc 
kubenswrapper[4958]: I1008 06:35:03.352216 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.352236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.352263 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.352280 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:03Z","lastTransitionTime":"2025-10-08T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.455166 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.455237 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.455254 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.455280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.455300 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:03Z","lastTransitionTime":"2025-10-08T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.558695 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.558751 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.558762 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.558784 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.558801 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:03Z","lastTransitionTime":"2025-10-08T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.576138 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.576160 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.576157 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:03 crc kubenswrapper[4958]: E1008 06:35:03.576272 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.576307 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:03 crc kubenswrapper[4958]: E1008 06:35:03.576343 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:03 crc kubenswrapper[4958]: E1008 06:35:03.576489 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:03 crc kubenswrapper[4958]: E1008 06:35:03.576693 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.662251 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.662324 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.662334 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.662353 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.662365 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:03Z","lastTransitionTime":"2025-10-08T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.765403 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.765980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.766182 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.766324 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.766522 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:03Z","lastTransitionTime":"2025-10-08T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.840809 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.840900 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.840919 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.840944 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.841007 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:03Z","lastTransitionTime":"2025-10-08T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:03 crc kubenswrapper[4958]: E1008 06:35:03.862045 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.868160 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.868280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.868305 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.868341 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.868359 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:03Z","lastTransitionTime":"2025-10-08T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:03 crc kubenswrapper[4958]: E1008 06:35:03.888821 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.894000 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.894074 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.894100 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.894131 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.894155 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:03Z","lastTransitionTime":"2025-10-08T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.921603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.921665 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.921684 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.921710 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.921729 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:03Z","lastTransitionTime":"2025-10-08T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:03 crc kubenswrapper[4958]: E1008 06:35:03.942654 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.947884 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.947983 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.948014 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.948043 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.948062 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:03Z","lastTransitionTime":"2025-10-08T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:03 crc kubenswrapper[4958]: E1008 06:35:03.966075 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:03Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:03 crc kubenswrapper[4958]: E1008 06:35:03.966631 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.969177 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.969245 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.969264 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.969292 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.969312 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:03Z","lastTransitionTime":"2025-10-08T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.978589 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/2.log" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.979761 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/1.log" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.985171 4958 generic.go:334] "Generic (PLEG): container finished" podID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerID="5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec" exitCode=1 Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.985230 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerDied","Data":"5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec"} Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.985279 4958 scope.go:117] "RemoveContainer" containerID="98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043" Oct 08 06:35:03 crc kubenswrapper[4958]: I1008 06:35:03.986343 4958 scope.go:117] "RemoveContainer" containerID="5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec" Oct 08 06:35:03 crc kubenswrapper[4958]: E1008 06:35:03.986686 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.011598 4958 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.028571 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.048108 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.072968 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.073384 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.073539 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.073640 4958 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.073732 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:04Z","lastTransitionTime":"2025-10-08T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.076034 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.096087 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.115557 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.141176 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.157167 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.173533 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc 
kubenswrapper[4958]: I1008 06:35:04.177678 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.177761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.177787 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.177823 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.177847 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:04Z","lastTransitionTime":"2025-10-08T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.194017 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.213878 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.234533 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.254316 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.278749 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98cb343645f85058d7a538a2ed95c915cdd1b2a8ae241340910f9e891c258043\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:34:49Z\\\",\\\"message\\\":\\\"ler 3\\\\nI1008 06:34:49.034471 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI1008 06:34:49.033007 6381 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1008 06:34:49.034485 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI1008 06:34:49.034452 6381 handler.go:190] 
Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1008 06:34:49.033051 6381 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1008 06:34:49.034604 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1008 06:34:49.034663 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1008 06:34:49.034672 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1008 06:34:49.034482 6381 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1008 06:34:49.034708 6381 factory.go:656] Stopping watch factory\\\\nI1008 06:34:49.034732 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1008 06:34:49.034747 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1008 06:34:49.034771 6381 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1008 06:34:49.034815 6381 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"nished syncing service openshift on namespace default for network=default : 16.551µs\\\\nI1008 06:35:03.290123 6597 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry-operator for network=default\\\\nI1008 06:35:03.290127 6597 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1008 06:35:03.290130 6597 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 06:35:03.290134 6597 services_controller.go:360] Finished syncing service image-registry-operator on namespace 
openshift-image-registry for network=default : 11.3µs\\\\nI1008 06:35:03.290155 6597 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-source for network=default\\\\nI1008 06:35:03.290165 6597 services_controller.go:360] Finished syncing service network-check-source on namespace openshift-network-diagnostics for network=default : 9.97µs\\\\nI1008 06:35:03.290171 6597 ovnkube.go:599] Stopped ovnkube\\\\nI1008 06:35:03.290190 6597 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.234297469 seconds. No OVN measurement.\\\\nI1008 06:35:03.290194 6597 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 06:35:03.290287 6597 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath
\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.281857 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.281894 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.281906 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.281924 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.281937 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:04Z","lastTransitionTime":"2025-10-08T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.294640 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.330647 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06
:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.352529 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\"
,\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:04Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.390245 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.390317 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.390337 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.390363 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.390380 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:04Z","lastTransitionTime":"2025-10-08T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.492833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.492926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.492955 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.492978 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.492989 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:04Z","lastTransitionTime":"2025-10-08T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.595908 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.596014 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.596040 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.596069 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.596092 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:04Z","lastTransitionTime":"2025-10-08T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.699146 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.699211 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.699228 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.699255 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.699271 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:04Z","lastTransitionTime":"2025-10-08T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.802503 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.802546 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.802557 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.802576 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.802588 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:04Z","lastTransitionTime":"2025-10-08T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.905011 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.905049 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.905058 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.905072 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.905084 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:04Z","lastTransitionTime":"2025-10-08T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.992581 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/2.log" Oct 08 06:35:04 crc kubenswrapper[4958]: I1008 06:35:04.998618 4958 scope.go:117] "RemoveContainer" containerID="5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec" Oct 08 06:35:04 crc kubenswrapper[4958]: E1008 06:35:04.999158 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.012379 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.012431 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.012448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.012473 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.012491 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:05Z","lastTransitionTime":"2025-10-08T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.012661 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.046786 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.068835 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.086692 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.101905 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.115369 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:05 crc 
kubenswrapper[4958]: I1008 06:35:05.115467 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.115491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.115520 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.115539 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:05Z","lastTransitionTime":"2025-10-08T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.132919 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"nished syncing service openshift on namespace default for network=default : 
16.551µs\\\\nI1008 06:35:03.290123 6597 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry-operator for network=default\\\\nI1008 06:35:03.290127 6597 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1008 06:35:03.290130 6597 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 06:35:03.290134 6597 services_controller.go:360] Finished syncing service image-registry-operator on namespace openshift-image-registry for network=default : 11.3µs\\\\nI1008 06:35:03.290155 6597 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-source for network=default\\\\nI1008 06:35:03.290165 6597 services_controller.go:360] Finished syncing service network-check-source on namespace openshift-network-diagnostics for network=default : 9.97µs\\\\nI1008 06:35:03.290171 6597 ovnkube.go:599] Stopped ovnkube\\\\nI1008 06:35:03.290190 6597 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.234297469 seconds. 
No OVN measurement.\\\\nI1008 06:35:03.290194 6597 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 06:35:03.290287 6597 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:35:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\
",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.150926 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.169537 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.182327 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.193108 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.208373 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.218603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.218653 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.218671 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:05 crc 
kubenswrapper[4958]: I1008 06:35:05.218693 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.218710 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:05Z","lastTransitionTime":"2025-10-08T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.223009 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc 
kubenswrapper[4958]: I1008 06:35:05.241810 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.258363 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.273931 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.292241 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.303019 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-08T06:35:05Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.321491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.321535 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.321547 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.321563 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.321573 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:05Z","lastTransitionTime":"2025-10-08T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.425180 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.425257 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.425275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.425303 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.425322 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:05Z","lastTransitionTime":"2025-10-08T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.528503 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.529038 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.529239 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.529437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.529607 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:05Z","lastTransitionTime":"2025-10-08T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.576035 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.576117 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.576072 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.576062 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:05 crc kubenswrapper[4958]: E1008 06:35:05.576229 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:05 crc kubenswrapper[4958]: E1008 06:35:05.576366 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:05 crc kubenswrapper[4958]: E1008 06:35:05.576540 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:05 crc kubenswrapper[4958]: E1008 06:35:05.576729 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.634156 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.634232 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.634261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.634294 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.634322 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:05Z","lastTransitionTime":"2025-10-08T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.737936 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.738024 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.738042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.738067 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.738089 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:05Z","lastTransitionTime":"2025-10-08T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.840983 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.841060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.841081 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.841109 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.841127 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:05Z","lastTransitionTime":"2025-10-08T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.944445 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.944510 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.944521 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.944537 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:05 crc kubenswrapper[4958]: I1008 06:35:05.944546 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:05Z","lastTransitionTime":"2025-10-08T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.047491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.047556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.047574 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.047599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.047617 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:06Z","lastTransitionTime":"2025-10-08T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.151402 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.151464 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.151482 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.151507 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.151525 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:06Z","lastTransitionTime":"2025-10-08T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.255520 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.255604 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.255628 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.255661 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.255686 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:06Z","lastTransitionTime":"2025-10-08T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.358527 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.358596 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.358620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.358649 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.358672 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:06Z","lastTransitionTime":"2025-10-08T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.461751 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.461815 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.461832 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.461854 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.461872 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:06Z","lastTransitionTime":"2025-10-08T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.564772 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.564833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.564851 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.564878 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.564897 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:06Z","lastTransitionTime":"2025-10-08T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.668240 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.668314 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.668336 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.668373 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.668399 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:06Z","lastTransitionTime":"2025-10-08T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.772078 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.772148 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.772167 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.772190 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.772207 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:06Z","lastTransitionTime":"2025-10-08T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.875398 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.875462 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.875480 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.875505 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.875526 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:06Z","lastTransitionTime":"2025-10-08T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.978146 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.978210 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.978230 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.978257 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:06 crc kubenswrapper[4958]: I1008 06:35:06.978274 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:06Z","lastTransitionTime":"2025-10-08T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.080878 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.081001 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.081021 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.081046 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.081063 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:07Z","lastTransitionTime":"2025-10-08T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.185736 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.185815 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.185847 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.185876 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.185894 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:07Z","lastTransitionTime":"2025-10-08T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.288318 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.288369 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.288386 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.288413 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.288429 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:07Z","lastTransitionTime":"2025-10-08T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.387511 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs\") pod \"network-metrics-daemon-xbfbp\" (UID: \"3776a5a1-bd0d-42af-9226-7251ee6b8788\") " pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:07 crc kubenswrapper[4958]: E1008 06:35:07.387786 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:35:07 crc kubenswrapper[4958]: E1008 06:35:07.387912 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs podName:3776a5a1-bd0d-42af-9226-7251ee6b8788 nodeName:}" failed. No retries permitted until 2025-10-08 06:35:23.387880496 +0000 UTC m=+66.517573147 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs") pod "network-metrics-daemon-xbfbp" (UID: "3776a5a1-bd0d-42af-9226-7251ee6b8788") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.391666 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.391739 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.391761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.391791 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.391814 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:07Z","lastTransitionTime":"2025-10-08T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.494467 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.494520 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.494539 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.494561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.494578 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:07Z","lastTransitionTime":"2025-10-08T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.575825 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.575880 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.575930 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:07 crc kubenswrapper[4958]: E1008 06:35:07.576071 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.576093 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:07 crc kubenswrapper[4958]: E1008 06:35:07.576208 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:07 crc kubenswrapper[4958]: E1008 06:35:07.576373 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:07 crc kubenswrapper[4958]: E1008 06:35:07.576427 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.597355 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.597428 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.597446 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.597468 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.597485 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:07Z","lastTransitionTime":"2025-10-08T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.602861 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.619623 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.635279 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc 
kubenswrapper[4958]: I1008 06:35:07.652869 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.670245 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.701225 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.701296 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.701313 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.701343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.701360 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:07Z","lastTransitionTime":"2025-10-08T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.703472 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.728808 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.761380 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"nished syncing service openshift on namespace default for network=default : 16.551µs\\\\nI1008 06:35:03.290123 6597 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry-operator for network=default\\\\nI1008 06:35:03.290127 6597 controller.go:132] Adding controller ef_node_controller event 
handlers\\\\nI1008 06:35:03.290130 6597 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 06:35:03.290134 6597 services_controller.go:360] Finished syncing service image-registry-operator on namespace openshift-image-registry for network=default : 11.3µs\\\\nI1008 06:35:03.290155 6597 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-source for network=default\\\\nI1008 06:35:03.290165 6597 services_controller.go:360] Finished syncing service network-check-source on namespace openshift-network-diagnostics for network=default : 9.97µs\\\\nI1008 06:35:03.290171 6597 ovnkube.go:599] Stopped ovnkube\\\\nI1008 06:35:03.290190 6597 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.234297469 seconds. No OVN measurement.\\\\nI1008 06:35:03.290194 6597 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 06:35:03.290287 6597 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:35:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52
b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.775615 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.801378 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06
:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.806122 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.806188 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.806206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.806233 4958 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.806253 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:07Z","lastTransitionTime":"2025-10-08T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.832154 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.847687 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.870662 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.891113 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.910150 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.910187 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.910198 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.910170 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.910215 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.910430 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:07Z","lastTransitionTime":"2025-10-08T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.926707 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335a
c39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:07 crc kubenswrapper[4958]: I1008 06:35:07.940922 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:07Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.013104 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.013166 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.013189 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.013221 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.013243 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:08Z","lastTransitionTime":"2025-10-08T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.116752 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.116825 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.116845 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.116870 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.116889 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:08Z","lastTransitionTime":"2025-10-08T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.219911 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.220071 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.220099 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.220132 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.220157 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:08Z","lastTransitionTime":"2025-10-08T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.324050 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.324134 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.324153 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.324184 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.324204 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:08Z","lastTransitionTime":"2025-10-08T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.427709 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.427782 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.427805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.427833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.427853 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:08Z","lastTransitionTime":"2025-10-08T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.531463 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.531533 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.531551 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.531575 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.531593 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:08Z","lastTransitionTime":"2025-10-08T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.635020 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.635107 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.635132 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.635164 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.635183 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:08Z","lastTransitionTime":"2025-10-08T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.739368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.739451 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.739474 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.739497 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.739514 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:08Z","lastTransitionTime":"2025-10-08T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.842243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.842299 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.842316 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.842344 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.842361 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:08Z","lastTransitionTime":"2025-10-08T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.945618 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.945704 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.945729 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.945762 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:08 crc kubenswrapper[4958]: I1008 06:35:08.945786 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:08Z","lastTransitionTime":"2025-10-08T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.048822 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.048893 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.048910 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.048937 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.048991 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:09Z","lastTransitionTime":"2025-10-08T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.152112 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.152179 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.152198 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.152232 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.152251 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:09Z","lastTransitionTime":"2025-10-08T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.255437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.255516 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.255534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.255560 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.255581 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:09Z","lastTransitionTime":"2025-10-08T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.310254 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.310470 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.310513 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:35:41.310468982 +0000 UTC m=+84.440161643 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.310621 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.310708 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.310800 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.310797 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.310900 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.310975 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.310981 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.310997 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.311006 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.311011 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.311119 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-10-08 06:35:41.311092738 +0000 UTC m=+84.440785369 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.311168 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.311189 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 06:35:41.311134429 +0000 UTC m=+84.440827060 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.311827 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 06:35:41.311529029 +0000 UTC m=+84.441221670 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.317692 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 06:35:41.31764192 +0000 UTC m=+84.447334551 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.359484 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.359561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.359583 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.359612 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.359636 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:09Z","lastTransitionTime":"2025-10-08T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.462840 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.462912 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.462931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.463018 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.463036 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:09Z","lastTransitionTime":"2025-10-08T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.566468 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.566528 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.566545 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.566567 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.566584 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:09Z","lastTransitionTime":"2025-10-08T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.576091 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.576122 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.576275 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.576374 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.576443 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.576562 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.576648 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:09 crc kubenswrapper[4958]: E1008 06:35:09.576725 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.669376 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.669431 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.669448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.669471 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.669488 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:09Z","lastTransitionTime":"2025-10-08T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.772940 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.773048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.773066 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.773091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.773108 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:09Z","lastTransitionTime":"2025-10-08T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.876531 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.876599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.876622 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.876652 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.876675 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:09Z","lastTransitionTime":"2025-10-08T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.979914 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.980009 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.980031 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.980055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:09 crc kubenswrapper[4958]: I1008 06:35:09.980073 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:09Z","lastTransitionTime":"2025-10-08T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.082825 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.082905 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.082929 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.082988 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.083011 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:10Z","lastTransitionTime":"2025-10-08T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.186794 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.186857 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.186881 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.186911 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.186933 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:10Z","lastTransitionTime":"2025-10-08T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.290119 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.290192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.290210 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.290236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.290253 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:10Z","lastTransitionTime":"2025-10-08T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.393674 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.393741 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.393760 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.393793 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.393816 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:10Z","lastTransitionTime":"2025-10-08T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.496537 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.496637 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.496689 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.496713 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.496730 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:10Z","lastTransitionTime":"2025-10-08T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.599282 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.599345 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.599355 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.599369 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.599379 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:10Z","lastTransitionTime":"2025-10-08T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.702687 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.702738 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.702752 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.702769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.702779 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:10Z","lastTransitionTime":"2025-10-08T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.806130 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.806256 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.806286 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.806325 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.806353 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:10Z","lastTransitionTime":"2025-10-08T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.887058 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.900652 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.910645 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.910818 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.910848 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.910881 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.910913 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:10Z","lastTransitionTime":"2025-10-08T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.913883 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:10Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.933801 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:10Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.949118 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:10Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:10 crc 
kubenswrapper[4958]: I1008 06:35:10.966808 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:10Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:10 crc kubenswrapper[4958]: I1008 06:35:10.984385 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:10Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.004079 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:11Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.013236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 
06:35:11.013270 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.013280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.013295 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.013306 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:11Z","lastTransitionTime":"2025-10-08T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.019267 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:11Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.038616 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"nished syncing service openshift on namespace default for network=default : 16.551µs\\\\nI1008 06:35:03.290123 6597 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry-operator for network=default\\\\nI1008 06:35:03.290127 6597 controller.go:132] Adding controller ef_node_controller event 
handlers\\\\nI1008 06:35:03.290130 6597 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 06:35:03.290134 6597 services_controller.go:360] Finished syncing service image-registry-operator on namespace openshift-image-registry for network=default : 11.3µs\\\\nI1008 06:35:03.290155 6597 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-source for network=default\\\\nI1008 06:35:03.290165 6597 services_controller.go:360] Finished syncing service network-check-source on namespace openshift-network-diagnostics for network=default : 9.97µs\\\\nI1008 06:35:03.290171 6597 ovnkube.go:599] Stopped ovnkube\\\\nI1008 06:35:03.290190 6597 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.234297469 seconds. No OVN measurement.\\\\nI1008 06:35:03.290194 6597 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 06:35:03.290287 6597 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:35:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52
b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:11Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.050467 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:11Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.072775 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06
:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:11Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.094670 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\"
,\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:11Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.109411 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:11Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.116881 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.116933 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.116957 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:11 crc 
kubenswrapper[4958]: I1008 06:35:11.116973 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.116984 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:11Z","lastTransitionTime":"2025-10-08T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.130749 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:11Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.145567 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:11Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.161404 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:11Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.178171 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:11Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.193726 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:11Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.219910 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.220009 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.220037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.220068 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.220090 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:11Z","lastTransitionTime":"2025-10-08T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.322989 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.323056 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.323092 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.323130 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.323154 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:11Z","lastTransitionTime":"2025-10-08T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.426204 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.426278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.426302 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.426329 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.426352 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:11Z","lastTransitionTime":"2025-10-08T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.529118 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.529205 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.529228 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.529258 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.529279 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:11Z","lastTransitionTime":"2025-10-08T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.575725 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.575769 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.575862 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:11 crc kubenswrapper[4958]: E1008 06:35:11.576149 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.576223 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:11 crc kubenswrapper[4958]: E1008 06:35:11.576420 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:11 crc kubenswrapper[4958]: E1008 06:35:11.576595 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:11 crc kubenswrapper[4958]: E1008 06:35:11.576717 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.632712 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.632767 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.632785 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.632807 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.632826 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:11Z","lastTransitionTime":"2025-10-08T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.736447 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.736551 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.736568 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.736593 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.736610 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:11Z","lastTransitionTime":"2025-10-08T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.839559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.839616 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.839627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.839645 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.839659 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:11Z","lastTransitionTime":"2025-10-08T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.944723 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.944788 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.944805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.944831 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:11 crc kubenswrapper[4958]: I1008 06:35:11.944851 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:11Z","lastTransitionTime":"2025-10-08T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.048326 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.048400 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.048419 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.048481 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.048500 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:12Z","lastTransitionTime":"2025-10-08T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.151588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.151670 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.151687 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.151713 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.151735 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:12Z","lastTransitionTime":"2025-10-08T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.255301 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.255373 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.255394 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.255421 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.255440 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:12Z","lastTransitionTime":"2025-10-08T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.358699 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.358760 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.358776 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.358800 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.358837 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:12Z","lastTransitionTime":"2025-10-08T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.462234 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.462315 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.462336 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.462362 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.462380 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:12Z","lastTransitionTime":"2025-10-08T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.566363 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.566444 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.566461 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.566491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.566508 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:12Z","lastTransitionTime":"2025-10-08T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.670031 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.670096 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.670114 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.670139 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.670156 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:12Z","lastTransitionTime":"2025-10-08T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.773686 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.773744 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.773753 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.773770 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.773779 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:12Z","lastTransitionTime":"2025-10-08T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.876509 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.876565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.876582 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.876609 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.876626 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:12Z","lastTransitionTime":"2025-10-08T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.978726 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.978796 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.978821 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.978854 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:12 crc kubenswrapper[4958]: I1008 06:35:12.978879 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:12Z","lastTransitionTime":"2025-10-08T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.082242 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.082313 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.082331 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.082362 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.082382 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:13Z","lastTransitionTime":"2025-10-08T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.185810 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.185878 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.185899 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.185926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.185977 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:13Z","lastTransitionTime":"2025-10-08T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.288936 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.289045 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.289068 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.289099 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.289120 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:13Z","lastTransitionTime":"2025-10-08T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.397247 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.397309 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.397370 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.397404 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.397429 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:13Z","lastTransitionTime":"2025-10-08T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.501053 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.501112 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.501129 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.501154 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.501172 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:13Z","lastTransitionTime":"2025-10-08T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.576514 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.576594 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:13 crc kubenswrapper[4958]: E1008 06:35:13.576719 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.576820 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:13 crc kubenswrapper[4958]: E1008 06:35:13.576918 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:13 crc kubenswrapper[4958]: E1008 06:35:13.577042 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.577118 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:13 crc kubenswrapper[4958]: E1008 06:35:13.577254 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.604289 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.604348 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.604373 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.604400 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.604421 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:13Z","lastTransitionTime":"2025-10-08T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.707425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.707494 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.707513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.707542 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.707562 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:13Z","lastTransitionTime":"2025-10-08T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.810735 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.811354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.811458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.811571 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.811663 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:13Z","lastTransitionTime":"2025-10-08T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.915249 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.915322 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.915347 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.915378 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:13 crc kubenswrapper[4958]: I1008 06:35:13.915397 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:13Z","lastTransitionTime":"2025-10-08T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.018264 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.018351 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.018369 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.018397 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.018417 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:14Z","lastTransitionTime":"2025-10-08T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.121390 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.121492 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.121514 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.121544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.121561 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:14Z","lastTransitionTime":"2025-10-08T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.134025 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.134092 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.134113 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.134138 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.134156 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:14Z","lastTransitionTime":"2025-10-08T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:14 crc kubenswrapper[4958]: E1008 06:35:14.156158 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:14Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.161279 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.161342 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.161362 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.161387 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.161405 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:14Z","lastTransitionTime":"2025-10-08T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:14 crc kubenswrapper[4958]: E1008 06:35:14.182475 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:14Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.187513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.187586 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.187610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.187642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.187669 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:14Z","lastTransitionTime":"2025-10-08T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:14 crc kubenswrapper[4958]: E1008 06:35:14.207808 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:14Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.212657 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.212711 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.212729 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.212754 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.212772 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:14Z","lastTransitionTime":"2025-10-08T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:14 crc kubenswrapper[4958]: E1008 06:35:14.233497 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:14Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.238054 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.238111 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.238130 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.238154 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.238170 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:14Z","lastTransitionTime":"2025-10-08T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:14 crc kubenswrapper[4958]: E1008 06:35:14.260332 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:14Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:14 crc kubenswrapper[4958]: E1008 06:35:14.260627 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.263042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.263089 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.263106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.263129 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.263145 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:14Z","lastTransitionTime":"2025-10-08T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.366134 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.366194 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.366211 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.366236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.366252 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:14Z","lastTransitionTime":"2025-10-08T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.469477 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.469512 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.469524 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.469537 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.469546 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:14Z","lastTransitionTime":"2025-10-08T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.572356 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.572407 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.572423 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.572448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.572468 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:14Z","lastTransitionTime":"2025-10-08T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.677558 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.678609 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.678642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.678667 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.678683 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:14Z","lastTransitionTime":"2025-10-08T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.782243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.782325 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.782342 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.782368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.782385 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:14Z","lastTransitionTime":"2025-10-08T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.885030 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.885100 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.885118 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.885145 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.885163 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:14Z","lastTransitionTime":"2025-10-08T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.988249 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.988306 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.988323 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.988347 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:14 crc kubenswrapper[4958]: I1008 06:35:14.988364 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:14Z","lastTransitionTime":"2025-10-08T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.091546 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.091612 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.091631 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.091657 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.091675 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:15Z","lastTransitionTime":"2025-10-08T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.194704 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.194768 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.194786 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.194811 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.194832 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:15Z","lastTransitionTime":"2025-10-08T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.298999 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.299060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.299076 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.299101 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.299120 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:15Z","lastTransitionTime":"2025-10-08T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.401990 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.402055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.402071 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.402099 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.402116 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:15Z","lastTransitionTime":"2025-10-08T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.505089 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.505157 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.505175 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.505203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.505221 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:15Z","lastTransitionTime":"2025-10-08T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.576295 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.576378 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:15 crc kubenswrapper[4958]: E1008 06:35:15.576512 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.576553 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:15 crc kubenswrapper[4958]: E1008 06:35:15.576674 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.576766 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:15 crc kubenswrapper[4958]: E1008 06:35:15.576818 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:15 crc kubenswrapper[4958]: E1008 06:35:15.577011 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.608001 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.608103 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.608126 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.608150 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.608167 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:15Z","lastTransitionTime":"2025-10-08T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.711425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.711489 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.711508 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.711532 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.711552 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:15Z","lastTransitionTime":"2025-10-08T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.815167 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.815238 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.815255 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.815279 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.815296 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:15Z","lastTransitionTime":"2025-10-08T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.918484 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.918542 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.918559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.918584 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:15 crc kubenswrapper[4958]: I1008 06:35:15.918601 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:15Z","lastTransitionTime":"2025-10-08T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.021782 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.021847 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.021865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.021892 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.021910 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:16Z","lastTransitionTime":"2025-10-08T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.124856 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.124926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.124984 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.125017 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.125041 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:16Z","lastTransitionTime":"2025-10-08T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.227928 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.227999 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.228011 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.228034 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.228047 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:16Z","lastTransitionTime":"2025-10-08T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.331027 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.331088 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.331105 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.331128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.331147 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:16Z","lastTransitionTime":"2025-10-08T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.434913 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.435011 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.435029 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.435053 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.435071 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:16Z","lastTransitionTime":"2025-10-08T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.538789 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.538843 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.538860 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.538883 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.538900 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:16Z","lastTransitionTime":"2025-10-08T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.642626 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.642689 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.642703 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.642728 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.642742 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:16Z","lastTransitionTime":"2025-10-08T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.746416 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.746456 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.746465 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.746477 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.746485 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:16Z","lastTransitionTime":"2025-10-08T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.848919 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.849015 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.849033 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.849060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.849078 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:16Z","lastTransitionTime":"2025-10-08T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.952360 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.952410 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.952426 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.952453 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:16 crc kubenswrapper[4958]: I1008 06:35:16.952471 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:16Z","lastTransitionTime":"2025-10-08T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.055470 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.055528 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.055545 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.055569 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.055587 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:17Z","lastTransitionTime":"2025-10-08T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.158837 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.158897 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.158914 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.158938 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.158991 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:17Z","lastTransitionTime":"2025-10-08T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.262452 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.262536 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.262587 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.262625 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.262652 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:17Z","lastTransitionTime":"2025-10-08T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.366407 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.366482 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.366507 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.366539 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.366556 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:17Z","lastTransitionTime":"2025-10-08T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.469544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.469596 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.469613 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.469637 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.469655 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:17Z","lastTransitionTime":"2025-10-08T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.574170 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.574225 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.574248 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.574275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.574293 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:17Z","lastTransitionTime":"2025-10-08T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.575552 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.575570 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.575642 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.577152 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:17 crc kubenswrapper[4958]: E1008 06:35:17.577321 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:17 crc kubenswrapper[4958]: E1008 06:35:17.577585 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:17 crc kubenswrapper[4958]: E1008 06:35:17.577692 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:17 crc kubenswrapper[4958]: E1008 06:35:17.577447 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.601865 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.626889 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.650501 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.668514 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.678107 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.678159 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.678179 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.678206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.678224 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:17Z","lastTransitionTime":"2025-10-08T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.688797 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.710285 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.731426 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.753073 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.776651 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.781817 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.782080 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.782235 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.782385 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.782526 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:17Z","lastTransitionTime":"2025-10-08T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.794563 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.811865 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc 
kubenswrapper[4958]: I1008 06:35:17.849299 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.872716 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 
06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.885350 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.885411 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.885424 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.885441 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.885455 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:17Z","lastTransitionTime":"2025-10-08T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.890409 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad858fc2-ef46-4d33-87a8-0c33b7ea97b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2979041e4b3d16f257fced2d640df556490010fc445c5a9a3dc55f85140e857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f0eedcc5121ad6845bb3b2a014de
d0bfb69bee54a478f92e12932261bd7db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a820b0e0e391451e24b2442c9a3402574d7c4f7b105eb7a993b4cf243cecb46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.909327 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.929521 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.959054 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"nished syncing service openshift on namespace default for network=default : 16.551µs\\\\nI1008 06:35:03.290123 6597 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry-operator for network=default\\\\nI1008 06:35:03.290127 6597 controller.go:132] Adding controller ef_node_controller event 
handlers\\\\nI1008 06:35:03.290130 6597 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 06:35:03.290134 6597 services_controller.go:360] Finished syncing service image-registry-operator on namespace openshift-image-registry for network=default : 11.3µs\\\\nI1008 06:35:03.290155 6597 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-source for network=default\\\\nI1008 06:35:03.290165 6597 services_controller.go:360] Finished syncing service network-check-source on namespace openshift-network-diagnostics for network=default : 9.97µs\\\\nI1008 06:35:03.290171 6597 ovnkube.go:599] Stopped ovnkube\\\\nI1008 06:35:03.290190 6597 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.234297469 seconds. No OVN measurement.\\\\nI1008 06:35:03.290194 6597 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 06:35:03.290287 6597 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:35:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52
b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.973606 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:17Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.988171 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.988205 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.988214 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.988230 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:17 crc kubenswrapper[4958]: I1008 06:35:17.988256 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:17Z","lastTransitionTime":"2025-10-08T06:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.092360 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.092430 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.092446 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.092470 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.092486 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:18Z","lastTransitionTime":"2025-10-08T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.195244 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.195300 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.195318 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.195347 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.195370 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:18Z","lastTransitionTime":"2025-10-08T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.298583 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.298645 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.298662 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.298688 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.298706 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:18Z","lastTransitionTime":"2025-10-08T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.402224 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.402289 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.402305 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.402330 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.402346 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:18Z","lastTransitionTime":"2025-10-08T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.505736 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.505803 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.505826 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.505857 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.505879 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:18Z","lastTransitionTime":"2025-10-08T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.577286 4958 scope.go:117] "RemoveContainer" containerID="5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec" Oct 08 06:35:18 crc kubenswrapper[4958]: E1008 06:35:18.577544 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.609376 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.609419 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.609436 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.609456 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.609472 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:18Z","lastTransitionTime":"2025-10-08T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.712350 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.712412 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.712432 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.712456 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.712474 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:18Z","lastTransitionTime":"2025-10-08T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.814850 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.814908 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.814925 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.814977 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.814996 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:18Z","lastTransitionTime":"2025-10-08T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.918693 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.918793 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.918813 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.918850 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:18 crc kubenswrapper[4958]: I1008 06:35:18.918871 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:18Z","lastTransitionTime":"2025-10-08T06:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.021921 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.022031 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.022055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.022087 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.022111 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:19Z","lastTransitionTime":"2025-10-08T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.124719 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.124790 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.124806 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.124831 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.124849 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:19Z","lastTransitionTime":"2025-10-08T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.228268 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.228347 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.228370 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.228395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.228411 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:19Z","lastTransitionTime":"2025-10-08T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.339015 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.339105 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.339129 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.339163 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.339190 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:19Z","lastTransitionTime":"2025-10-08T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.443159 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.443260 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.443282 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.443307 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.443324 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:19Z","lastTransitionTime":"2025-10-08T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.546141 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.546221 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.546245 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.546277 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.546298 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:19Z","lastTransitionTime":"2025-10-08T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.578279 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.578602 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.578359 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:19 crc kubenswrapper[4958]: E1008 06:35:19.578796 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.578364 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:19 crc kubenswrapper[4958]: E1008 06:35:19.579009 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:19 crc kubenswrapper[4958]: E1008 06:35:19.579131 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:19 crc kubenswrapper[4958]: E1008 06:35:19.579436 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.649608 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.649833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.649857 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.649885 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.649906 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:19Z","lastTransitionTime":"2025-10-08T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.753076 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.753585 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.753923 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.754133 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.754264 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:19Z","lastTransitionTime":"2025-10-08T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.858112 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.859001 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.859254 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.859454 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.859809 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:19Z","lastTransitionTime":"2025-10-08T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.962467 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.962704 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.962829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.963016 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:19 crc kubenswrapper[4958]: I1008 06:35:19.963140 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:19Z","lastTransitionTime":"2025-10-08T06:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.066524 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.066586 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.066603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.066626 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.066646 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:20Z","lastTransitionTime":"2025-10-08T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.169664 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.169783 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.169800 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.169824 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.169840 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:20Z","lastTransitionTime":"2025-10-08T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.272862 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.273212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.273402 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.273544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.273699 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:20Z","lastTransitionTime":"2025-10-08T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.376987 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.377416 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.377619 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.377856 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.378085 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:20Z","lastTransitionTime":"2025-10-08T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.481661 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.481693 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.481702 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.481717 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.481726 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:20Z","lastTransitionTime":"2025-10-08T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.585186 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.585220 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.585229 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.585243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.585254 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:20Z","lastTransitionTime":"2025-10-08T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.689492 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.689920 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.690378 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.690664 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.690918 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:20Z","lastTransitionTime":"2025-10-08T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.794308 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.794748 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.795161 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.795472 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.795792 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:20Z","lastTransitionTime":"2025-10-08T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.899093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.900120 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.900176 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.900206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:20 crc kubenswrapper[4958]: I1008 06:35:20.900226 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:20Z","lastTransitionTime":"2025-10-08T06:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.002736 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.002774 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.002788 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.002803 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.002816 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:21Z","lastTransitionTime":"2025-10-08T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.105389 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.105433 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.105448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.105472 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.105488 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:21Z","lastTransitionTime":"2025-10-08T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.208042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.208098 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.208108 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.208124 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.208135 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:21Z","lastTransitionTime":"2025-10-08T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.310681 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.310740 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.310756 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.310780 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.310797 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:21Z","lastTransitionTime":"2025-10-08T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.413687 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.413746 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.413762 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.413785 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.413801 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:21Z","lastTransitionTime":"2025-10-08T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.516875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.517148 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.517160 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.517214 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.517236 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:21Z","lastTransitionTime":"2025-10-08T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.575423 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.575456 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.575471 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:21 crc kubenswrapper[4958]: E1008 06:35:21.575575 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.575620 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:21 crc kubenswrapper[4958]: E1008 06:35:21.575680 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:21 crc kubenswrapper[4958]: E1008 06:35:21.575815 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:21 crc kubenswrapper[4958]: E1008 06:35:21.575846 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.619073 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.619110 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.619119 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.619139 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.619149 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:21Z","lastTransitionTime":"2025-10-08T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.722051 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.722115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.722133 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.722157 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.722177 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:21Z","lastTransitionTime":"2025-10-08T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.825634 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.825692 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.825709 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.825731 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.825752 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:21Z","lastTransitionTime":"2025-10-08T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.928151 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.928212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.928229 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.928252 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:21 crc kubenswrapper[4958]: I1008 06:35:21.928269 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:21Z","lastTransitionTime":"2025-10-08T06:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.031164 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.031201 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.031209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.031221 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.031231 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:22Z","lastTransitionTime":"2025-10-08T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.133896 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.133984 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.134002 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.134029 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.134053 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:22Z","lastTransitionTime":"2025-10-08T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.236714 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.236795 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.236821 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.236853 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.236876 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:22Z","lastTransitionTime":"2025-10-08T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.339783 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.339851 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.339876 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.339906 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.339927 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:22Z","lastTransitionTime":"2025-10-08T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.442567 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.442667 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.442678 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.442689 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.442696 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:22Z","lastTransitionTime":"2025-10-08T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.544868 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.544929 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.544974 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.544999 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.545014 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:22Z","lastTransitionTime":"2025-10-08T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.648175 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.648239 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.648256 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.648285 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.648303 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:22Z","lastTransitionTime":"2025-10-08T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.750848 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.750930 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.750986 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.751018 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.751041 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:22Z","lastTransitionTime":"2025-10-08T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.853875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.853937 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.853988 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.854015 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.854032 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:22Z","lastTransitionTime":"2025-10-08T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.957265 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.957337 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.957357 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.957386 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:22 crc kubenswrapper[4958]: I1008 06:35:22.957405 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:22Z","lastTransitionTime":"2025-10-08T06:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.061082 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.061119 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.061128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.061142 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.061152 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:23Z","lastTransitionTime":"2025-10-08T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.162914 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.163003 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.163021 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.163046 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.163065 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:23Z","lastTransitionTime":"2025-10-08T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.265829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.265884 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.265901 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.265927 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.265970 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:23Z","lastTransitionTime":"2025-10-08T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.368677 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.368724 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.368737 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.368757 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.368769 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:23Z","lastTransitionTime":"2025-10-08T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.471205 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.471272 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.471290 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.471314 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.471331 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:23Z","lastTransitionTime":"2025-10-08T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.481962 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs\") pod \"network-metrics-daemon-xbfbp\" (UID: \"3776a5a1-bd0d-42af-9226-7251ee6b8788\") " pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:23 crc kubenswrapper[4958]: E1008 06:35:23.482147 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:35:23 crc kubenswrapper[4958]: E1008 06:35:23.482245 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs podName:3776a5a1-bd0d-42af-9226-7251ee6b8788 nodeName:}" failed. No retries permitted until 2025-10-08 06:35:55.482215266 +0000 UTC m=+98.611907907 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs") pod "network-metrics-daemon-xbfbp" (UID: "3776a5a1-bd0d-42af-9226-7251ee6b8788") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.574499 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.574579 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.574603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.574632 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.574651 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:23Z","lastTransitionTime":"2025-10-08T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.576154 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.576200 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:23 crc kubenswrapper[4958]: E1008 06:35:23.576294 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.576309 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.576334 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:23 crc kubenswrapper[4958]: E1008 06:35:23.576450 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:23 crc kubenswrapper[4958]: E1008 06:35:23.576514 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:23 crc kubenswrapper[4958]: E1008 06:35:23.576579 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.677740 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.677793 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.677808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.677830 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.677844 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:23Z","lastTransitionTime":"2025-10-08T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.780591 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.780630 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.780641 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.780656 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.780670 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:23Z","lastTransitionTime":"2025-10-08T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.883305 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.883348 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.883356 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.883372 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.883380 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:23Z","lastTransitionTime":"2025-10-08T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.986775 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.986827 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.986843 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.986868 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:23 crc kubenswrapper[4958]: I1008 06:35:23.986886 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:23Z","lastTransitionTime":"2025-10-08T06:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.089107 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.089147 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.089158 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.089172 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.089182 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:24Z","lastTransitionTime":"2025-10-08T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.192939 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.193014 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.193032 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.193055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.193071 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:24Z","lastTransitionTime":"2025-10-08T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.295477 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.295761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.295914 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.296142 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.296310 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:24Z","lastTransitionTime":"2025-10-08T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.398626 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.398696 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.398714 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.398742 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.398768 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:24Z","lastTransitionTime":"2025-10-08T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.501467 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.501881 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.502062 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.502206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.502365 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:24Z","lastTransitionTime":"2025-10-08T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.589388 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.589449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.589472 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.589499 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.589519 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:24Z","lastTransitionTime":"2025-10-08T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:24 crc kubenswrapper[4958]: E1008 06:35:24.609158 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:24Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.613906 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.614001 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.614025 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.614055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.614081 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:24Z","lastTransitionTime":"2025-10-08T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:24 crc kubenswrapper[4958]: E1008 06:35:24.630731 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:24Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.634799 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.634854 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.634871 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.634894 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.634912 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:24Z","lastTransitionTime":"2025-10-08T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:24 crc kubenswrapper[4958]: E1008 06:35:24.647075 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:24Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.650748 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.650803 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.650821 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.650844 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.650864 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:24Z","lastTransitionTime":"2025-10-08T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:24 crc kubenswrapper[4958]: E1008 06:35:24.666126 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:24Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.671056 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.671107 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.671125 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.671147 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.671165 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:24Z","lastTransitionTime":"2025-10-08T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:24 crc kubenswrapper[4958]: E1008 06:35:24.689064 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:24Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:24 crc kubenswrapper[4958]: E1008 06:35:24.689387 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.691338 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.691397 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.691415 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.691441 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.691459 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:24Z","lastTransitionTime":"2025-10-08T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.795326 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.795415 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.795443 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.795477 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.795499 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:24Z","lastTransitionTime":"2025-10-08T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.898541 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.898602 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.898620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.898644 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:24 crc kubenswrapper[4958]: I1008 06:35:24.898661 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:24Z","lastTransitionTime":"2025-10-08T06:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.001970 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.002040 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.002059 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.002086 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.002107 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:25Z","lastTransitionTime":"2025-10-08T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.075287 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hfzs9_0718b244-4835-4551-9013-6b3741845bb4/kube-multus/0.log" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.075357 4958 generic.go:334] "Generic (PLEG): container finished" podID="0718b244-4835-4551-9013-6b3741845bb4" containerID="de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819" exitCode=1 Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.075401 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hfzs9" event={"ID":"0718b244-4835-4551-9013-6b3741845bb4","Type":"ContainerDied","Data":"de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819"} Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.075925 4958 scope.go:117] "RemoveContainer" containerID="de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.093137 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.105821 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.106170 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.106253 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.106264 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.106277 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.106379 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:25Z","lastTransitionTime":"2025-10-08T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.121771 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.136985 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.155406 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.170646 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc 
kubenswrapper[4958]: I1008 06:35:25.193290 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b4
00618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 
shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.209338 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad858fc2-ef46-4d33-87a8-0c33b7ea97b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2979041e4b3d16f257fced2d640df556490010fc445c5a9a3dc55f85140e857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f0eedcc5121ad6845bb3b2a014ded0bfb69bee54a478f92e12932261bd7db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a820b0e0e391451e24b2442c9a3402574d7c4f7b105eb7a993b4cf243cecb46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.210120 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.210156 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.210167 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.210185 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.210195 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:25Z","lastTransitionTime":"2025-10-08T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.229329 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.244183 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:25Z\\\",\\\"message\\\":\\\"2025-10-08T06:34:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624\\\\n2025-10-08T06:34:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624 to /host/opt/cni/bin/\\\\n2025-10-08T06:34:40Z [verbose] multus-daemon started\\\\n2025-10-08T06:34:40Z [verbose] Readiness Indicator file check\\\\n2025-10-08T06:35:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.266274 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"nished syncing service openshift on namespace default for network=default : 16.551µs\\\\nI1008 06:35:03.290123 6597 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry-operator for network=default\\\\nI1008 06:35:03.290127 6597 controller.go:132] Adding controller ef_node_controller event 
handlers\\\\nI1008 06:35:03.290130 6597 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 06:35:03.290134 6597 services_controller.go:360] Finished syncing service image-registry-operator on namespace openshift-image-registry for network=default : 11.3µs\\\\nI1008 06:35:03.290155 6597 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-source for network=default\\\\nI1008 06:35:03.290165 6597 services_controller.go:360] Finished syncing service network-check-source on namespace openshift-network-diagnostics for network=default : 9.97µs\\\\nI1008 06:35:03.290171 6597 ovnkube.go:599] Stopped ovnkube\\\\nI1008 06:35:03.290190 6597 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.234297469 seconds. No OVN measurement.\\\\nI1008 06:35:03.290194 6597 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 06:35:03.290287 6597 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:35:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52
b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.279319 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.300418 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06
:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.314099 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.314138 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.314156 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.314180 4958 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.314199 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:25Z","lastTransitionTime":"2025-10-08T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.317863 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.334423 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.349495 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.364930 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853f
f38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
5-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.386344 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:25Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.416882 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.416920 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.416932 4958 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.416968 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.416983 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:25Z","lastTransitionTime":"2025-10-08T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.521390 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.521803 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.522017 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.522181 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.522357 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:25Z","lastTransitionTime":"2025-10-08T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.577159 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.577270 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.577507 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:25 crc kubenswrapper[4958]: E1008 06:35:25.577499 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.577544 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:25 crc kubenswrapper[4958]: E1008 06:35:25.577669 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:25 crc kubenswrapper[4958]: E1008 06:35:25.577718 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:25 crc kubenswrapper[4958]: E1008 06:35:25.577800 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.625311 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.625373 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.625390 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.625416 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.625433 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:25Z","lastTransitionTime":"2025-10-08T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.728248 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.728307 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.728325 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.728350 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.728368 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:25Z","lastTransitionTime":"2025-10-08T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.831133 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.831453 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.831531 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.831626 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.831719 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:25Z","lastTransitionTime":"2025-10-08T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.934206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.934248 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.934259 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.934277 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:25 crc kubenswrapper[4958]: I1008 06:35:25.934292 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:25Z","lastTransitionTime":"2025-10-08T06:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.037405 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.037455 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.037467 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.037485 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.037500 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:26Z","lastTransitionTime":"2025-10-08T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.080419 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hfzs9_0718b244-4835-4551-9013-6b3741845bb4/kube-multus/0.log" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.080483 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hfzs9" event={"ID":"0718b244-4835-4551-9013-6b3741845bb4","Type":"ContainerStarted","Data":"d471e3c183709b97571c7a6c1dc430de5a6900f018e7c231cdde2ea6699a9580"} Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.095431 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.116992 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.130988 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.141259 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.141323 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.141333 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.141349 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.141357 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:26Z","lastTransitionTime":"2025-10-08T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.141381 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc 
kubenswrapper[4958]: I1008 06:35:26.151159 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.166877 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.184520 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.197636 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d471e3c183709b97571c7a6c1dc430de5a6900f018e7c231cdde2ea6699a9580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:25Z\\\",\\\"message\\\":\\\"2025-10-08T06:34:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624\\\\n2025-10-08T06:34:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624 to /host/opt/cni/bin/\\\\n2025-10-08T06:34:40Z [verbose] multus-daemon started\\\\n2025-10-08T06:34:40Z [verbose] 
Readiness Indicator file check\\\\n2025-10-08T06:35:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.225705 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"nished syncing service openshift on namespace default for network=default : 
16.551µs\\\\nI1008 06:35:03.290123 6597 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry-operator for network=default\\\\nI1008 06:35:03.290127 6597 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1008 06:35:03.290130 6597 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 06:35:03.290134 6597 services_controller.go:360] Finished syncing service image-registry-operator on namespace openshift-image-registry for network=default : 11.3µs\\\\nI1008 06:35:03.290155 6597 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-source for network=default\\\\nI1008 06:35:03.290165 6597 services_controller.go:360] Finished syncing service network-check-source on namespace openshift-network-diagnostics for network=default : 9.97µs\\\\nI1008 06:35:03.290171 6597 ovnkube.go:599] Stopped ovnkube\\\\nI1008 06:35:03.290190 6597 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.234297469 seconds. 
No OVN measurement.\\\\nI1008 06:35:03.290194 6597 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 06:35:03.290287 6597 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:35:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\
",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.240847 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.243214 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.243250 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.243261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.243277 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.243290 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:26Z","lastTransitionTime":"2025-10-08T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.275445 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.291789 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\"
,\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.304823 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad858fc2-ef46-4d33-87a8-0c33b7ea97b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2979041e4b3d16f257fced2d640df556490010fc445c5a9a3dc55f85140e857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f0eedcc5121ad6845bb3b2a014ded0bfb69bee54a478f92e12932261bd7db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a820b0e0e391451e24b2442c9a3402574d7c4f7b105eb7a993b4cf243cecb46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.317926 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4
d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.333505 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.345384 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.345460 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.345472 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:26 crc 
kubenswrapper[4958]: I1008 06:35:26.345489 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.345500 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:26Z","lastTransitionTime":"2025-10-08T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.347507 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.363270 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.376172 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:26Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.447768 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.447825 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.447839 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.447857 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.447869 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:26Z","lastTransitionTime":"2025-10-08T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.550211 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.550304 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.550321 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.550345 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.550363 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:26Z","lastTransitionTime":"2025-10-08T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.652858 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.652940 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.653008 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.653042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.653063 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:26Z","lastTransitionTime":"2025-10-08T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.755766 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.755829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.755849 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.755873 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.755890 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:26Z","lastTransitionTime":"2025-10-08T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.858355 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.858395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.858404 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.858421 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.858433 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:26Z","lastTransitionTime":"2025-10-08T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.960009 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.960048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.960060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.960074 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:26 crc kubenswrapper[4958]: I1008 06:35:26.960084 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:26Z","lastTransitionTime":"2025-10-08T06:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.062763 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.063539 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.063673 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.063828 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.063975 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:27Z","lastTransitionTime":"2025-10-08T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.166043 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.166090 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.166103 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.166123 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.166138 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:27Z","lastTransitionTime":"2025-10-08T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.268435 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.268466 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.268474 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.268485 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.268494 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:27Z","lastTransitionTime":"2025-10-08T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.371256 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.371291 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.371299 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.371310 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.371320 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:27Z","lastTransitionTime":"2025-10-08T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.474001 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.474069 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.474122 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.474151 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.474173 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:27Z","lastTransitionTime":"2025-10-08T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.575632 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.575664 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:27 crc kubenswrapper[4958]: E1008 06:35:27.575763 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.575782 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.575642 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:27 crc kubenswrapper[4958]: E1008 06:35:27.575856 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:27 crc kubenswrapper[4958]: E1008 06:35:27.575921 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:27 crc kubenswrapper[4958]: E1008 06:35:27.575983 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.576370 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.576392 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.576403 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.576417 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.576428 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:27Z","lastTransitionTime":"2025-10-08T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.593253 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.610569 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.629536 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.651350 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.667645 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.679535 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc 
kubenswrapper[4958]: I1008 06:35:27.681794 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.681823 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.681834 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.681849 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.681858 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:27Z","lastTransitionTime":"2025-10-08T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.713142 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.730515 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\"
,\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.745199 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad858fc2-ef46-4d33-87a8-0c33b7ea97b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2979041e4b3d16f257fced2d640df556490010fc445c5a9a3dc55f85140e857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f0eedcc5121ad6845bb3b2a014ded0bfb69bee54a478f92e12932261bd7db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a820b0e0e391451e24b2442c9a3402574d7c4f7b105eb7a993b4cf243cecb46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.760725 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.774212 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d471e3c183709b97571c7a6c1dc430de5a6900f018e7c231cdde2ea6699a9580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:25Z\\\",\\\"message\\\":\\\"2025-10-08T06:34:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624\\\\n2025-10-08T06:34:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624 to /host/opt/cni/bin/\\\\n2025-10-08T06:34:40Z [verbose] multus-daemon started\\\\n2025-10-08T06:34:40Z [verbose] 
Readiness Indicator file check\\\\n2025-10-08T06:35:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.788675 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.788753 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.788775 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.788805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.788840 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:27Z","lastTransitionTime":"2025-10-08T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.796543 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"nished syncing service openshift on namespace default for network=default : 16.551µs\\\\nI1008 06:35:03.290123 6597 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry-operator for network=default\\\\nI1008 06:35:03.290127 6597 controller.go:132] Adding controller ef_node_controller event 
handlers\\\\nI1008 06:35:03.290130 6597 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 06:35:03.290134 6597 services_controller.go:360] Finished syncing service image-registry-operator on namespace openshift-image-registry for network=default : 11.3µs\\\\nI1008 06:35:03.290155 6597 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-source for network=default\\\\nI1008 06:35:03.290165 6597 services_controller.go:360] Finished syncing service network-check-source on namespace openshift-network-diagnostics for network=default : 9.97µs\\\\nI1008 06:35:03.290171 6597 ovnkube.go:599] Stopped ovnkube\\\\nI1008 06:35:03.290190 6597 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.234297469 seconds. No OVN measurement.\\\\nI1008 06:35:03.290194 6597 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 06:35:03.290287 6597 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:35:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52
b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.806732 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.818916 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.835208 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.846350 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.855883 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.866504 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:27Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.892023 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.892111 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.892123 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:27 crc 
kubenswrapper[4958]: I1008 06:35:27.892149 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.892161 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:27Z","lastTransitionTime":"2025-10-08T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.994543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.994608 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.994622 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.994643 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:27 crc kubenswrapper[4958]: I1008 06:35:27.994659 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:27Z","lastTransitionTime":"2025-10-08T06:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.118091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.118151 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.118172 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.118201 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.118221 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:28Z","lastTransitionTime":"2025-10-08T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.221045 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.221092 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.221112 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.221138 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.221156 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:28Z","lastTransitionTime":"2025-10-08T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.324267 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.324318 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.324330 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.324349 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.324363 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:28Z","lastTransitionTime":"2025-10-08T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.427418 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.427453 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.427463 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.427479 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.427489 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:28Z","lastTransitionTime":"2025-10-08T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.529560 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.529595 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.529604 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.529620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.529630 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:28Z","lastTransitionTime":"2025-10-08T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.633078 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.633113 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.633168 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.633185 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.633194 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:28Z","lastTransitionTime":"2025-10-08T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.735571 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.735898 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.736058 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.736246 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.736385 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:28Z","lastTransitionTime":"2025-10-08T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.840556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.840602 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.840611 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.840627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.840637 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:28Z","lastTransitionTime":"2025-10-08T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.943804 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.944098 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.944247 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.944422 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:28 crc kubenswrapper[4958]: I1008 06:35:28.944588 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:28Z","lastTransitionTime":"2025-10-08T06:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.047504 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.047555 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.047576 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.047603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.047628 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:29Z","lastTransitionTime":"2025-10-08T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.150652 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.150705 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.150721 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.150746 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.150762 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:29Z","lastTransitionTime":"2025-10-08T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.253375 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.253420 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.253435 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.253457 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.253473 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:29Z","lastTransitionTime":"2025-10-08T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.356890 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.356935 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.356968 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.356987 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.356998 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:29Z","lastTransitionTime":"2025-10-08T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.461225 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.461604 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.461748 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.461890 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.462074 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:29Z","lastTransitionTime":"2025-10-08T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.566209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.566278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.566290 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.566308 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.566325 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:29Z","lastTransitionTime":"2025-10-08T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.575539 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.575577 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:29 crc kubenswrapper[4958]: E1008 06:35:29.575711 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.575736 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:29 crc kubenswrapper[4958]: E1008 06:35:29.575861 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:29 crc kubenswrapper[4958]: E1008 06:35:29.576000 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.576109 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:29 crc kubenswrapper[4958]: E1008 06:35:29.576363 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.669624 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.669665 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.669679 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.669697 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.669708 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:29Z","lastTransitionTime":"2025-10-08T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.773644 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.773705 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.773724 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.773749 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.773768 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:29Z","lastTransitionTime":"2025-10-08T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.875920 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.875995 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.876010 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.876030 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.876042 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:29Z","lastTransitionTime":"2025-10-08T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.978810 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.978875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.978892 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.978922 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:29 crc kubenswrapper[4958]: I1008 06:35:29.978938 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:29Z","lastTransitionTime":"2025-10-08T06:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.082249 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.082317 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.082336 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.082364 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.082382 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:30Z","lastTransitionTime":"2025-10-08T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.185802 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.185861 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.185879 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.185906 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.185927 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:30Z","lastTransitionTime":"2025-10-08T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.289482 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.289539 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.289558 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.289581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.289598 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:30Z","lastTransitionTime":"2025-10-08T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.392552 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.392602 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.392619 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.392642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.392660 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:30Z","lastTransitionTime":"2025-10-08T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.495110 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.495158 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.495176 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.495198 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.495215 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:30Z","lastTransitionTime":"2025-10-08T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.597466 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.597515 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.597539 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.597565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.597585 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:30Z","lastTransitionTime":"2025-10-08T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.700794 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.700867 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.700885 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.700916 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.700940 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:30Z","lastTransitionTime":"2025-10-08T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.803444 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.803508 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.803524 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.803549 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.803566 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:30Z","lastTransitionTime":"2025-10-08T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.906403 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.906472 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.906489 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.906513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:30 crc kubenswrapper[4958]: I1008 06:35:30.906532 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:30Z","lastTransitionTime":"2025-10-08T06:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.010115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.010168 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.010179 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.010201 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.010214 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:31Z","lastTransitionTime":"2025-10-08T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.113406 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.113458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.113472 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.113491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.113503 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:31Z","lastTransitionTime":"2025-10-08T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.216732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.216807 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.216832 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.216866 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.216889 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:31Z","lastTransitionTime":"2025-10-08T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.319719 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.319760 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.319769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.319783 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.319792 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:31Z","lastTransitionTime":"2025-10-08T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.422441 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.422498 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.422514 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.422536 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.422551 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:31Z","lastTransitionTime":"2025-10-08T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.525595 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.525657 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.525682 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.525706 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.525723 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:31Z","lastTransitionTime":"2025-10-08T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.576077 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.576123 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.576151 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.576082 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:31 crc kubenswrapper[4958]: E1008 06:35:31.576189 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:31 crc kubenswrapper[4958]: E1008 06:35:31.576251 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:31 crc kubenswrapper[4958]: E1008 06:35:31.576384 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:31 crc kubenswrapper[4958]: E1008 06:35:31.576539 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.628389 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.628499 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.628527 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.628561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.628585 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:31Z","lastTransitionTime":"2025-10-08T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.731524 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.731611 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.731671 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.731706 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.731730 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:31Z","lastTransitionTime":"2025-10-08T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.838261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.838341 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.838368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.838397 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.838419 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:31Z","lastTransitionTime":"2025-10-08T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.941356 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.941433 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.941456 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.941485 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:31 crc kubenswrapper[4958]: I1008 06:35:31.941507 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:31Z","lastTransitionTime":"2025-10-08T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.045831 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.045928 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.045990 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.046029 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.046068 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:32Z","lastTransitionTime":"2025-10-08T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.149520 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.149581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.149599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.149622 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.149639 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:32Z","lastTransitionTime":"2025-10-08T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.253053 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.253109 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.253126 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.253149 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.253167 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:32Z","lastTransitionTime":"2025-10-08T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.356019 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.356065 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.356081 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.356103 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.356119 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:32Z","lastTransitionTime":"2025-10-08T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.458179 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.458230 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.458247 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.458269 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.458285 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:32Z","lastTransitionTime":"2025-10-08T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.561830 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.561890 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.561908 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.561933 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.561987 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:32Z","lastTransitionTime":"2025-10-08T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.665560 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.665624 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.665648 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.665677 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.665700 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:32Z","lastTransitionTime":"2025-10-08T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.769321 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.769384 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.769401 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.769428 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.769446 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:32Z","lastTransitionTime":"2025-10-08T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.872643 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.872692 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.872703 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.872720 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.872732 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:32Z","lastTransitionTime":"2025-10-08T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.975447 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.975520 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.975543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.975578 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:32 crc kubenswrapper[4958]: I1008 06:35:32.975598 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:32Z","lastTransitionTime":"2025-10-08T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.079307 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.079360 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.079376 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.079399 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.079416 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:33Z","lastTransitionTime":"2025-10-08T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.181996 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.182071 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.182093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.182123 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.182145 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:33Z","lastTransitionTime":"2025-10-08T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.284354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.284425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.284448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.284471 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.284487 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:33Z","lastTransitionTime":"2025-10-08T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.387047 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.387166 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.387188 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.387217 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.387237 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:33Z","lastTransitionTime":"2025-10-08T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.490370 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.490440 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.490459 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.490485 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.490506 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:33Z","lastTransitionTime":"2025-10-08T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.575651 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.575721 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.575721 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:33 crc kubenswrapper[4958]: E1008 06:35:33.575874 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.576052 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:33 crc kubenswrapper[4958]: E1008 06:35:33.576234 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:33 crc kubenswrapper[4958]: E1008 06:35:33.576308 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:33 crc kubenswrapper[4958]: E1008 06:35:33.577077 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.577543 4958 scope.go:117] "RemoveContainer" containerID="5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.593769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.593825 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.593848 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.593873 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.593893 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:33Z","lastTransitionTime":"2025-10-08T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.697256 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.697311 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.697329 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.697354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.697372 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:33Z","lastTransitionTime":"2025-10-08T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.800525 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.800651 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.800728 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.800760 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.800784 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:33Z","lastTransitionTime":"2025-10-08T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.904829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.904932 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.904982 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.905012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:33 crc kubenswrapper[4958]: I1008 06:35:33.905036 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:33Z","lastTransitionTime":"2025-10-08T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.007367 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.007435 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.007458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.007484 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.007505 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:34Z","lastTransitionTime":"2025-10-08T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.110475 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.110516 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.110527 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.110543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.110555 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:34Z","lastTransitionTime":"2025-10-08T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.121124 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/2.log" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.124262 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerStarted","Data":"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73"} Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.124874 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.140283 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d
30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.159006 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.179744 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.198856 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.208923 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.212564 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.212605 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.212617 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.212634 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.212646 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:34Z","lastTransitionTime":"2025-10-08T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.224490 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.240582 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.252735 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.264972 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc 
kubenswrapper[4958]: I1008 06:35:34.282736 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.299566 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.313381 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.315096 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.315158 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.315176 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.315203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.315220 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:34Z","lastTransitionTime":"2025-10-08T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.330221 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d471e3c183709b97571c7a6c1dc430de5a6900f018e7c231cdde2ea6699a9580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:25Z\\\",\\\"message\\\":\\\"2025-10-08T06:34:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624\\\\n2025-10-08T06:34:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624 to /host/opt/cni/bin/\\\\n2025-10-08T06:34:40Z [verbose] multus-daemon started\\\\n2025-10-08T06:34:40Z [verbose] 
Readiness Indicator file check\\\\n2025-10-08T06:35:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.348522 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"nished syncing service openshift on namespace default for network=default : 
16.551µs\\\\nI1008 06:35:03.290123 6597 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry-operator for network=default\\\\nI1008 06:35:03.290127 6597 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1008 06:35:03.290130 6597 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 06:35:03.290134 6597 services_controller.go:360] Finished syncing service image-registry-operator on namespace openshift-image-registry for network=default : 11.3µs\\\\nI1008 06:35:03.290155 6597 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-source for network=default\\\\nI1008 06:35:03.290165 6597 services_controller.go:360] Finished syncing service network-check-source on namespace openshift-network-diagnostics for network=default : 9.97µs\\\\nI1008 06:35:03.290171 6597 ovnkube.go:599] Stopped ovnkube\\\\nI1008 06:35:03.290190 6597 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.234297469 seconds. 
No OVN measurement.\\\\nI1008 06:35:03.290194 6597 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 06:35:03.290287 6597 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:35:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":
\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.361800 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.380103 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06
:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.399718 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\"
,\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.411348 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad858fc2-ef46-4d33-87a8-0c33b7ea97b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2979041e4b3d16f257fced2d640df556490010fc445c5a9a3dc55f85140e857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f0eedcc5121ad6845bb3b2a014ded0bfb69bee54a478f92e12932261bd7db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a820b0e0e391451e24b2442c9a3402574d7c4f7b105eb7a993b4cf243cecb46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.417873 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.417984 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.418010 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.418037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.418085 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:34Z","lastTransitionTime":"2025-10-08T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.520454 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.520522 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.520540 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.520565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.520583 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:34Z","lastTransitionTime":"2025-10-08T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.623484 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.623571 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.623597 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.623627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.623655 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:34Z","lastTransitionTime":"2025-10-08T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.727068 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.727124 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.727141 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.727173 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.727190 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:34Z","lastTransitionTime":"2025-10-08T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.830166 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.830238 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.830261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.830324 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.830348 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:34Z","lastTransitionTime":"2025-10-08T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.893831 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.893887 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.893904 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.893926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.893970 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:34Z","lastTransitionTime":"2025-10-08T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:34 crc kubenswrapper[4958]: E1008 06:35:34.911537 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.916483 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.916544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.916567 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.916596 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.916618 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:34Z","lastTransitionTime":"2025-10-08T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:34 crc kubenswrapper[4958]: E1008 06:35:34.934843 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.939371 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.939416 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.939425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.939442 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.939452 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:34Z","lastTransitionTime":"2025-10-08T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:34 crc kubenswrapper[4958]: E1008 06:35:34.958737 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.964148 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.964203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.964220 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.964243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.964264 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:34Z","lastTransitionTime":"2025-10-08T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:34 crc kubenswrapper[4958]: E1008 06:35:34.979269 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.984677 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.984744 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.984763 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.984793 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:34 crc kubenswrapper[4958]: I1008 06:35:34.984811 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:34Z","lastTransitionTime":"2025-10-08T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:35 crc kubenswrapper[4958]: E1008 06:35:35.004033 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: E1008 06:35:35.004265 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.006665 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.006712 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.006729 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.006751 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.006767 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:35Z","lastTransitionTime":"2025-10-08T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.110311 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.110372 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.110391 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.110416 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.110467 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:35Z","lastTransitionTime":"2025-10-08T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.130924 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/3.log" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.132037 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/2.log" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.136395 4958 generic.go:334] "Generic (PLEG): container finished" podID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerID="efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73" exitCode=1 Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.136444 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerDied","Data":"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73"} Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.136544 4958 scope.go:117] "RemoveContainer" containerID="5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.137741 4958 scope.go:117] "RemoveContainer" containerID="efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73" Oct 08 06:35:35 crc kubenswrapper[4958]: E1008 06:35:35.138055 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.157840 4958 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.179216 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.198727 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.213494 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.213854 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.213885 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.213931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.213977 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:35Z","lastTransitionTime":"2025-10-08T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.215475 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.233050 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.252392 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.271836 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.289431 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.315431 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.317784 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.317855 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.317875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.317899 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.317920 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:35Z","lastTransitionTime":"2025-10-08T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.335436 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.352195 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc 
kubenswrapper[4958]: I1008 06:35:35.383929 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.408400 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 
06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.421014 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.421080 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.421109 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.421191 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.421219 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:35Z","lastTransitionTime":"2025-10-08T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.428507 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad858fc2-ef46-4d33-87a8-0c33b7ea97b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2979041e4b3d16f257fced2d640df556490010fc445c5a9a3dc55f85140e857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f0eedcc5121ad6845bb3b2a014de
d0bfb69bee54a478f92e12932261bd7db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a820b0e0e391451e24b2442c9a3402574d7c4f7b105eb7a993b4cf243cecb46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.448616 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.469648 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d471e3c183709b97571c7a6c1dc430de5a6900f018e7c231cdde2ea6699a9580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:25Z\\\",\\\"message\\\":\\\"2025-10-08T06:34:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624\\\\n2025-10-08T06:34:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624 to /host/opt/cni/bin/\\\\n2025-10-08T06:34:40Z [verbose] multus-daemon started\\\\n2025-10-08T06:34:40Z [verbose] 
Readiness Indicator file check\\\\n2025-10-08T06:35:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.500088 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b1286e48e518d855cbce48310a1adda3decff9002cfa7abce38e66d503d66ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:03Z\\\",\\\"message\\\":\\\"nished syncing service openshift on namespace default for network=default : 
16.551µs\\\\nI1008 06:35:03.290123 6597 services_controller.go:356] Processing sync for service openshift-image-registry/image-registry-operator for network=default\\\\nI1008 06:35:03.290127 6597 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1008 06:35:03.290130 6597 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1008 06:35:03.290134 6597 services_controller.go:360] Finished syncing service image-registry-operator on namespace openshift-image-registry for network=default : 11.3µs\\\\nI1008 06:35:03.290155 6597 services_controller.go:356] Processing sync for service openshift-network-diagnostics/network-check-source for network=default\\\\nI1008 06:35:03.290165 6597 services_controller.go:360] Finished syncing service network-check-source on namespace openshift-network-diagnostics for network=default : 9.97µs\\\\nI1008 06:35:03.290171 6597 ovnkube.go:599] Stopped ovnkube\\\\nI1008 06:35:03.290190 6597 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.234297469 seconds. 
No OVN measurement.\\\\nI1008 06:35:03.290194 6597 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1008 06:35:03.290287 6597 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:35:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"t network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z]\\\\nI1008 06:35:34.616852 6946 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI1008 06:35:34.616851 6946 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.106332005 seconds. 
No OVN measurement.\\\\nI1008 06:35:34.616867 6946 services_controller.go:454] Service openshift-network-diagnostics/network-check-target for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (t\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openv
switch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.516769 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:35Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.524679 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.524941 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.525163 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.525378 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.525570 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:35Z","lastTransitionTime":"2025-10-08T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.575781 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.576136 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:35 crc kubenswrapper[4958]: E1008 06:35:35.576463 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.576657 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:35 crc kubenswrapper[4958]: E1008 06:35:35.576696 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:35 crc kubenswrapper[4958]: E1008 06:35:35.577071 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.577916 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:35 crc kubenswrapper[4958]: E1008 06:35:35.578205 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.629836 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.629906 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.629925 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.629982 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.630002 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:35Z","lastTransitionTime":"2025-10-08T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.732180 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.732557 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.732702 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.733001 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.733185 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:35Z","lastTransitionTime":"2025-10-08T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.836400 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.836697 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.836830 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.837058 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.837211 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:35Z","lastTransitionTime":"2025-10-08T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.940372 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.940420 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.940433 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.940448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:35 crc kubenswrapper[4958]: I1008 06:35:35.940460 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:35Z","lastTransitionTime":"2025-10-08T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.043115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.043437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.043657 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.043861 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.044101 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:36Z","lastTransitionTime":"2025-10-08T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.148065 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.148127 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.148147 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.148172 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.148191 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:36Z","lastTransitionTime":"2025-10-08T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.148809 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/3.log" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.154488 4958 scope.go:117] "RemoveContainer" containerID="efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73" Oct 08 06:35:36 crc kubenswrapper[4958]: E1008 06:35:36.154794 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.173297 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad858fc2-ef46-4d33-87a8-0c33b7ea97b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2979041e4b3d16f257fced2d640df556490010fc445c5a9a3dc55f85140e857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f0eedcc5121ad6845bb3b2a014ded0bfb69bee54a478f92e12932261bd7db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a820b0e0e391451e24b2442c9a3402574d7c4f7b105eb7a993b4cf243cecb46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.192244 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.212822 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d471e3c183709b97571c7a6c1dc430de5a6900f018e7c231cdde2ea6699a9580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:25Z\\\",\\\"message\\\":\\\"2025-10-08T06:34:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624\\\\n2025-10-08T06:34:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624 to /host/opt/cni/bin/\\\\n2025-10-08T06:34:40Z [verbose] multus-daemon started\\\\n2025-10-08T06:34:40Z [verbose] 
Readiness Indicator file check\\\\n2025-10-08T06:35:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.242873 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"t network controller: unable to create admin network policy controller, err: could not 
add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z]\\\\nI1008 06:35:34.616852 6946 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI1008 06:35:34.616851 6946 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.106332005 seconds. 
No OVN measurement.\\\\nI1008 06:35:34.616867 6946 services_controller.go:454] Service openshift-network-diagnostics/network-check-target for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (t\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:35:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnk
ube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://
9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.251199 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.251281 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.251303 
4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.251330 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.251348 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:36Z","lastTransitionTime":"2025-10-08T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.261637 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab
9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.296050 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.319459 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.338413 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.354411 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.354451 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.354465 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.354485 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.354498 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:36Z","lastTransitionTime":"2025-10-08T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.354727 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.372339 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.393423 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.415120 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.436219 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.456559 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.457586 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.457819 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.458015 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.458169 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.458291 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:36Z","lastTransitionTime":"2025-10-08T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.483144 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.499499 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a68
9f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.514218 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc 
kubenswrapper[4958]: I1008 06:35:36.534117 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:36Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.561133 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.561349 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.561457 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.561585 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.561689 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:36Z","lastTransitionTime":"2025-10-08T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.590189 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.664578 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.664629 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.664641 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.664664 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.664677 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:36Z","lastTransitionTime":"2025-10-08T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.767013 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.767093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.767119 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.767149 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.767169 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:36Z","lastTransitionTime":"2025-10-08T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.870299 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.870374 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.870398 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.870423 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.870440 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:36Z","lastTransitionTime":"2025-10-08T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.974039 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.974112 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.974134 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.974205 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:36 crc kubenswrapper[4958]: I1008 06:35:36.974255 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:36Z","lastTransitionTime":"2025-10-08T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.077790 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.078192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.078210 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.078231 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.078252 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:37Z","lastTransitionTime":"2025-10-08T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.180993 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.181057 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.181078 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.181104 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.181121 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:37Z","lastTransitionTime":"2025-10-08T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.283777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.283818 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.283829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.283845 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.283873 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:37Z","lastTransitionTime":"2025-10-08T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.386865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.386934 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.386988 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.387016 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.387034 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:37Z","lastTransitionTime":"2025-10-08T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.490893 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.490993 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.491012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.491036 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.491054 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:37Z","lastTransitionTime":"2025-10-08T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.575587 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.575672 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.575684 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.575706 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:37 crc kubenswrapper[4958]: E1008 06:35:37.575863 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:37 crc kubenswrapper[4958]: E1008 06:35:37.576007 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:37 crc kubenswrapper[4958]: E1008 06:35:37.576137 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:37 crc kubenswrapper[4958]: E1008 06:35:37.576313 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.593831 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.593887 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.593904 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.593927 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.593975 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:37Z","lastTransitionTime":"2025-10-08T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.596720 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.621118 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.681064 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.696911 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc 
kubenswrapper[4958]: I1008 06:35:37.697684 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.697751 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.697777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.697805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.697826 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:37Z","lastTransitionTime":"2025-10-08T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.712077 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d1f2853-b182-41c0-80a0-b74355e55656\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f9d29d167616433e4a0ae16e679fe5f93cbb3e05a7d49c9048b17d124133896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b2c5ba960872b51e64be0592f3eda32f1badee098d560a44bd55b03160df2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02b2c5ba960872b51e64be0592f3eda32f1badee098d560a44bd55b03160df2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.726026 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.743128 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.761014 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.780077 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d471e3c183709b97571c7a6c1dc430de5a6900f018e7c231cdde2ea6699a9580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:25Z\\\",\\\"message\\\":\\\"2025-10-08T06:34:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624\\\\n2025-10-08T06:34:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624 to /host/opt/cni/bin/\\\\n2025-10-08T06:34:40Z [verbose] multus-daemon started\\\\n2025-10-08T06:34:40Z [verbose] 
Readiness Indicator file check\\\\n2025-10-08T06:35:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.801294 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.801499 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.801605 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.801684 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.801710 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:37Z","lastTransitionTime":"2025-10-08T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.804017 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"t network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default 
node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z]\\\\nI1008 06:35:34.616852 6946 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI1008 06:35:34.616851 6946 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.106332005 seconds. No OVN measurement.\\\\nI1008 06:35:34.616867 6946 services_controller.go:454] Service openshift-network-diagnostics/network-check-target for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (t\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:35:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52
b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.820489 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.853066 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06
:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.873229 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\"
,\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.888341 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad858fc2-ef46-4d33-87a8-0c33b7ea97b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2979041e4b3d16f257fced2d640df556490010fc445c5a9a3dc55f85140e857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f0eedcc5121ad6845bb3b2a014ded0bfb69bee54a478f92e12932261bd7db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a820b0e0e391451e24b2442c9a3402574d7c4f7b105eb7a993b4cf243cecb46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.901731 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4
d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.905160 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.905233 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.905251 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:37 crc 
kubenswrapper[4958]: I1008 06:35:37.905276 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.905297 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:37Z","lastTransitionTime":"2025-10-08T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.918777 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.937981 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.957847 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:37 crc kubenswrapper[4958]: I1008 06:35:37.975096 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:37Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.008765 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.008872 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.008966 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.008998 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.009019 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:38Z","lastTransitionTime":"2025-10-08T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.112319 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.112390 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.112409 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.112434 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.112453 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:38Z","lastTransitionTime":"2025-10-08T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.214532 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.214599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.214616 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.214642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.214660 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:38Z","lastTransitionTime":"2025-10-08T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.317688 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.317750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.317766 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.317790 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.317807 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:38Z","lastTransitionTime":"2025-10-08T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.421109 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.421217 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.421236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.421289 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.421308 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:38Z","lastTransitionTime":"2025-10-08T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.525313 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.525393 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.525414 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.525446 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.525468 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:38Z","lastTransitionTime":"2025-10-08T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.628402 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.628450 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.628466 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.628491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.628508 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:38Z","lastTransitionTime":"2025-10-08T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.731520 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.731595 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.731620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.731650 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.731670 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:38Z","lastTransitionTime":"2025-10-08T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.834801 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.834847 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.834865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.834887 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.834903 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:38Z","lastTransitionTime":"2025-10-08T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.938427 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.938495 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.938517 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.938547 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:38 crc kubenswrapper[4958]: I1008 06:35:38.938569 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:38Z","lastTransitionTime":"2025-10-08T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.042190 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.042253 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.042273 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.042298 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.042316 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:39Z","lastTransitionTime":"2025-10-08T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.145496 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.145561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.145581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.145606 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.145623 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:39Z","lastTransitionTime":"2025-10-08T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.249104 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.249169 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.249186 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.249212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.249229 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:39Z","lastTransitionTime":"2025-10-08T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.352070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.352144 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.352165 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.352191 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.352208 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:39Z","lastTransitionTime":"2025-10-08T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.455808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.455889 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.455914 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.455943 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.455991 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:39Z","lastTransitionTime":"2025-10-08T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.558673 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.558734 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.558752 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.558776 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.558793 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:39Z","lastTransitionTime":"2025-10-08T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.576545 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.576579 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.576639 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.576702 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:39 crc kubenswrapper[4958]: E1008 06:35:39.577147 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:39 crc kubenswrapper[4958]: E1008 06:35:39.577449 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:39 crc kubenswrapper[4958]: E1008 06:35:39.577588 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:39 crc kubenswrapper[4958]: E1008 06:35:39.577758 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.662512 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.662580 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.662597 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.662623 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.662639 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:39Z","lastTransitionTime":"2025-10-08T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.765833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.765893 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.765915 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.765987 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.766015 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:39Z","lastTransitionTime":"2025-10-08T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.868875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.868979 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.868997 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.869043 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.869060 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:39Z","lastTransitionTime":"2025-10-08T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.971432 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.971488 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.971507 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.971530 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:39 crc kubenswrapper[4958]: I1008 06:35:39.971546 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:39Z","lastTransitionTime":"2025-10-08T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.074873 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.074937 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.075002 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.075027 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.075045 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:40Z","lastTransitionTime":"2025-10-08T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.177601 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.177676 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.177700 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.177730 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.177754 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:40Z","lastTransitionTime":"2025-10-08T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.280326 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.280423 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.280441 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.280466 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.280483 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:40Z","lastTransitionTime":"2025-10-08T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.384048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.384112 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.384132 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.384158 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.384175 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:40Z","lastTransitionTime":"2025-10-08T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.487129 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.487211 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.487234 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.487264 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.487286 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:40Z","lastTransitionTime":"2025-10-08T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.590089 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.590145 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.590171 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.590216 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.590237 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:40Z","lastTransitionTime":"2025-10-08T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.692693 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.693038 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.693225 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.693358 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.693530 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:40Z","lastTransitionTime":"2025-10-08T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.797192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.797257 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.797274 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.797301 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.797321 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:40Z","lastTransitionTime":"2025-10-08T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.900694 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.900765 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.900785 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.900812 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:40 crc kubenswrapper[4958]: I1008 06:35:40.900829 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:40Z","lastTransitionTime":"2025-10-08T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.004063 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.004130 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.004180 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.004211 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.004234 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:41Z","lastTransitionTime":"2025-10-08T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.106223 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.106282 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.106298 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.106324 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.106340 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:41Z","lastTransitionTime":"2025-10-08T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.209561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.209661 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.209684 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.209774 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.209794 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:41Z","lastTransitionTime":"2025-10-08T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.312411 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.312833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.312872 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.312899 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.312915 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:41Z","lastTransitionTime":"2025-10-08T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.390447 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.390589 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.390693 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.390763 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:45.390721935 +0000 UTC m=+148.520414576 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.390832 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.390880 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.390911 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.390913 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.390934 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.391053 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-08 06:36:45.391025833 +0000 UTC m=+148.520718504 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.391160 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.391183 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.391204 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.391263 4958 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-08 06:36:45.391247609 +0000 UTC m=+148.520940250 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.391322 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.391363 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 06:36:45.391349432 +0000 UTC m=+148.521042073 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.391442 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.391482 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-08 06:36:45.391469435 +0000 UTC m=+148.521162076 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.415809 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.415863 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.415880 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.415905 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.415922 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:41Z","lastTransitionTime":"2025-10-08T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.518497 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.518571 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.518595 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.518629 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.518655 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:41Z","lastTransitionTime":"2025-10-08T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.576286 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.576860 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.576866 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.576976 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.577006 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.577185 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.577276 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:41 crc kubenswrapper[4958]: E1008 06:35:41.577709 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.626138 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.626489 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.627167 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.627228 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.627247 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:41Z","lastTransitionTime":"2025-10-08T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.729788 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.729856 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.729873 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.729898 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.729915 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:41Z","lastTransitionTime":"2025-10-08T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.833011 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.833149 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.833173 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.833205 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.833228 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:41Z","lastTransitionTime":"2025-10-08T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.936140 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.936201 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.936218 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.936243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:41 crc kubenswrapper[4958]: I1008 06:35:41.936261 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:41Z","lastTransitionTime":"2025-10-08T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.039563 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.039646 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.039670 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.039698 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.039717 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:42Z","lastTransitionTime":"2025-10-08T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.143192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.143261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.143284 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.143316 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.143340 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:42Z","lastTransitionTime":"2025-10-08T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.246608 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.246702 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.246722 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.246747 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.246763 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:42Z","lastTransitionTime":"2025-10-08T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.349494 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.349553 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.349572 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.349595 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.349612 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:42Z","lastTransitionTime":"2025-10-08T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.453002 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.453383 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.453581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.453747 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.453916 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:42Z","lastTransitionTime":"2025-10-08T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.557192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.557260 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.557278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.557305 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.557326 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:42Z","lastTransitionTime":"2025-10-08T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.660128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.660446 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.660631 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.660760 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.660877 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:42Z","lastTransitionTime":"2025-10-08T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.764775 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.765351 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.765628 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.765898 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.766202 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:42Z","lastTransitionTime":"2025-10-08T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.869347 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.869714 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.869862 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.870031 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.870178 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:42Z","lastTransitionTime":"2025-10-08T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.972829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.972905 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.972923 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.972976 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:42 crc kubenswrapper[4958]: I1008 06:35:42.972995 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:42Z","lastTransitionTime":"2025-10-08T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.075886 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.076008 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.076038 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.076068 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.076091 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:43Z","lastTransitionTime":"2025-10-08T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.179033 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.179124 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.179142 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.179165 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.179183 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:43Z","lastTransitionTime":"2025-10-08T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.282532 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.282921 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.283152 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.283287 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.283407 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:43Z","lastTransitionTime":"2025-10-08T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.386068 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.386134 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.386151 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.386174 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.386191 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:43Z","lastTransitionTime":"2025-10-08T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.489307 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.489385 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.489409 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.489440 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.489462 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:43Z","lastTransitionTime":"2025-10-08T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.576047 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.576088 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.576186 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:43 crc kubenswrapper[4958]: E1008 06:35:43.576373 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.577132 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:43 crc kubenswrapper[4958]: E1008 06:35:43.577430 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:43 crc kubenswrapper[4958]: E1008 06:35:43.577634 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:43 crc kubenswrapper[4958]: E1008 06:35:43.577740 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.592552 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.592617 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.592640 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.592670 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.592695 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:43Z","lastTransitionTime":"2025-10-08T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.696068 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.696142 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.696166 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.696200 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.696225 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:43Z","lastTransitionTime":"2025-10-08T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.799485 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.799552 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.799570 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.799597 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.799619 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:43Z","lastTransitionTime":"2025-10-08T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.903064 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.903124 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.903174 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.903202 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:43 crc kubenswrapper[4958]: I1008 06:35:43.903222 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:43Z","lastTransitionTime":"2025-10-08T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.006245 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.006311 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.006333 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.006363 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.006387 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:44Z","lastTransitionTime":"2025-10-08T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.110281 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.110371 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.110424 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.110454 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.110469 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:44Z","lastTransitionTime":"2025-10-08T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.214326 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.214383 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.214401 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.214424 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.214441 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:44Z","lastTransitionTime":"2025-10-08T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.318178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.318253 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.318275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.318305 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.318328 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:44Z","lastTransitionTime":"2025-10-08T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.421395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.421453 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.421471 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.421493 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.421509 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:44Z","lastTransitionTime":"2025-10-08T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.525078 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.525125 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.525139 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.525157 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.525168 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:44Z","lastTransitionTime":"2025-10-08T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.669903 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.670013 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.670032 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.670060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.670077 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:44Z","lastTransitionTime":"2025-10-08T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.773162 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.773221 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.773237 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.773259 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.773276 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:44Z","lastTransitionTime":"2025-10-08T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.876771 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.876836 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.876851 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.876884 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.876900 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:44Z","lastTransitionTime":"2025-10-08T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.980471 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.980538 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.980556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.980581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:44 crc kubenswrapper[4958]: I1008 06:35:44.980600 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:44Z","lastTransitionTime":"2025-10-08T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.084895 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.084975 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.084993 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.085018 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.085036 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:45Z","lastTransitionTime":"2025-10-08T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.188845 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.188904 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.188923 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.188974 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.188994 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:45Z","lastTransitionTime":"2025-10-08T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.237640 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.237721 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.237747 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.237776 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.237798 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:45Z","lastTransitionTime":"2025-10-08T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:45 crc kubenswrapper[4958]: E1008 06:35:45.268507 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.278067 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.278127 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.278144 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.278170 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.278188 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:45Z","lastTransitionTime":"2025-10-08T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.321608 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.321657 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.321673 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.321696 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.321715 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:45Z","lastTransitionTime":"2025-10-08T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.342071 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.342149 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.342168 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.342195 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.342216 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:45Z","lastTransitionTime":"2025-10-08T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:45 crc kubenswrapper[4958]: E1008 06:35:45.359157 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.364380 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.364448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.364466 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.364495 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.364519 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:45Z","lastTransitionTime":"2025-10-08T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:45 crc kubenswrapper[4958]: E1008 06:35:45.381272 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:45Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:45 crc kubenswrapper[4958]: E1008 06:35:45.381492 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.383453 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.383520 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.383539 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.383565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.383587 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:45Z","lastTransitionTime":"2025-10-08T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.487341 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.487405 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.487421 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.487446 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.487463 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:45Z","lastTransitionTime":"2025-10-08T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.576123 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.576230 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.576151 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:45 crc kubenswrapper[4958]: E1008 06:35:45.576329 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:45 crc kubenswrapper[4958]: E1008 06:35:45.576478 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.576504 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:45 crc kubenswrapper[4958]: E1008 06:35:45.576599 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:45 crc kubenswrapper[4958]: E1008 06:35:45.576762 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.589841 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.589903 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.589922 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.589943 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.589993 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:45Z","lastTransitionTime":"2025-10-08T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.693597 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.693656 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.693674 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.693697 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.693715 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:45Z","lastTransitionTime":"2025-10-08T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.796886 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.796943 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.796980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.797002 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.797019 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:45Z","lastTransitionTime":"2025-10-08T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.900049 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.900125 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.900152 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.900182 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:45 crc kubenswrapper[4958]: I1008 06:35:45.900286 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:45Z","lastTransitionTime":"2025-10-08T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.002907 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.003024 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.003042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.003068 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.003089 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:46Z","lastTransitionTime":"2025-10-08T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.105931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.106030 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.106056 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.106087 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.106110 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:46Z","lastTransitionTime":"2025-10-08T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.209070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.209134 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.209152 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.209177 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.209194 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:46Z","lastTransitionTime":"2025-10-08T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.312933 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.313018 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.313035 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.313059 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.313077 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:46Z","lastTransitionTime":"2025-10-08T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.415647 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.415706 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.415723 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.415748 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.415765 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:46Z","lastTransitionTime":"2025-10-08T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.519778 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.519837 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.519856 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.519881 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.519901 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:46Z","lastTransitionTime":"2025-10-08T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.622605 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.622667 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.622684 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.622713 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.622736 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:46Z","lastTransitionTime":"2025-10-08T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.726268 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.726328 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.726344 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.726365 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.726385 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:46Z","lastTransitionTime":"2025-10-08T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.829070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.829145 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.829169 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.829200 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.829228 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:46Z","lastTransitionTime":"2025-10-08T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.932815 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.933258 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.933309 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.933335 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:46 crc kubenswrapper[4958]: I1008 06:35:46.933352 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:46Z","lastTransitionTime":"2025-10-08T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.036160 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.036243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.036268 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.036300 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.036325 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:47Z","lastTransitionTime":"2025-10-08T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.139727 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.139792 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.139811 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.139835 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.139853 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:47Z","lastTransitionTime":"2025-10-08T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.243391 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.243777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.243919 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.244111 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.244234 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:47Z","lastTransitionTime":"2025-10-08T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.347902 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.348022 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.348048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.348080 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.348105 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:47Z","lastTransitionTime":"2025-10-08T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.451769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.451833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.451852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.451877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.451896 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:47Z","lastTransitionTime":"2025-10-08T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.555481 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.555546 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.555565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.555592 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.555610 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:47Z","lastTransitionTime":"2025-10-08T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.576157 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.576291 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:47 crc kubenswrapper[4958]: E1008 06:35:47.576354 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.576382 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.576379 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:47 crc kubenswrapper[4958]: E1008 06:35:47.576520 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:47 crc kubenswrapper[4958]: E1008 06:35:47.576624 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:47 crc kubenswrapper[4958]: E1008 06:35:47.576742 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.579001 4958 scope.go:117] "RemoveContainer" containerID="efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73" Oct 08 06:35:47 crc kubenswrapper[4958]: E1008 06:35:47.579262 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.593821 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b5
39a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.614217 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903c
d3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.635437 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.655987 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.659271 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.659313 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.659322 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.659336 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.659345 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:47Z","lastTransitionTime":"2025-10-08T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.675389 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.695737 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.721716 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.739735 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.759126 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc 
kubenswrapper[4958]: I1008 06:35:47.762088 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.762146 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.762163 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.762188 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.762205 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:47Z","lastTransitionTime":"2025-10-08T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.776661 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d1f2853-b182-41c0-80a0-b74355e55656\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f9d29d167616433e4a0ae16e679fe5f93cbb3e05a7d49c9048b17d124133896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02b2c5ba960872b51e64be0592f3eda32f1badee098d560a44bd55b03160df2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02b2c5ba960872b51e64be0592f3eda32f1badee098d560a44bd55b03160df2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.797889 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.819528 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.838783 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.861697 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d471e3c183709b97571c7a6c1dc430de5a6900f018e7c231cdde2ea6699a9580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:25Z\\\",\\\"message\\\":\\\"2025-10-08T06:34:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624\\\\n2025-10-08T06:34:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624 to /host/opt/cni/bin/\\\\n2025-10-08T06:34:40Z [verbose] multus-daemon started\\\\n2025-10-08T06:34:40Z [verbose] 
Readiness Indicator file check\\\\n2025-10-08T06:35:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.865755 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.865804 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.865816 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.865834 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.865845 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:47Z","lastTransitionTime":"2025-10-08T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.893938 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"t network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default 
node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z]\\\\nI1008 06:35:34.616852 6946 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI1008 06:35:34.616851 6946 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.106332005 seconds. No OVN measurement.\\\\nI1008 06:35:34.616867 6946 services_controller.go:454] Service openshift-network-diagnostics/network-check-target for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (t\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:35:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52
b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.910441 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.949050 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06
:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.969655 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.969780 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.969792 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.969829 4958 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.969845 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:47Z","lastTransitionTime":"2025-10-08T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.970852 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\",\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:47 crc kubenswrapper[4958]: I1008 06:35:47.989910 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad858fc2-ef46-4d33-87a8-0c33b7ea97b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2979041e4b3d16f257fced2d640df556490010fc445c5a9a3dc55f85140e857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f0eedcc5121ad6845bb3b2a014ded0bfb69bee54a478f92e12932261bd7db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a820b0e0e391451e24b2442c9a3402574d7c4f7b105eb7a993b4cf243cecb46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-0
8T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:47Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.073486 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.073553 4958 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.073573 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.073599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.073615 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:48Z","lastTransitionTime":"2025-10-08T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.175919 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.176003 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.176020 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.176044 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.176063 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:48Z","lastTransitionTime":"2025-10-08T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.278735 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.278805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.278824 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.278852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.278870 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:48Z","lastTransitionTime":"2025-10-08T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.381687 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.381738 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.381754 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.381777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.381794 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:48Z","lastTransitionTime":"2025-10-08T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.485187 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.485283 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.485310 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.485344 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.485368 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:48Z","lastTransitionTime":"2025-10-08T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.588735 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.588808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.588831 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.588860 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.588881 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:48Z","lastTransitionTime":"2025-10-08T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.692414 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.692488 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.692504 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.692529 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.692546 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:48Z","lastTransitionTime":"2025-10-08T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.795057 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.795123 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.795142 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.795168 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.795186 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:48Z","lastTransitionTime":"2025-10-08T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.898441 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.898515 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.898538 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.898570 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:48 crc kubenswrapper[4958]: I1008 06:35:48.898593 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:48Z","lastTransitionTime":"2025-10-08T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.001899 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.001998 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.002015 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.002042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.002061 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:49Z","lastTransitionTime":"2025-10-08T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.104858 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.104916 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.104936 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.104996 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.105017 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:49Z","lastTransitionTime":"2025-10-08T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.208530 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.208572 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.208588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.208610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.208627 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:49Z","lastTransitionTime":"2025-10-08T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.313766 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.313834 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.313852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.313878 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.313896 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:49Z","lastTransitionTime":"2025-10-08T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.416996 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.417045 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.417057 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.417074 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.417089 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:49Z","lastTransitionTime":"2025-10-08T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.519479 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.519518 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.519529 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.519544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.519553 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:49Z","lastTransitionTime":"2025-10-08T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.576505 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.576626 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.576644 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:49 crc kubenswrapper[4958]: E1008 06:35:49.576755 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:49 crc kubenswrapper[4958]: E1008 06:35:49.576893 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.576939 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:49 crc kubenswrapper[4958]: E1008 06:35:49.577087 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:49 crc kubenswrapper[4958]: E1008 06:35:49.577297 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.621970 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.622026 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.622044 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.622069 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.622088 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:49Z","lastTransitionTime":"2025-10-08T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.728692 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.728769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.728789 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.728814 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.728831 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:49Z","lastTransitionTime":"2025-10-08T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.831888 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.831983 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.832002 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.832031 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.832049 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:49Z","lastTransitionTime":"2025-10-08T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.934905 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.935001 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.935026 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.935055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:49 crc kubenswrapper[4958]: I1008 06:35:49.935072 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:49Z","lastTransitionTime":"2025-10-08T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.037762 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.037828 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.037846 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.037869 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.037885 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:50Z","lastTransitionTime":"2025-10-08T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.140627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.140684 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.140700 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.140725 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.140743 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:50Z","lastTransitionTime":"2025-10-08T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.244279 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.244328 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.244345 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.244367 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.244384 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:50Z","lastTransitionTime":"2025-10-08T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.347409 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.347461 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.347478 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.347502 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.347521 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:50Z","lastTransitionTime":"2025-10-08T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.451498 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.451565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.451584 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.451609 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.451627 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:50Z","lastTransitionTime":"2025-10-08T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.555025 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.555093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.555114 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.555139 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.555157 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:50Z","lastTransitionTime":"2025-10-08T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.658674 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.658763 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.658786 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.658816 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.658836 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:50Z","lastTransitionTime":"2025-10-08T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.762363 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.762447 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.762468 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.762494 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.762511 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:50Z","lastTransitionTime":"2025-10-08T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.865691 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.865764 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.865785 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.865819 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.865839 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:50Z","lastTransitionTime":"2025-10-08T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.969289 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.969348 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.969365 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.969390 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:50 crc kubenswrapper[4958]: I1008 06:35:50.969409 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:50Z","lastTransitionTime":"2025-10-08T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.072812 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.072858 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.072872 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.072889 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.072902 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:51Z","lastTransitionTime":"2025-10-08T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.175759 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.175821 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.175840 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.175869 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.175891 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:51Z","lastTransitionTime":"2025-10-08T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.278686 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.278750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.278768 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.278796 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.278815 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:51Z","lastTransitionTime":"2025-10-08T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.382207 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.382261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.382278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.382299 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.382315 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:51Z","lastTransitionTime":"2025-10-08T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.486616 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.486685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.486711 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.486739 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.486759 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:51Z","lastTransitionTime":"2025-10-08T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.575712 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.575773 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.575783 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:51 crc kubenswrapper[4958]: E1008 06:35:51.575881 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.575922 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:51 crc kubenswrapper[4958]: E1008 06:35:51.576091 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:51 crc kubenswrapper[4958]: E1008 06:35:51.576269 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:51 crc kubenswrapper[4958]: E1008 06:35:51.576333 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.589687 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.589750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.589767 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.589790 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.589807 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:51Z","lastTransitionTime":"2025-10-08T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.693534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.693582 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.693599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.693623 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.693639 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:51Z","lastTransitionTime":"2025-10-08T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.796166 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.796227 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.796248 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.796275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.796296 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:51Z","lastTransitionTime":"2025-10-08T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.898760 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.898830 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.898854 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.898884 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:51 crc kubenswrapper[4958]: I1008 06:35:51.898905 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:51Z","lastTransitionTime":"2025-10-08T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.001773 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.001871 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.001896 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.001924 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.001997 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:52Z","lastTransitionTime":"2025-10-08T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.106035 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.106113 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.106136 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.106163 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.106186 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:52Z","lastTransitionTime":"2025-10-08T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.209081 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.209312 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.209343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.209423 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.209445 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:52Z","lastTransitionTime":"2025-10-08T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.313052 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.313139 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.313162 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.313195 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.313220 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:52Z","lastTransitionTime":"2025-10-08T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.416869 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.417002 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.417029 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.417060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.417082 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:52Z","lastTransitionTime":"2025-10-08T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.520117 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.520183 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.520200 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.520225 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.520246 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:52Z","lastTransitionTime":"2025-10-08T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.622837 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.622910 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.622934 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.622998 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.623023 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:52Z","lastTransitionTime":"2025-10-08T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.726438 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.726481 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.726497 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.726520 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.726536 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:52Z","lastTransitionTime":"2025-10-08T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.828932 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.829024 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.829047 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.829076 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.829097 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:52Z","lastTransitionTime":"2025-10-08T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.932717 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.932796 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.932820 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.932851 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:52 crc kubenswrapper[4958]: I1008 06:35:52.932868 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:52Z","lastTransitionTime":"2025-10-08T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.035163 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.035225 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.035244 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.035271 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.035287 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:53Z","lastTransitionTime":"2025-10-08T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.138651 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.138710 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.138731 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.138755 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.138771 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:53Z","lastTransitionTime":"2025-10-08T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.241609 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.241669 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.241685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.241709 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.241727 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:53Z","lastTransitionTime":"2025-10-08T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.343536 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.343587 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.343604 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.343625 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.343642 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:53Z","lastTransitionTime":"2025-10-08T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.446458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.446526 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.446543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.446568 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.446586 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:53Z","lastTransitionTime":"2025-10-08T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.548769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.548810 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.548822 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.548838 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.548851 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:53Z","lastTransitionTime":"2025-10-08T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.575505 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.575572 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:53 crc kubenswrapper[4958]: E1008 06:35:53.575736 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.575807 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.575832 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:53 crc kubenswrapper[4958]: E1008 06:35:53.576025 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:53 crc kubenswrapper[4958]: E1008 06:35:53.576269 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:53 crc kubenswrapper[4958]: E1008 06:35:53.576423 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.676588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.676675 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.676697 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.676728 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.676749 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:53Z","lastTransitionTime":"2025-10-08T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.779317 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.779352 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.779360 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.779373 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.779382 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:53Z","lastTransitionTime":"2025-10-08T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.882481 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.882535 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.882551 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.882573 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.882590 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:53Z","lastTransitionTime":"2025-10-08T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.986150 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.986222 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.986260 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.986291 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:53 crc kubenswrapper[4958]: I1008 06:35:53.986314 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:53Z","lastTransitionTime":"2025-10-08T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.088778 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.088836 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.088859 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.088888 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.088911 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:54Z","lastTransitionTime":"2025-10-08T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.192423 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.192503 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.192520 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.192543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.192593 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:54Z","lastTransitionTime":"2025-10-08T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.295666 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.295714 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.295730 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.295753 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.295770 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:54Z","lastTransitionTime":"2025-10-08T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.399094 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.399150 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.399167 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.399192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.399210 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:54Z","lastTransitionTime":"2025-10-08T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.502619 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.502703 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.502721 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.502744 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.502760 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:54Z","lastTransitionTime":"2025-10-08T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.606070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.606152 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.606182 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.606209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.606229 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:54Z","lastTransitionTime":"2025-10-08T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.709083 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.709148 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.709165 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.709193 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.709210 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:54Z","lastTransitionTime":"2025-10-08T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.812877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.812974 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.812992 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.813016 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.813034 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:54Z","lastTransitionTime":"2025-10-08T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.915937 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.916017 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.916034 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.916055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:54 crc kubenswrapper[4958]: I1008 06:35:54.916071 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:54Z","lastTransitionTime":"2025-10-08T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.019225 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.019292 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.019315 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.019340 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.019361 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:55Z","lastTransitionTime":"2025-10-08T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.121920 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.122024 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.122046 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.122078 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.122098 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:55Z","lastTransitionTime":"2025-10-08T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.224627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.224690 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.224715 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.224746 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.224768 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:55Z","lastTransitionTime":"2025-10-08T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.328895 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.328982 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.329003 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.329032 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.329048 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:55Z","lastTransitionTime":"2025-10-08T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.401458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.401532 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.401550 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.401575 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.401592 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:55Z","lastTransitionTime":"2025-10-08T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:55 crc kubenswrapper[4958]: E1008 06:35:55.423056 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:55Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.428519 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.428588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.428606 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.428630 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.428649 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:55Z","lastTransitionTime":"2025-10-08T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:55 crc kubenswrapper[4958]: E1008 06:35:55.446870 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:55Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.451683 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.451730 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.451742 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.451763 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.451778 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:55Z","lastTransitionTime":"2025-10-08T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.475128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.475175 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.475188 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.475205 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.475218 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:55Z","lastTransitionTime":"2025-10-08T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:55 crc kubenswrapper[4958]: E1008 06:35:55.492716 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:55Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.493538 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs\") pod \"network-metrics-daemon-xbfbp\" (UID: \"3776a5a1-bd0d-42af-9226-7251ee6b8788\") " pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:55 crc kubenswrapper[4958]: E1008 06:35:55.493697 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:35:55 crc kubenswrapper[4958]: E1008 06:35:55.493772 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs podName:3776a5a1-bd0d-42af-9226-7251ee6b8788 nodeName:}" failed. No retries permitted until 2025-10-08 06:36:59.493752169 +0000 UTC m=+162.623444780 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs") pod "network-metrics-daemon-xbfbp" (UID: "3776a5a1-bd0d-42af-9226-7251ee6b8788") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.499294 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.499368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.499387 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.499413 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.499429 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:55Z","lastTransitionTime":"2025-10-08T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:55 crc kubenswrapper[4958]: E1008 06:35:55.521367 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a960fe83-15f0-406b-ba19-f1536e6d71a9\\\",\\\"systemUUID\\\":\\\"0c77a4e0-051d-4f88-8d47-c213d2d11d87\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:55Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:55 crc kubenswrapper[4958]: E1008 06:35:55.521590 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.523614 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.523668 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.523688 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.523712 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.523729 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:55Z","lastTransitionTime":"2025-10-08T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.576315 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.576368 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:55 crc kubenswrapper[4958]: E1008 06:35:55.576468 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.576500 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.576592 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:55 crc kubenswrapper[4958]: E1008 06:35:55.576683 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:55 crc kubenswrapper[4958]: E1008 06:35:55.577315 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:55 crc kubenswrapper[4958]: E1008 06:35:55.577479 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.626040 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.626114 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.626135 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.626160 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.626205 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:55Z","lastTransitionTime":"2025-10-08T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.729247 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.729323 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.729346 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.729413 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.729437 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:55Z","lastTransitionTime":"2025-10-08T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.833212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.833296 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.833323 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.833355 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.833382 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:55Z","lastTransitionTime":"2025-10-08T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.936450 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.936519 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.936536 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.936565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:55 crc kubenswrapper[4958]: I1008 06:35:55.936585 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:55Z","lastTransitionTime":"2025-10-08T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.039471 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.039567 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.039591 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.039633 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.039663 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:56Z","lastTransitionTime":"2025-10-08T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.143255 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.143344 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.143369 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.143405 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.143427 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:56Z","lastTransitionTime":"2025-10-08T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.247074 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.247144 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.247165 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.247193 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.247214 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:56Z","lastTransitionTime":"2025-10-08T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.350325 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.350382 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.350399 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.350427 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.350445 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:56Z","lastTransitionTime":"2025-10-08T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.453978 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.454019 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.454028 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.454045 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.454056 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:56Z","lastTransitionTime":"2025-10-08T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.556918 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.557013 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.557032 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.557059 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.557081 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:56Z","lastTransitionTime":"2025-10-08T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.660308 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.660379 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.660405 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.660435 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.660457 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:56Z","lastTransitionTime":"2025-10-08T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.764136 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.764194 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.764212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.764237 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.764255 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:56Z","lastTransitionTime":"2025-10-08T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.867022 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.867090 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.867111 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.867142 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.867164 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:56Z","lastTransitionTime":"2025-10-08T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.969718 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.969765 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.969782 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.969802 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:56 crc kubenswrapper[4958]: I1008 06:35:56.969817 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:56Z","lastTransitionTime":"2025-10-08T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.072728 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.072799 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.072821 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.072850 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.072874 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:57Z","lastTransitionTime":"2025-10-08T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.175873 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.175978 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.176003 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.176031 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.176053 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:57Z","lastTransitionTime":"2025-10-08T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.278900 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.279012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.279035 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.279067 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.279091 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:57Z","lastTransitionTime":"2025-10-08T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.382535 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.382628 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.382647 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.382671 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.382689 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:57Z","lastTransitionTime":"2025-10-08T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.486168 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.486233 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.486250 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.486275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.486319 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:57Z","lastTransitionTime":"2025-10-08T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.576201 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.576331 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:57 crc kubenswrapper[4958]: E1008 06:35:57.576586 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.576621 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:57 crc kubenswrapper[4958]: E1008 06:35:57.576727 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.576637 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:57 crc kubenswrapper[4958]: E1008 06:35:57.576782 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:57 crc kubenswrapper[4958]: E1008 06:35:57.576884 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.589621 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.589678 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.589695 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.589717 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.589734 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:57Z","lastTransitionTime":"2025-10-08T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.594459 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lvkj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db7be8dd-af57-4e4c-bd7a-333a42b796bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://173b1cc060534a192c68ada7d95ab9b165a013fc7fdfb9806e25cfab332005db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9twqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lvkj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.626767 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0462d99-5d7e-4730-b42b-8bb3913ace10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e9cc1dfdbdf84903d1ce36d18e8ed15dc1774d3e36836973f2162e760f09f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13249713db81d7924f6c805d3991883b11eafb06d2613522e51427524d9b8f0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af51df5c09afcac76128205e849372fd11397cc6d3c3a315e904beb153f90dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e16021449973971efe4333f3644be810f500ff62d489575140c82812db9def4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1231b42adcba1c2dbb750871f269c64fb469e174281869fb90c2ca006cb2142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ae39cf898c83a450c03e817e2b85ba472aec93d141182701f5c9d9262d93c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65f539cb7a46646f1ca021774d6ed4add3a83955f592fe382e75abf4b3bcf2ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c54
f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c54f5ac6feb80e9ee4c7690c74b264b1d6e9fdb3d0c74f5f3c990ea2728326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.648747 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25354db7-0d6a-456e-a579-9fc3eb413bc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d55e6ca9f64474b022f474927c2f558ed469eb55bea3a208c690bb82719f021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec753eeb37f1b400618c0d4ce021b2ca4c7d42aa894d36005672f061edd1b728\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa1e741171b6d078b4414909b302ac37c5e88fe935a072412d6dd81e39a456f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2af3d36178b9b50020392001bb06064a20ec090277e0d00ad7abc8944f41f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b1b89a1a98e422ad8fbddd850cf3081c0cf90c256a97d9485da0e9940a7e9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-08T06:34:36Z\\\"
,\\\"message\\\":\\\"008 06:34:36.674686 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1008 06:34:36.674690 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1008 06:34:36.674693 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1008 06:34:36.674877 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1008 06:34:36.679633 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759905260\\\\\\\\\\\\\\\" (2025-10-08 06:34:20 +0000 UTC to 2025-11-07 06:34:21 +0000 UTC (now=2025-10-08 06:34:36.67959373 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679851 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1759905271\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1759905271\\\\\\\\\\\\\\\" (2025-10-08 05:34:31 +0000 UTC to 2026-10-08 05:34:31 +0000 UTC (now=2025-10-08 06:34:36.679823356 +0000 UTC))\\\\\\\"\\\\nI1008 06:34:36.679879 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1008 06:34:36.679907 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1008 06:34:36.679970 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1008 06:34:36.679999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1008 06:34:36.680044 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3923493681/tls.crt::/tmp/serving-cert-3923493681/tls.key\\\\\\\"\\\\nI1008 06:34:36.680171 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF1008 06:34:36.681356 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00625701f1cfb63005e45b59f4944c97c9a738a33160dad899727e4cd01c50de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c8269620ceb29f8c279a620e95784ea1bc2312ee983e57a56ad055fec164d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.667124 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad858fc2-ef46-4d33-87a8-0c33b7ea97b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2979041e4b3d16f257fced2d640df556490010fc445c5a9a3dc55f85140e857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5f0eedcc5121ad6845bb3b2a014ded0bfb69bee54a478f92e12932261bd7db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a820b0e0e391451e24b2442c9a3402574d7c4f7b105eb7a993b4cf243cecb46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e9ea42e47a2943a340cfde92d17a6e60123fcb7635053ad9a9d2584d5d2c2756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.686702 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.694938 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.695095 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.695119 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:57 crc 
kubenswrapper[4958]: I1008 06:35:57.695142 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.695202 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:57Z","lastTransitionTime":"2025-10-08T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.707934 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hfzs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0718b244-4835-4551-9013-6b3741845bb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d471e3c183709b97571c7a6c1dc430de5a6900f018e7c231cdde2ea6699a9580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:25Z\\\",\\\"message\\\":\\\"2025-10-08T06:34:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624\\\\n2025-10-08T06:34:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b694ee8c-a387-41de-898a-5cc384846624 to /host/opt/cni/bin/\\\\n2025-10-08T06:34:40Z [verbose] multus-daemon started\\\\n2025-10-08T06:34:40Z [verbose] Readiness Indicator file check\\\\n2025-10-08T06:35:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fqskl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hfzs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.743304 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"272f74a5-c381-4909-b8a9-da60cbd17ddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-08T06:35:34Z\\\",\\\"message\\\":\\\"t network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default 
node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:34Z is after 2025-08-24T17:21:41Z]\\\\nI1008 06:35:34.616852 6946 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI1008 06:35:34.616851 6946 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.106332005 seconds. No OVN measurement.\\\\nI1008 06:35:34.616867 6946 services_controller.go:454] Service openshift-network-diagnostics/network-check-target for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (t\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-08T06:35:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9de5bd802a24129c52
b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-29trw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89qtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.761147 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2283bb2e-47bc-4d23-8637-d40d1d1d93b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4afcde7abbd27a401a9bbc280acd6babaf80c0a85b376b2737b16b239a9af862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d98c4860c283cee956e864a850295c3b658998e38790c1e7851f2f428f0c30cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af29b119e5febf810a8843f27cbd5d96d02fe2973c68dc20e79a04510119f6dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://588cdcc17e6619e1eff07957294df76822b855568b777eba58bebbbc4ed169d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.779494 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ede6a9fc3525b01c7001de8551cd45683fb56f7dd14747bdfae26d9ccd9e1488\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.797355 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4730982305df161885a84df91daf23ef2272c2aed3d18a6b7782897b561d914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.798929 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.798984 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.798996 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.799014 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.799028 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:57Z","lastTransitionTime":"2025-10-08T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.811111 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-h5npb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1fde57-77a2-4e1a-a9f3-69b24a28d9f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5960bcc4d304fef60d56e313bb0db1ff037d30ea31337c8a46afd42e7b539a66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlnp\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-h5npb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.827084 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9e6284b-565d-4277-9ebf-62d3623b249b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ddf998df6408caed4d5ddcab6853ff38622943e6e7f0495617505fd8ed20b4\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-96q8j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qd84r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.841385 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3776a5a1-bd0d-42af-9226-7251ee6b8788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ds25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbfbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc 
kubenswrapper[4958]: I1008 06:35:57.857177 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d1f2853-b182-41c0-80a0-b74355e55656\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f9d29d167616433e4a0ae16e679fe5f93cbb3e05a7d49c9048b17d124133896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://02b2c5ba960872b51e64be0592f3eda32f1badee098d560a44bd55b03160df2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02b2c5ba960872b51e64be0592f3eda32f1badee098d560a44bd55b03160df2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.876283 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.895024 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bdae1190e4dfcdbf7d1822721d43585093fec1d3bdeaf7fa3e1bf71d381f9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afc5e3bd0a0848377808350882860a205f705cdbf3f6d6154d83a938915b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.903186 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.903225 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.903233 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.903247 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.903256 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:57Z","lastTransitionTime":"2025-10-08T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.912458 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.934559 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-62gnw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f9eac3-839a-4b10-9668-a63915d5fe90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1705681a9670430c3c50409f1f616008ea32aff7d04cb7db99b7a9e4467cfc77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86d258aa76a0768b1f6b996cd75fe441f9b7513a0e8ee9625ae4129b94f29cf5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7826ddb726491c0b2ebc2a9ee579008334e33b805b8f7edc65c25718f34462\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ce5749b26c822f8f3e8aa7caf99d839f118e9fae066c041d3a752232ec63af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf34
64782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf3464782d8142a24384869d629673195977d2534e7c2ea8de4ec562ed5e1ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e283337c642faf2052a36be8b763ac5942accbb54c83b4ddd371319282009292\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d275a17ab25ef817aaa9d29f3791abf2aa8cf8888139adebd2e3a43b9589a5d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-08T06:34:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-08T06:34:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtddh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:37Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-62gnw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:57 crc kubenswrapper[4958]: I1008 06:35:57.947433 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82c7807-9153-4a51-a7c5-b83991a177e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-08T06:34:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65e9eeac82da0ca322283f30ea9fefdca191ef035f5ce4afd9cdb36fcd1c90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e461ed5a2c78c912f54f5633601208afe4a689f389339e49c03a0e31ed881c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-08T06:34:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fgbvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-08T06:34:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xrpdr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-08T06:35:57Z is after 2025-08-24T17:21:41Z" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.006010 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.006080 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.006104 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.006133 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.006154 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:58Z","lastTransitionTime":"2025-10-08T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.109170 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.110108 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.110158 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.110343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.110440 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:58Z","lastTransitionTime":"2025-10-08T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.214565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.214613 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.214631 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.214655 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.214672 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:58Z","lastTransitionTime":"2025-10-08T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.318617 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.318680 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.318697 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.318724 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.319920 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:58Z","lastTransitionTime":"2025-10-08T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.422599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.422647 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.422664 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.422687 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.422703 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:58Z","lastTransitionTime":"2025-10-08T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.525282 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.525342 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.525359 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.525385 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.525403 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:58Z","lastTransitionTime":"2025-10-08T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.629424 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.629487 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.629510 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.629542 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.629565 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:58Z","lastTransitionTime":"2025-10-08T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.732463 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.732526 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.732546 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.732574 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.732594 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:58Z","lastTransitionTime":"2025-10-08T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.835463 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.835526 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.835544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.835570 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.835587 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:58Z","lastTransitionTime":"2025-10-08T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.937685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.937742 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.937763 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.937791 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:58 crc kubenswrapper[4958]: I1008 06:35:58.937808 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:58Z","lastTransitionTime":"2025-10-08T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.040581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.040648 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.040667 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.040690 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.040706 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:59Z","lastTransitionTime":"2025-10-08T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.143690 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.143758 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.143782 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.143811 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.143835 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:59Z","lastTransitionTime":"2025-10-08T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.247291 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.247347 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.247360 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.247378 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.247390 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:59Z","lastTransitionTime":"2025-10-08T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.350582 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.350682 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.350705 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.350735 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.350757 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:59Z","lastTransitionTime":"2025-10-08T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.454277 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.454347 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.454366 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.454399 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.454422 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:59Z","lastTransitionTime":"2025-10-08T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.557739 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.557817 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.557842 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.557870 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.557893 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:59Z","lastTransitionTime":"2025-10-08T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.576276 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.576422 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:35:59 crc kubenswrapper[4958]: E1008 06:35:59.576492 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.576560 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:35:59 crc kubenswrapper[4958]: E1008 06:35:59.576737 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.576846 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:35:59 crc kubenswrapper[4958]: E1008 06:35:59.576921 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:35:59 crc kubenswrapper[4958]: E1008 06:35:59.577185 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.660484 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.660546 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.660555 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.660572 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.660581 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:59Z","lastTransitionTime":"2025-10-08T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.764386 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.764443 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.764460 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.764485 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.764502 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:59Z","lastTransitionTime":"2025-10-08T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.868048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.868105 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.868123 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.868147 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.868164 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:59Z","lastTransitionTime":"2025-10-08T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.970734 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.970809 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.970833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.970865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:35:59 crc kubenswrapper[4958]: I1008 06:35:59.970887 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:35:59Z","lastTransitionTime":"2025-10-08T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.074160 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.074232 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.074260 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.074294 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.074318 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:00Z","lastTransitionTime":"2025-10-08T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.177505 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.177565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.177585 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.177609 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.177626 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:00Z","lastTransitionTime":"2025-10-08T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.280337 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.280413 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.280436 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.280461 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.280478 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:00Z","lastTransitionTime":"2025-10-08T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.384088 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.384150 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.384169 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.384199 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.384223 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:00Z","lastTransitionTime":"2025-10-08T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.487106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.487176 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.487194 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.487220 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.487240 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:00Z","lastTransitionTime":"2025-10-08T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.590824 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.590891 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.590909 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.590934 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.590986 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:00Z","lastTransitionTime":"2025-10-08T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.694372 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.694435 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.694457 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.694633 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.694650 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:00Z","lastTransitionTime":"2025-10-08T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.798280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.798327 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.798345 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.798367 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.798384 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:00Z","lastTransitionTime":"2025-10-08T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.901353 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.901429 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.901446 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.901473 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:00 crc kubenswrapper[4958]: I1008 06:36:00.901491 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:00Z","lastTransitionTime":"2025-10-08T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.005607 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.005729 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.005752 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.005787 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.005808 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:01Z","lastTransitionTime":"2025-10-08T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.108504 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.108565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.108583 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.108612 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.108630 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:01Z","lastTransitionTime":"2025-10-08T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.211465 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.211521 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.211543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.211570 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.211590 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:01Z","lastTransitionTime":"2025-10-08T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.314162 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.314230 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.314253 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.314275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.314292 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:01Z","lastTransitionTime":"2025-10-08T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.417345 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.417410 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.417436 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.417463 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.417480 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:01Z","lastTransitionTime":"2025-10-08T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.520054 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.520106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.520122 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.520143 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.520159 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:01Z","lastTransitionTime":"2025-10-08T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.575874 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.576000 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:01 crc kubenswrapper[4958]: E1008 06:36:01.576281 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.576328 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:01 crc kubenswrapper[4958]: E1008 06:36:01.576476 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.576360 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:01 crc kubenswrapper[4958]: E1008 06:36:01.577219 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:01 crc kubenswrapper[4958]: E1008 06:36:01.578449 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.584122 4958 scope.go:117] "RemoveContainer" containerID="efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73" Oct 08 06:36:01 crc kubenswrapper[4958]: E1008 06:36:01.585289 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.622875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.623112 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.623288 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.623559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.623735 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:01Z","lastTransitionTime":"2025-10-08T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.726436 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.726487 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.726504 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.726525 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.726542 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:01Z","lastTransitionTime":"2025-10-08T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.829398 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.829437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.829452 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.829472 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.829487 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:01Z","lastTransitionTime":"2025-10-08T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.932276 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.932321 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.932336 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.932356 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:01 crc kubenswrapper[4958]: I1008 06:36:01.932372 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:01Z","lastTransitionTime":"2025-10-08T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.035693 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.035749 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.035771 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.035798 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.035818 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:02Z","lastTransitionTime":"2025-10-08T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.139266 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.139327 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.139351 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.139377 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.139398 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:02Z","lastTransitionTime":"2025-10-08T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.242441 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.242502 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.242518 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.242542 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.242562 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:02Z","lastTransitionTime":"2025-10-08T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.345913 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.346006 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.346031 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.346061 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.346079 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:02Z","lastTransitionTime":"2025-10-08T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.449203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.449279 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.449296 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.449318 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.449334 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:02Z","lastTransitionTime":"2025-10-08T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.552333 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.552388 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.552404 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.552426 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.552443 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:02Z","lastTransitionTime":"2025-10-08T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.655840 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.655898 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.655922 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.655975 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.655995 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:02Z","lastTransitionTime":"2025-10-08T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.759128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.759191 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.759209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.759234 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.759252 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:02Z","lastTransitionTime":"2025-10-08T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.862496 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.862547 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.862564 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.862588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.862605 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:02Z","lastTransitionTime":"2025-10-08T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.966033 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.966090 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.966106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.966128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:02 crc kubenswrapper[4958]: I1008 06:36:02.966146 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:02Z","lastTransitionTime":"2025-10-08T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.069541 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.069591 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.069607 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.069631 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.069650 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:03Z","lastTransitionTime":"2025-10-08T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.173146 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.173222 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.173245 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.173275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.173371 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:03Z","lastTransitionTime":"2025-10-08T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.275709 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.275764 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.275782 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.275808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.275826 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:03Z","lastTransitionTime":"2025-10-08T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.380708 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.380761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.380779 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.380802 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.380862 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:03Z","lastTransitionTime":"2025-10-08T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.483599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.483646 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.483658 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.483674 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.483686 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:03Z","lastTransitionTime":"2025-10-08T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.576319 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.576420 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:03 crc kubenswrapper[4958]: E1008 06:36:03.576503 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:03 crc kubenswrapper[4958]: E1008 06:36:03.576628 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.576345 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:03 crc kubenswrapper[4958]: E1008 06:36:03.576743 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.576835 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:03 crc kubenswrapper[4958]: E1008 06:36:03.576917 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.586666 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.586711 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.586722 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.586738 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.586750 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:03Z","lastTransitionTime":"2025-10-08T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.689762 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.689836 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.689854 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.690386 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.690451 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:03Z","lastTransitionTime":"2025-10-08T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.794320 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.794379 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.794397 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.794422 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.794440 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:03Z","lastTransitionTime":"2025-10-08T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.898996 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.899073 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.899099 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.899130 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:03 crc kubenswrapper[4958]: I1008 06:36:03.899153 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:03Z","lastTransitionTime":"2025-10-08T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.002502 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.002568 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.002579 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.002620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.002632 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:04Z","lastTransitionTime":"2025-10-08T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.105924 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.106020 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.106044 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.106074 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.106097 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:04Z","lastTransitionTime":"2025-10-08T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.209530 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.209600 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.209621 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.209699 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.209724 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:04Z","lastTransitionTime":"2025-10-08T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.313110 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.313163 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.313175 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.313194 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.313209 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:04Z","lastTransitionTime":"2025-10-08T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.418633 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.418688 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.418708 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.418738 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.418762 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:04Z","lastTransitionTime":"2025-10-08T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.522176 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.522238 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.522256 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.522284 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.522303 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:04Z","lastTransitionTime":"2025-10-08T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.625594 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.625671 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.625692 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.625716 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.625739 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:04Z","lastTransitionTime":"2025-10-08T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.729370 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.729444 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.729464 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.729491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.729512 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:04Z","lastTransitionTime":"2025-10-08T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.833580 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.833652 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.833669 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.833694 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.833712 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:04Z","lastTransitionTime":"2025-10-08T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.936987 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.937039 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.937050 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.937069 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:04 crc kubenswrapper[4958]: I1008 06:36:04.937081 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:04Z","lastTransitionTime":"2025-10-08T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.039579 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.039643 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.039660 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.039685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.039705 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:05Z","lastTransitionTime":"2025-10-08T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.142312 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.142426 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.142450 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.142486 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.142507 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:05Z","lastTransitionTime":"2025-10-08T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.245730 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.245781 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.245841 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.245865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.245882 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:05Z","lastTransitionTime":"2025-10-08T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.349069 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.349111 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.349120 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.349134 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.349145 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:05Z","lastTransitionTime":"2025-10-08T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.452637 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.452685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.452694 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.452709 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.452719 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:05Z","lastTransitionTime":"2025-10-08T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.556198 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.556268 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.556292 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.556325 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.556348 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:05Z","lastTransitionTime":"2025-10-08T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.575838 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.575911 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:05 crc kubenswrapper[4958]: E1008 06:36:05.575998 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.576034 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.576032 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:05 crc kubenswrapper[4958]: E1008 06:36:05.576117 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:05 crc kubenswrapper[4958]: E1008 06:36:05.576201 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:05 crc kubenswrapper[4958]: E1008 06:36:05.576241 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.659580 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.659719 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.659747 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.659772 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.659789 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:05Z","lastTransitionTime":"2025-10-08T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.746261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.746324 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.746346 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.746377 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.746402 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T06:36:05Z","lastTransitionTime":"2025-10-08T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.825010 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l"] Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.825725 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.828892 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.830730 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.834462 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.836304 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.872582 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.872551375 podStartE2EDuration="1m28.872551375s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:05.872502283 +0000 UTC m=+109.002194944" watchObservedRunningTime="2025-10-08 06:36:05.872551375 +0000 UTC m=+109.002243996" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.919141 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56d8b090-7038-49ed-a000-d0f5cf4e8ce5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cnv2l\" (UID: \"56d8b090-7038-49ed-a000-d0f5cf4e8ce5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.919249 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/56d8b090-7038-49ed-a000-d0f5cf4e8ce5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cnv2l\" (UID: \"56d8b090-7038-49ed-a000-d0f5cf4e8ce5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.919499 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56d8b090-7038-49ed-a000-d0f5cf4e8ce5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cnv2l\" (UID: \"56d8b090-7038-49ed-a000-d0f5cf4e8ce5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.919536 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56d8b090-7038-49ed-a000-d0f5cf4e8ce5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cnv2l\" (UID: \"56d8b090-7038-49ed-a000-d0f5cf4e8ce5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.919580 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/56d8b090-7038-49ed-a000-d0f5cf4e8ce5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cnv2l\" (UID: \"56d8b090-7038-49ed-a000-d0f5cf4e8ce5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.933648 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-h5npb" podStartSLOduration=89.93361593 podStartE2EDuration="1m29.93361593s" podCreationTimestamp="2025-10-08 06:34:36 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:05.933145398 +0000 UTC m=+109.062838099" watchObservedRunningTime="2025-10-08 06:36:05.93361593 +0000 UTC m=+109.063308531" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.969161 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podStartSLOduration=89.969127204 podStartE2EDuration="1m29.969127204s" podCreationTimestamp="2025-10-08 06:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:05.95073999 +0000 UTC m=+109.080432591" watchObservedRunningTime="2025-10-08 06:36:05.969127204 +0000 UTC m=+109.098819845" Oct 08 06:36:05 crc kubenswrapper[4958]: I1008 06:36:05.969881 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xrpdr" podStartSLOduration=88.969869883 podStartE2EDuration="1m28.969869883s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:05.968899278 +0000 UTC m=+109.098591889" watchObservedRunningTime="2025-10-08 06:36:05.969869883 +0000 UTC m=+109.099562524" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.001918 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=30.001894785 podStartE2EDuration="30.001894785s" podCreationTimestamp="2025-10-08 06:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:06.000208441 +0000 UTC m=+109.129901102" 
watchObservedRunningTime="2025-10-08 06:36:06.001894785 +0000 UTC m=+109.131587426" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.020228 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56d8b090-7038-49ed-a000-d0f5cf4e8ce5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cnv2l\" (UID: \"56d8b090-7038-49ed-a000-d0f5cf4e8ce5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.020282 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/56d8b090-7038-49ed-a000-d0f5cf4e8ce5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cnv2l\" (UID: \"56d8b090-7038-49ed-a000-d0f5cf4e8ce5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.020321 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56d8b090-7038-49ed-a000-d0f5cf4e8ce5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cnv2l\" (UID: \"56d8b090-7038-49ed-a000-d0f5cf4e8ce5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.020386 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/56d8b090-7038-49ed-a000-d0f5cf4e8ce5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cnv2l\" (UID: \"56d8b090-7038-49ed-a000-d0f5cf4e8ce5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.020396 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/56d8b090-7038-49ed-a000-d0f5cf4e8ce5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cnv2l\" (UID: \"56d8b090-7038-49ed-a000-d0f5cf4e8ce5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.020412 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56d8b090-7038-49ed-a000-d0f5cf4e8ce5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cnv2l\" (UID: \"56d8b090-7038-49ed-a000-d0f5cf4e8ce5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.020599 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/56d8b090-7038-49ed-a000-d0f5cf4e8ce5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cnv2l\" (UID: \"56d8b090-7038-49ed-a000-d0f5cf4e8ce5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.022204 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56d8b090-7038-49ed-a000-d0f5cf4e8ce5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cnv2l\" (UID: \"56d8b090-7038-49ed-a000-d0f5cf4e8ce5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.035724 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56d8b090-7038-49ed-a000-d0f5cf4e8ce5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cnv2l\" (UID: \"56d8b090-7038-49ed-a000-d0f5cf4e8ce5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 
06:36:06.054367 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56d8b090-7038-49ed-a000-d0f5cf4e8ce5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cnv2l\" (UID: \"56d8b090-7038-49ed-a000-d0f5cf4e8ce5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.087297 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-62gnw" podStartSLOduration=89.08727092 podStartE2EDuration="1m29.08727092s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:06.087093036 +0000 UTC m=+109.216785677" watchObservedRunningTime="2025-10-08 06:36:06.08727092 +0000 UTC m=+109.216963531" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.142277 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lvkj9" podStartSLOduration=90.142260786 podStartE2EDuration="1m30.142260786s" podCreationTimestamp="2025-10-08 06:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:06.140793997 +0000 UTC m=+109.270486608" watchObservedRunningTime="2025-10-08 06:36:06.142260786 +0000 UTC m=+109.271953397" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.153337 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" Oct 08 06:36:06 crc kubenswrapper[4958]: W1008 06:36:06.174671 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56d8b090_7038_49ed_a000_d0f5cf4e8ce5.slice/crio-59afd588e0b9ab0587456efde34fa24cac4e35b6ea3dfae36302e68f33419019 WatchSource:0}: Error finding container 59afd588e0b9ab0587456efde34fa24cac4e35b6ea3dfae36302e68f33419019: Status 404 returned error can't find the container with id 59afd588e0b9ab0587456efde34fa24cac4e35b6ea3dfae36302e68f33419019 Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.207302 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=85.207276635 podStartE2EDuration="1m25.207276635s" podCreationTimestamp="2025-10-08 06:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:06.183759667 +0000 UTC m=+109.313452278" watchObservedRunningTime="2025-10-08 06:36:06.207276635 +0000 UTC m=+109.336969246" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.207779 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.207771498 podStartE2EDuration="1m29.207771498s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:06.20556446 +0000 UTC m=+109.335257071" watchObservedRunningTime="2025-10-08 06:36:06.207771498 +0000 UTC m=+109.337464139" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.239351 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.239336398 
podStartE2EDuration="56.239336398s" podCreationTimestamp="2025-10-08 06:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:06.220458592 +0000 UTC m=+109.350151203" watchObservedRunningTime="2025-10-08 06:36:06.239336398 +0000 UTC m=+109.369029009" Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.251398 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" event={"ID":"56d8b090-7038-49ed-a000-d0f5cf4e8ce5","Type":"ContainerStarted","Data":"59afd588e0b9ab0587456efde34fa24cac4e35b6ea3dfae36302e68f33419019"} Oct 08 06:36:06 crc kubenswrapper[4958]: I1008 06:36:06.255092 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hfzs9" podStartSLOduration=89.255072022 podStartE2EDuration="1m29.255072022s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:06.254126807 +0000 UTC m=+109.383819418" watchObservedRunningTime="2025-10-08 06:36:06.255072022 +0000 UTC m=+109.384764633" Oct 08 06:36:07 crc kubenswrapper[4958]: I1008 06:36:07.257529 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" event={"ID":"56d8b090-7038-49ed-a000-d0f5cf4e8ce5","Type":"ContainerStarted","Data":"6ee1dfdbf6fe7499fd89790e3e66ef66067cbe7f744722afdb94a9e53dce287a"} Oct 08 06:36:07 crc kubenswrapper[4958]: I1008 06:36:07.276627 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cnv2l" podStartSLOduration=90.27660988 podStartE2EDuration="1m30.27660988s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:07.275413549 +0000 UTC m=+110.405106150" watchObservedRunningTime="2025-10-08 06:36:07.27660988 +0000 UTC m=+110.406302481" Oct 08 06:36:07 crc kubenswrapper[4958]: I1008 06:36:07.575905 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:07 crc kubenswrapper[4958]: I1008 06:36:07.575992 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:07 crc kubenswrapper[4958]: E1008 06:36:07.578102 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:07 crc kubenswrapper[4958]: I1008 06:36:07.578177 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:07 crc kubenswrapper[4958]: E1008 06:36:07.578530 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:07 crc kubenswrapper[4958]: E1008 06:36:07.578323 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:07 crc kubenswrapper[4958]: I1008 06:36:07.578177 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:07 crc kubenswrapper[4958]: E1008 06:36:07.579075 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:09 crc kubenswrapper[4958]: I1008 06:36:09.575742 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:09 crc kubenswrapper[4958]: I1008 06:36:09.575856 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:09 crc kubenswrapper[4958]: E1008 06:36:09.575921 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:09 crc kubenswrapper[4958]: I1008 06:36:09.575996 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:09 crc kubenswrapper[4958]: I1008 06:36:09.576014 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:09 crc kubenswrapper[4958]: E1008 06:36:09.576093 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:09 crc kubenswrapper[4958]: E1008 06:36:09.576238 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:09 crc kubenswrapper[4958]: E1008 06:36:09.576475 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:11 crc kubenswrapper[4958]: I1008 06:36:11.275095 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hfzs9_0718b244-4835-4551-9013-6b3741845bb4/kube-multus/1.log" Oct 08 06:36:11 crc kubenswrapper[4958]: I1008 06:36:11.275874 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hfzs9_0718b244-4835-4551-9013-6b3741845bb4/kube-multus/0.log" Oct 08 06:36:11 crc kubenswrapper[4958]: I1008 06:36:11.275931 4958 generic.go:334] "Generic (PLEG): container finished" podID="0718b244-4835-4551-9013-6b3741845bb4" containerID="d471e3c183709b97571c7a6c1dc430de5a6900f018e7c231cdde2ea6699a9580" exitCode=1 Oct 08 06:36:11 crc kubenswrapper[4958]: I1008 06:36:11.276019 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hfzs9" event={"ID":"0718b244-4835-4551-9013-6b3741845bb4","Type":"ContainerDied","Data":"d471e3c183709b97571c7a6c1dc430de5a6900f018e7c231cdde2ea6699a9580"} Oct 08 06:36:11 crc kubenswrapper[4958]: I1008 06:36:11.276072 4958 scope.go:117] "RemoveContainer" containerID="de7ef5b521e78e8c2a6bd09081099384f3bf4e3814dcb38be8769b38c7643819" Oct 08 06:36:11 crc kubenswrapper[4958]: I1008 06:36:11.276707 4958 scope.go:117] "RemoveContainer" containerID="d471e3c183709b97571c7a6c1dc430de5a6900f018e7c231cdde2ea6699a9580" Oct 08 06:36:11 crc kubenswrapper[4958]: E1008 06:36:11.277034 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hfzs9_openshift-multus(0718b244-4835-4551-9013-6b3741845bb4)\"" pod="openshift-multus/multus-hfzs9" podUID="0718b244-4835-4551-9013-6b3741845bb4" Oct 08 06:36:11 crc kubenswrapper[4958]: I1008 06:36:11.576072 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:11 crc kubenswrapper[4958]: I1008 06:36:11.576131 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:11 crc kubenswrapper[4958]: I1008 06:36:11.576177 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:11 crc kubenswrapper[4958]: E1008 06:36:11.576260 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:11 crc kubenswrapper[4958]: I1008 06:36:11.576317 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:11 crc kubenswrapper[4958]: E1008 06:36:11.576520 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:11 crc kubenswrapper[4958]: E1008 06:36:11.576588 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:11 crc kubenswrapper[4958]: E1008 06:36:11.576766 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:12 crc kubenswrapper[4958]: I1008 06:36:12.282575 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hfzs9_0718b244-4835-4551-9013-6b3741845bb4/kube-multus/1.log" Oct 08 06:36:13 crc kubenswrapper[4958]: I1008 06:36:13.576149 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:13 crc kubenswrapper[4958]: I1008 06:36:13.576199 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:13 crc kubenswrapper[4958]: E1008 06:36:13.576281 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:13 crc kubenswrapper[4958]: I1008 06:36:13.576299 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:13 crc kubenswrapper[4958]: I1008 06:36:13.576311 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:13 crc kubenswrapper[4958]: E1008 06:36:13.576411 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:13 crc kubenswrapper[4958]: E1008 06:36:13.576497 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:13 crc kubenswrapper[4958]: E1008 06:36:13.576564 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:14 crc kubenswrapper[4958]: I1008 06:36:14.576574 4958 scope.go:117] "RemoveContainer" containerID="efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73" Oct 08 06:36:14 crc kubenswrapper[4958]: E1008 06:36:14.576857 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-89qtf_openshift-ovn-kubernetes(272f74a5-c381-4909-b8a9-da60cbd17ddf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" Oct 08 06:36:15 crc kubenswrapper[4958]: I1008 06:36:15.576748 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:15 crc kubenswrapper[4958]: I1008 06:36:15.577088 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:15 crc kubenswrapper[4958]: I1008 06:36:15.576967 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:15 crc kubenswrapper[4958]: I1008 06:36:15.576822 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:15 crc kubenswrapper[4958]: E1008 06:36:15.577174 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:15 crc kubenswrapper[4958]: E1008 06:36:15.577243 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:15 crc kubenswrapper[4958]: E1008 06:36:15.577298 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:15 crc kubenswrapper[4958]: E1008 06:36:15.577470 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:17 crc kubenswrapper[4958]: E1008 06:36:17.571568 4958 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Oct 08 06:36:17 crc kubenswrapper[4958]: I1008 06:36:17.576150 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:17 crc kubenswrapper[4958]: I1008 06:36:17.576205 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:17 crc kubenswrapper[4958]: I1008 06:36:17.576169 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:17 crc kubenswrapper[4958]: I1008 06:36:17.576094 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:17 crc kubenswrapper[4958]: E1008 06:36:17.578165 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:17 crc kubenswrapper[4958]: E1008 06:36:17.578237 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:17 crc kubenswrapper[4958]: E1008 06:36:17.578311 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:17 crc kubenswrapper[4958]: E1008 06:36:17.578447 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:17 crc kubenswrapper[4958]: E1008 06:36:17.691333 4958 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 08 06:36:19 crc kubenswrapper[4958]: I1008 06:36:19.576104 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:19 crc kubenswrapper[4958]: I1008 06:36:19.576195 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:19 crc kubenswrapper[4958]: I1008 06:36:19.576268 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:19 crc kubenswrapper[4958]: I1008 06:36:19.576368 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:19 crc kubenswrapper[4958]: E1008 06:36:19.576475 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:19 crc kubenswrapper[4958]: E1008 06:36:19.576630 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:19 crc kubenswrapper[4958]: E1008 06:36:19.576811 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:19 crc kubenswrapper[4958]: E1008 06:36:19.576927 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:21 crc kubenswrapper[4958]: I1008 06:36:21.575870 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:21 crc kubenswrapper[4958]: I1008 06:36:21.575941 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:21 crc kubenswrapper[4958]: E1008 06:36:21.576073 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:21 crc kubenswrapper[4958]: I1008 06:36:21.576147 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:21 crc kubenswrapper[4958]: I1008 06:36:21.576176 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:21 crc kubenswrapper[4958]: E1008 06:36:21.576334 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:21 crc kubenswrapper[4958]: E1008 06:36:21.576401 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:21 crc kubenswrapper[4958]: E1008 06:36:21.576514 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:22 crc kubenswrapper[4958]: E1008 06:36:22.693288 4958 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 08 06:36:23 crc kubenswrapper[4958]: I1008 06:36:23.575684 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:23 crc kubenswrapper[4958]: E1008 06:36:23.576311 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:23 crc kubenswrapper[4958]: I1008 06:36:23.576329 4958 scope.go:117] "RemoveContainer" containerID="d471e3c183709b97571c7a6c1dc430de5a6900f018e7c231cdde2ea6699a9580" Oct 08 06:36:23 crc kubenswrapper[4958]: I1008 06:36:23.577198 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:23 crc kubenswrapper[4958]: E1008 06:36:23.577340 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:23 crc kubenswrapper[4958]: I1008 06:36:23.577404 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:23 crc kubenswrapper[4958]: E1008 06:36:23.577481 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:23 crc kubenswrapper[4958]: I1008 06:36:23.577529 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:23 crc kubenswrapper[4958]: E1008 06:36:23.577607 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:24 crc kubenswrapper[4958]: I1008 06:36:24.336850 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hfzs9_0718b244-4835-4551-9013-6b3741845bb4/kube-multus/1.log" Oct 08 06:36:24 crc kubenswrapper[4958]: I1008 06:36:24.337287 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hfzs9" event={"ID":"0718b244-4835-4551-9013-6b3741845bb4","Type":"ContainerStarted","Data":"8fab763b267bd2df242ddfebc49b15e8f24cc97f493f46bc5c8d9414631ddff5"} Oct 08 06:36:25 crc kubenswrapper[4958]: I1008 06:36:25.576380 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:25 crc kubenswrapper[4958]: I1008 06:36:25.576479 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:25 crc kubenswrapper[4958]: E1008 06:36:25.576534 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:25 crc kubenswrapper[4958]: I1008 06:36:25.576589 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:25 crc kubenswrapper[4958]: I1008 06:36:25.576585 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:25 crc kubenswrapper[4958]: E1008 06:36:25.576739 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:25 crc kubenswrapper[4958]: E1008 06:36:25.576818 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:25 crc kubenswrapper[4958]: E1008 06:36:25.576909 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:26 crc kubenswrapper[4958]: I1008 06:36:26.577187 4958 scope.go:117] "RemoveContainer" containerID="efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73" Oct 08 06:36:27 crc kubenswrapper[4958]: I1008 06:36:27.352049 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/3.log" Oct 08 06:36:27 crc kubenswrapper[4958]: I1008 06:36:27.355464 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerStarted","Data":"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf"} Oct 08 06:36:27 crc kubenswrapper[4958]: I1008 06:36:27.356093 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:36:27 crc kubenswrapper[4958]: I1008 06:36:27.531459 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podStartSLOduration=110.531430374 podStartE2EDuration="1m50.531430374s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:27.402479803 +0000 UTC m=+130.532172494" watchObservedRunningTime="2025-10-08 06:36:27.531430374 +0000 UTC m=+130.661123015" Oct 08 06:36:27 crc kubenswrapper[4958]: I1008 06:36:27.533264 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xbfbp"] Oct 08 06:36:27 crc kubenswrapper[4958]: I1008 06:36:27.533637 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:27 crc kubenswrapper[4958]: E1008 06:36:27.533934 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:27 crc kubenswrapper[4958]: I1008 06:36:27.576379 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:27 crc kubenswrapper[4958]: E1008 06:36:27.576824 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:27 crc kubenswrapper[4958]: I1008 06:36:27.576435 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:27 crc kubenswrapper[4958]: I1008 06:36:27.576405 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:27 crc kubenswrapper[4958]: E1008 06:36:27.578078 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:27 crc kubenswrapper[4958]: E1008 06:36:27.578842 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:27 crc kubenswrapper[4958]: E1008 06:36:27.694682 4958 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 08 06:36:29 crc kubenswrapper[4958]: I1008 06:36:29.575761 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:29 crc kubenswrapper[4958]: E1008 06:36:29.576258 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:29 crc kubenswrapper[4958]: I1008 06:36:29.575756 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:29 crc kubenswrapper[4958]: I1008 06:36:29.576031 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:29 crc kubenswrapper[4958]: I1008 06:36:29.575929 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:29 crc kubenswrapper[4958]: E1008 06:36:29.576517 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:29 crc kubenswrapper[4958]: E1008 06:36:29.576583 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:29 crc kubenswrapper[4958]: E1008 06:36:29.576688 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:31 crc kubenswrapper[4958]: I1008 06:36:31.576401 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:31 crc kubenswrapper[4958]: I1008 06:36:31.576453 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:31 crc kubenswrapper[4958]: I1008 06:36:31.576501 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:31 crc kubenswrapper[4958]: E1008 06:36:31.576603 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 08 06:36:31 crc kubenswrapper[4958]: I1008 06:36:31.576668 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:31 crc kubenswrapper[4958]: E1008 06:36:31.576777 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 08 06:36:31 crc kubenswrapper[4958]: E1008 06:36:31.576875 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbfbp" podUID="3776a5a1-bd0d-42af-9226-7251ee6b8788" Oct 08 06:36:31 crc kubenswrapper[4958]: E1008 06:36:31.577066 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 08 06:36:32 crc kubenswrapper[4958]: I1008 06:36:32.335066 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:36:33 crc kubenswrapper[4958]: I1008 06:36:33.576194 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:33 crc kubenswrapper[4958]: I1008 06:36:33.576322 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:33 crc kubenswrapper[4958]: I1008 06:36:33.576339 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:33 crc kubenswrapper[4958]: I1008 06:36:33.576537 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:33 crc kubenswrapper[4958]: I1008 06:36:33.580856 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 08 06:36:33 crc kubenswrapper[4958]: I1008 06:36:33.580897 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 08 06:36:33 crc kubenswrapper[4958]: I1008 06:36:33.580857 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 08 06:36:33 crc kubenswrapper[4958]: I1008 06:36:33.581196 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 08 06:36:33 crc kubenswrapper[4958]: I1008 06:36:33.581519 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 08 06:36:33 crc kubenswrapper[4958]: I1008 06:36:33.582047 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.557270 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.613232 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qk24s"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.614310 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.617179 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-clxh5"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.617911 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.620745 4958 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.620819 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.621050 4958 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.621081 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group 
\"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.621138 4958 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.621158 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.621414 4958 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.621443 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.621830 4958 reflector.go:561] 
object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.621895 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.622039 4958 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.622113 4958 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.622141 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in 
the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.622142 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.622215 4958 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.622261 4958 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.622272 4958 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.622217 4958 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in 
API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.622301 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.622264 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.622305 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.622326 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 
06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.623303 4958 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.623353 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.627037 4958 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.627103 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.627231 4958 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace 
"openshift-apiserver": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.627268 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.627418 4958 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.627457 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.627573 4958 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.627610 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is 
forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: W1008 06:36:36.631433 4958 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Oct 08 06:36:36 crc kubenswrapper[4958]: E1008 06:36:36.631505 4958 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.631667 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.632851 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.639586 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.642170 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.642258 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.642775 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.642929 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.645167 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.646267 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.651520 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.653045 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.653384 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.654828 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.655149 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.655358 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.655578 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.655792 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.657144 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.658022 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g22b6"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.658548 4958 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g22b6" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.659546 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.663257 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.663711 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.664148 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.664329 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.667877 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.668554 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.668795 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xkpn4"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.669539 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.670012 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.671773 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cqh7v"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.672366 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cqh7v" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.672850 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.673073 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.673208 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.673370 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.673530 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.673781 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.677895 4958 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-p6pfq"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.681156 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.681477 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.682152 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.682245 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.682321 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.682430 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.682601 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.682752 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.684593 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.685001 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c"] Oct 
08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.685828 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.686469 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.686546 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.686630 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.686806 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.690832 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.691265 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.691596 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7mwpk"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.692055 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mhg9b"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.692159 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.692507 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7chhl"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.692623 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.692896 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lhsmn"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.693420 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lhsmn" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.693473 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.694128 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.696208 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.696736 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.696929 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.697476 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.701013 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.701462 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-d22g5"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.701680 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.701985 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.702222 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.702339 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-d22g5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.703027 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.703649 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.712039 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wjnvk"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.714728 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.715468 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.717109 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.717117 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.717430 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.717440 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.717449 4958 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.717680 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.717894 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.717902 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.718165 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.727041 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.727114 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.727457 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.730093 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wjnvk" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.731372 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.744294 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.744725 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.744979 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.745479 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.745698 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.746595 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.746766 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.746930 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.747277 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 08 06:36:36 crc 
kubenswrapper[4958]: I1008 06:36:36.747620 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.747785 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.747939 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.748321 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.750716 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.751395 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.751878 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-982gr"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.755161 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.759613 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.759633 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.759797 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.759799 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.760025 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.760049 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.760193 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.759635 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.760280 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.760220 4958 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.760332 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.760377 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.760391 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.761411 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765154 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b-config\") pod \"machine-approver-56656f9798-pggjp\" (UID: \"4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765189 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-clxh5\" (UID: \"6c04fc7e-cc6d-4d08-99cc-752b31cb5110\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765206 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765209 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fe21c7-7d44-447e-808d-3e929bbbb3a8-serving-cert\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765432 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0dfe85b-af34-44a9-b7b2-a56cb6f192a2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qsfbc\" (UID: \"a0dfe85b-af34-44a9-b7b2-a56cb6f192a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765467 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-images\") pod \"machine-api-operator-5694c8668f-clxh5\" (UID: \"6c04fc7e-cc6d-4d08-99cc-752b31cb5110\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765496 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/552f0de0-1991-4a6c-8a8c-f44fced1f07f-node-pullsecrets\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765535 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fe21c7-7d44-447e-808d-3e929bbbb3a8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765558 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-etcd-serving-ca\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765598 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-config\") pod \"controller-manager-879f6c89f-xkpn4\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765645 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b49e7739-a4d8-4b6d-9e63-a444acd9d1d9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cqh7v\" (UID: \"b49e7739-a4d8-4b6d-9e63-a444acd9d1d9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cqh7v" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765672 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc 
kubenswrapper[4958]: I1008 06:36:36.765696 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/25a337ec-7967-4941-b3b8-d78ef7e1eab1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hr4l5\" (UID: \"25a337ec-7967-4941-b3b8-d78ef7e1eab1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765718 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9bzb\" (UniqueName: \"kubernetes.io/projected/19fe21c7-7d44-447e-808d-3e929bbbb3a8-kube-api-access-h9bzb\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765818 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sznnw\" (UniqueName: \"kubernetes.io/projected/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-kube-api-access-sznnw\") pod \"machine-api-operator-5694c8668f-clxh5\" (UID: \"6c04fc7e-cc6d-4d08-99cc-752b31cb5110\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765853 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f18c448b-c665-482d-aa20-387343546e7c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-g22b6\" (UID: \"f18c448b-c665-482d-aa20-387343546e7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g22b6" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765907 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/19fe21c7-7d44-447e-808d-3e929bbbb3a8-etcd-client\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765920 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-86rhc"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765927 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddvd8\" (UniqueName: \"kubernetes.io/projected/b49e7739-a4d8-4b6d-9e63-a444acd9d1d9-kube-api-access-ddvd8\") pod \"multus-admission-controller-857f4d67dd-cqh7v\" (UID: \"b49e7739-a4d8-4b6d-9e63-a444acd9d1d9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cqh7v" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.765992 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766007 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bf6b0ee-ce56-4b87-92ea-1eb785eec586-config\") pod \"authentication-operator-69f744f599-p6pfq\" (UID: \"0bf6b0ee-ce56-4b87-92ea-1eb785eec586\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766030 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-config\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766050 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-image-import-ca\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766078 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b-auth-proxy-config\") pod \"machine-approver-56656f9798-pggjp\" (UID: \"4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766104 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67zr5\" (UniqueName: \"kubernetes.io/projected/4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b-kube-api-access-67zr5\") pod \"machine-approver-56656f9798-pggjp\" (UID: \"4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766118 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-encryption-config\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766137 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-serving-cert\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766155 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bf6b0ee-ce56-4b87-92ea-1eb785eec586-serving-cert\") pod \"authentication-operator-69f744f599-p6pfq\" (UID: \"0bf6b0ee-ce56-4b87-92ea-1eb785eec586\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766223 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xkpn4\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766259 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-etcd-client\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766311 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b964df0d-9992-4be4-900f-68a7b74bceef-serving-cert\") pod \"controller-manager-879f6c89f-xkpn4\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766335 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvv45\" (UniqueName: 
\"kubernetes.io/projected/b964df0d-9992-4be4-900f-68a7b74bceef-kube-api-access-hvv45\") pod \"controller-manager-879f6c89f-xkpn4\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766361 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5g56\" (UniqueName: \"kubernetes.io/projected/25a337ec-7967-4941-b3b8-d78ef7e1eab1-kube-api-access-z5g56\") pod \"openshift-config-operator-7777fb866f-hr4l5\" (UID: \"25a337ec-7967-4941-b3b8-d78ef7e1eab1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766392 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/19fe21c7-7d44-447e-808d-3e929bbbb3a8-encryption-config\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766410 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bf6b0ee-ce56-4b87-92ea-1eb785eec586-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-p6pfq\" (UID: \"0bf6b0ee-ce56-4b87-92ea-1eb785eec586\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766425 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdd94\" (UniqueName: \"kubernetes.io/projected/0bf6b0ee-ce56-4b87-92ea-1eb785eec586-kube-api-access-gdd94\") pod \"authentication-operator-69f744f599-p6pfq\" (UID: \"0bf6b0ee-ce56-4b87-92ea-1eb785eec586\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766444 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b-machine-approver-tls\") pod \"machine-approver-56656f9798-pggjp\" (UID: \"4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766460 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-audit\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766491 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a337ec-7967-4941-b3b8-d78ef7e1eab1-serving-cert\") pod \"openshift-config-operator-7777fb866f-hr4l5\" (UID: \"25a337ec-7967-4941-b3b8-d78ef7e1eab1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766515 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-config\") pod \"machine-api-operator-5694c8668f-clxh5\" (UID: \"6c04fc7e-cc6d-4d08-99cc-752b31cb5110\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766539 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a0dfe85b-af34-44a9-b7b2-a56cb6f192a2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qsfbc\" (UID: \"a0dfe85b-af34-44a9-b7b2-a56cb6f192a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766550 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766564 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-client-ca\") pod \"controller-manager-879f6c89f-xkpn4\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766578 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/552f0de0-1991-4a6c-8a8c-f44fced1f07f-audit-dir\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766593 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgdqx\" (UniqueName: \"kubernetes.io/projected/a0dfe85b-af34-44a9-b7b2-a56cb6f192a2-kube-api-access-rgdqx\") pod \"openshift-controller-manager-operator-756b6f6bc6-qsfbc\" (UID: \"a0dfe85b-af34-44a9-b7b2-a56cb6f192a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766608 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/19fe21c7-7d44-447e-808d-3e929bbbb3a8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766621 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19fe21c7-7d44-447e-808d-3e929bbbb3a8-audit-dir\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766634 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bf6b0ee-ce56-4b87-92ea-1eb785eec586-service-ca-bundle\") pod \"authentication-operator-69f744f599-p6pfq\" (UID: \"0bf6b0ee-ce56-4b87-92ea-1eb785eec586\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766656 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/19fe21c7-7d44-447e-808d-3e929bbbb3a8-audit-policies\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766671 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzrm8\" (UniqueName: \"kubernetes.io/projected/f18c448b-c665-482d-aa20-387343546e7c-kube-api-access-gzrm8\") pod \"cluster-samples-operator-665b6dd947-g22b6\" (UID: \"f18c448b-c665-482d-aa20-387343546e7c\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g22b6" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766687 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch9sl\" (UniqueName: \"kubernetes.io/projected/552f0de0-1991-4a6c-8a8c-f44fced1f07f-kube-api-access-ch9sl\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.766911 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.768381 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.771837 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-clxh5"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.771885 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9dwhm"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.772523 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9dwhm" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.772995 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.774005 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.775088 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.778053 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.780396 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.781250 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.787986 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wj4ql"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.804701 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.806543 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.808997 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.809192 4958 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.811638 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-fkwgq"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.811688 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.812428 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.812877 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.813039 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.814863 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.814967 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n4pr7"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.817183 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n4pr7" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.817750 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.819703 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.821885 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.822910 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.824708 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.825735 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.831988 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.832236 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.833234 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.834293 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.834770 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.836030 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qk24s"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.837303 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.838189 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g22b6"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.838229 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.844175 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xkpn4"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.845840 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.846295 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.847449 4958 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-server-5tfs7"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.848334 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xsfmh"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.848769 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xsfmh" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.849015 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5tfs7" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.849233 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.850632 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.852676 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cqh7v"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.852800 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-982gr"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.853615 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.854555 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wjnvk"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.855507 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" 
Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.855604 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mhg9b"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.857007 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.857476 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.858482 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.860811 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7mwpk"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.861848 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.863379 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-p6pfq"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.864425 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lhsmn"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.865623 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.866609 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 
06:36:36.867204 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b964df0d-9992-4be4-900f-68a7b74bceef-serving-cert\") pod \"controller-manager-879f6c89f-xkpn4\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.867450 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvv45\" (UniqueName: \"kubernetes.io/projected/b964df0d-9992-4be4-900f-68a7b74bceef-kube-api-access-hvv45\") pod \"controller-manager-879f6c89f-xkpn4\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.867619 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5g56\" (UniqueName: \"kubernetes.io/projected/25a337ec-7967-4941-b3b8-d78ef7e1eab1-kube-api-access-z5g56\") pod \"openshift-config-operator-7777fb866f-hr4l5\" (UID: \"25a337ec-7967-4941-b3b8-d78ef7e1eab1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.867737 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fce98ad8-5ce1-4dd7-81a0-68606ba22262-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wj4ql\" (UID: \"fce98ad8-5ce1-4dd7-81a0-68606ba22262\") " pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.867853 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wj4ql"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.867860 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/19fe21c7-7d44-447e-808d-3e929bbbb3a8-encryption-config\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.868036 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftfjf\" (UniqueName: \"kubernetes.io/projected/d6d60dee-41d3-4b20-99ea-da1db49f2548-kube-api-access-ftfjf\") pod \"openshift-apiserver-operator-796bbdcf4f-c9g9c\" (UID: \"d6d60dee-41d3-4b20-99ea-da1db49f2548\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.868160 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bf6b0ee-ce56-4b87-92ea-1eb785eec586-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-p6pfq\" (UID: \"0bf6b0ee-ce56-4b87-92ea-1eb785eec586\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.868250 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdd94\" (UniqueName: \"kubernetes.io/projected/0bf6b0ee-ce56-4b87-92ea-1eb785eec586-kube-api-access-gdd94\") pod \"authentication-operator-69f744f599-p6pfq\" (UID: \"0bf6b0ee-ce56-4b87-92ea-1eb785eec586\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.868332 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjpkr\" (UniqueName: \"kubernetes.io/projected/95346bab-d029-433b-94ce-1de7c608d2fe-kube-api-access-zjpkr\") pod 
\"machine-config-operator-74547568cd-8vq2s\" (UID: \"95346bab-d029-433b-94ce-1de7c608d2fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.868412 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b-machine-approver-tls\") pod \"machine-approver-56656f9798-pggjp\" (UID: \"4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.868495 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-audit\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.868574 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02-webhook-cert\") pod \"packageserver-d55dfcdfc-cw96f\" (UID: \"e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.868655 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a337ec-7967-4941-b3b8-d78ef7e1eab1-serving-cert\") pod \"openshift-config-operator-7777fb866f-hr4l5\" (UID: \"25a337ec-7967-4941-b3b8-d78ef7e1eab1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.868730 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02-apiservice-cert\") pod \"packageserver-d55dfcdfc-cw96f\" (UID: \"e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.868801 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-config\") pod \"machine-api-operator-5694c8668f-clxh5\" (UID: \"6c04fc7e-cc6d-4d08-99cc-752b31cb5110\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.868880 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/843bf41d-d355-41ba-8073-4a53421c09a6-metrics-tls\") pod \"ingress-operator-5b745b69d9-lwm8j\" (UID: \"843bf41d-d355-41ba-8073-4a53421c09a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.868966 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0dfe85b-af34-44a9-b7b2-a56cb6f192a2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qsfbc\" (UID: \"a0dfe85b-af34-44a9-b7b2-a56cb6f192a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.869042 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-client-ca\") pod \"controller-manager-879f6c89f-xkpn4\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:36 crc 
kubenswrapper[4958]: I1008 06:36:36.869121 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/552f0de0-1991-4a6c-8a8c-f44fced1f07f-audit-dir\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.869196 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgdqx\" (UniqueName: \"kubernetes.io/projected/a0dfe85b-af34-44a9-b7b2-a56cb6f192a2-kube-api-access-rgdqx\") pod \"openshift-controller-manager-operator-756b6f6bc6-qsfbc\" (UID: \"a0dfe85b-af34-44a9-b7b2-a56cb6f192a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.869273 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/19fe21c7-7d44-447e-808d-3e929bbbb3a8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.869347 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19fe21c7-7d44-447e-808d-3e929bbbb3a8-audit-dir\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.869419 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bf6b0ee-ce56-4b87-92ea-1eb785eec586-service-ca-bundle\") pod \"authentication-operator-69f744f599-p6pfq\" (UID: 
\"0bf6b0ee-ce56-4b87-92ea-1eb785eec586\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.869496 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/19fe21c7-7d44-447e-808d-3e929bbbb3a8-audit-policies\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.869565 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzrm8\" (UniqueName: \"kubernetes.io/projected/f18c448b-c665-482d-aa20-387343546e7c-kube-api-access-gzrm8\") pod \"cluster-samples-operator-665b6dd947-g22b6\" (UID: \"f18c448b-c665-482d-aa20-387343546e7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g22b6" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.869637 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch9sl\" (UniqueName: \"kubernetes.io/projected/552f0de0-1991-4a6c-8a8c-f44fced1f07f-kube-api-access-ch9sl\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.869731 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b-config\") pod \"machine-approver-56656f9798-pggjp\" (UID: \"4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.869798 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/95346bab-d029-433b-94ce-1de7c608d2fe-images\") pod \"machine-config-operator-74547568cd-8vq2s\" (UID: \"95346bab-d029-433b-94ce-1de7c608d2fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.869876 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-clxh5\" (UID: \"6c04fc7e-cc6d-4d08-99cc-752b31cb5110\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.869967 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fe21c7-7d44-447e-808d-3e929bbbb3a8-serving-cert\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870051 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6d60dee-41d3-4b20-99ea-da1db49f2548-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-c9g9c\" (UID: \"d6d60dee-41d3-4b20-99ea-da1db49f2548\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870124 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8bzh\" (UniqueName: \"kubernetes.io/projected/0381af03-9a83-4a43-ab4f-79cbd5d3351f-kube-api-access-r8bzh\") pod \"downloads-7954f5f757-d22g5\" (UID: \"0381af03-9a83-4a43-ab4f-79cbd5d3351f\") " pod="openshift-console/downloads-7954f5f757-d22g5" Oct 08 06:36:36 crc 
kubenswrapper[4958]: I1008 06:36:36.870194 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6sbr\" (UniqueName: \"kubernetes.io/projected/843bf41d-d355-41ba-8073-4a53421c09a6-kube-api-access-j6sbr\") pod \"ingress-operator-5b745b69d9-lwm8j\" (UID: \"843bf41d-d355-41ba-8073-4a53421c09a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870264 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0dfe85b-af34-44a9-b7b2-a56cb6f192a2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qsfbc\" (UID: \"a0dfe85b-af34-44a9-b7b2-a56cb6f192a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870339 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-images\") pod \"machine-api-operator-5694c8668f-clxh5\" (UID: \"6c04fc7e-cc6d-4d08-99cc-752b31cb5110\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870425 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0dfe85b-af34-44a9-b7b2-a56cb6f192a2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qsfbc\" (UID: \"a0dfe85b-af34-44a9-b7b2-a56cb6f192a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870428 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/552f0de0-1991-4a6c-8a8c-f44fced1f07f-node-pullsecrets\") 
pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870482 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fe21c7-7d44-447e-808d-3e929bbbb3a8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870500 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-etcd-serving-ca\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870518 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-config\") pod \"controller-manager-879f6c89f-xkpn4\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870534 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b49e7739-a4d8-4b6d-9e63-a444acd9d1d9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cqh7v\" (UID: \"b49e7739-a4d8-4b6d-9e63-a444acd9d1d9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cqh7v" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870551 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870571 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/25a337ec-7967-4941-b3b8-d78ef7e1eab1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hr4l5\" (UID: \"25a337ec-7967-4941-b3b8-d78ef7e1eab1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870573 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/19fe21c7-7d44-447e-808d-3e929bbbb3a8-audit-policies\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870591 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9bzb\" (UniqueName: \"kubernetes.io/projected/19fe21c7-7d44-447e-808d-3e929bbbb3a8-kube-api-access-h9bzb\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870609 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sznnw\" (UniqueName: \"kubernetes.io/projected/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-kube-api-access-sznnw\") pod \"machine-api-operator-5694c8668f-clxh5\" (UID: \"6c04fc7e-cc6d-4d08-99cc-752b31cb5110\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.869772 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19fe21c7-7d44-447e-808d-3e929bbbb3a8-audit-dir\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.869741 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870634 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5qfv\" (UniqueName: \"kubernetes.io/projected/fce98ad8-5ce1-4dd7-81a0-68606ba22262-kube-api-access-q5qfv\") pod \"marketplace-operator-79b997595-wj4ql\" (UID: \"fce98ad8-5ce1-4dd7-81a0-68606ba22262\") " pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.869375 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bf6b0ee-ce56-4b87-92ea-1eb785eec586-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-p6pfq\" (UID: \"0bf6b0ee-ce56-4b87-92ea-1eb785eec586\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870691 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95346bab-d029-433b-94ce-1de7c608d2fe-proxy-tls\") pod \"machine-config-operator-74547568cd-8vq2s\" (UID: \"95346bab-d029-433b-94ce-1de7c608d2fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870731 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f18c448b-c665-482d-aa20-387343546e7c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-g22b6\" (UID: \"f18c448b-c665-482d-aa20-387343546e7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g22b6" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870761 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/843bf41d-d355-41ba-8073-4a53421c09a6-trusted-ca\") pod \"ingress-operator-5b745b69d9-lwm8j\" (UID: \"843bf41d-d355-41ba-8073-4a53421c09a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870789 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95346bab-d029-433b-94ce-1de7c608d2fe-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8vq2s\" (UID: \"95346bab-d029-433b-94ce-1de7c608d2fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870817 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19fe21c7-7d44-447e-808d-3e929bbbb3a8-etcd-client\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870848 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddvd8\" (UniqueName: \"kubernetes.io/projected/b49e7739-a4d8-4b6d-9e63-a444acd9d1d9-kube-api-access-ddvd8\") pod \"multus-admission-controller-857f4d67dd-cqh7v\" (UID: \"b49e7739-a4d8-4b6d-9e63-a444acd9d1d9\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-cqh7v" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870872 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bf6b0ee-ce56-4b87-92ea-1eb785eec586-config\") pod \"authentication-operator-69f744f599-p6pfq\" (UID: \"0bf6b0ee-ce56-4b87-92ea-1eb785eec586\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870894 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/843bf41d-d355-41ba-8073-4a53421c09a6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lwm8j\" (UID: \"843bf41d-d355-41ba-8073-4a53421c09a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870920 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-config\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870969 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-image-import-ca\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.874627 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l8gp\" (UniqueName: \"kubernetes.io/projected/e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02-kube-api-access-5l8gp\") pod 
\"packageserver-d55dfcdfc-cw96f\" (UID: \"e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.871402 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-client-ca\") pod \"controller-manager-879f6c89f-xkpn4\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.870361 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bf6b0ee-ce56-4b87-92ea-1eb785eec586-service-ca-bundle\") pod \"authentication-operator-69f744f599-p6pfq\" (UID: \"0bf6b0ee-ce56-4b87-92ea-1eb785eec586\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.874697 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b-auth-proxy-config\") pod \"machine-approver-56656f9798-pggjp\" (UID: \"4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.874720 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67zr5\" (UniqueName: \"kubernetes.io/projected/4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b-kube-api-access-67zr5\") pod \"machine-approver-56656f9798-pggjp\" (UID: \"4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.872447 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b-config\") pod \"machine-approver-56656f9798-pggjp\" (UID: \"4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.872836 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/552f0de0-1991-4a6c-8a8c-f44fced1f07f-audit-dir\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.872984 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/19fe21c7-7d44-447e-808d-3e929bbbb3a8-encryption-config\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.873329 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b-machine-approver-tls\") pod \"machine-approver-56656f9798-pggjp\" (UID: \"4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.873471 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/19fe21c7-7d44-447e-808d-3e929bbbb3a8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.873672 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/25a337ec-7967-4941-b3b8-d78ef7e1eab1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hr4l5\" (UID: \"25a337ec-7967-4941-b3b8-d78ef7e1eab1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.873824 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b964df0d-9992-4be4-900f-68a7b74bceef-serving-cert\") pod \"controller-manager-879f6c89f-xkpn4\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.873899 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fe21c7-7d44-447e-808d-3e929bbbb3a8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.874200 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0dfe85b-af34-44a9-b7b2-a56cb6f192a2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qsfbc\" (UID: \"a0dfe85b-af34-44a9-b7b2-a56cb6f192a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.874481 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bf6b0ee-ce56-4b87-92ea-1eb785eec586-config\") pod \"authentication-operator-69f744f599-p6pfq\" (UID: \"0bf6b0ee-ce56-4b87-92ea-1eb785eec586\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:36 crc 
kubenswrapper[4958]: I1008 06:36:36.872298 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9dwhm"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.871592 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/552f0de0-1991-4a6c-8a8c-f44fced1f07f-node-pullsecrets\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.874997 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-encryption-config\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.872802 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-config\") pod \"controller-manager-879f6c89f-xkpn4\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.875025 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-serving-cert\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.875073 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a337ec-7967-4941-b3b8-d78ef7e1eab1-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-hr4l5\" (UID: \"25a337ec-7967-4941-b3b8-d78ef7e1eab1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.875186 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6d60dee-41d3-4b20-99ea-da1db49f2548-config\") pod \"openshift-apiserver-operator-796bbdcf4f-c9g9c\" (UID: \"d6d60dee-41d3-4b20-99ea-da1db49f2548\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.875268 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bf6b0ee-ce56-4b87-92ea-1eb785eec586-serving-cert\") pod \"authentication-operator-69f744f599-p6pfq\" (UID: \"0bf6b0ee-ce56-4b87-92ea-1eb785eec586\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.875351 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fce98ad8-5ce1-4dd7-81a0-68606ba22262-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wj4ql\" (UID: \"fce98ad8-5ce1-4dd7-81a0-68606ba22262\") " pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.875376 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xkpn4\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.875415 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-etcd-client\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.875434 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02-tmpfs\") pod \"packageserver-d55dfcdfc-cw96f\" (UID: \"e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.875628 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.875805 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b-auth-proxy-config\") pod \"machine-approver-56656f9798-pggjp\" (UID: \"4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.875916 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wx8p8"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.876647 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fe21c7-7d44-447e-808d-3e929bbbb3a8-serving-cert\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.876735 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f18c448b-c665-482d-aa20-387343546e7c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-g22b6\" (UID: \"f18c448b-c665-482d-aa20-387343546e7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g22b6" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.877252 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19fe21c7-7d44-447e-808d-3e929bbbb3a8-etcd-client\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.877336 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wx8p8" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.877406 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xkpn4\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.877982 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bf6b0ee-ce56-4b87-92ea-1eb785eec586-serving-cert\") pod \"authentication-operator-69f744f599-p6pfq\" (UID: \"0bf6b0ee-ce56-4b87-92ea-1eb785eec586\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.878062 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5kqn9"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.878204 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b49e7739-a4d8-4b6d-9e63-a444acd9d1d9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cqh7v\" (UID: \"b49e7739-a4d8-4b6d-9e63-a444acd9d1d9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cqh7v" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.881352 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-86rhc"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.881434 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.882098 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.883133 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7chhl"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.884124 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-d22g5"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.885102 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n4pr7"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.886352 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.887347 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.888329 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.889279 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xsfmh"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.890247 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.891208 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.892431 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wx8p8"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.894278 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5kqn9"] Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.896770 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.916099 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.936821 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.956645 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976095 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/fce98ad8-5ce1-4dd7-81a0-68606ba22262-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wj4ql\" (UID: \"fce98ad8-5ce1-4dd7-81a0-68606ba22262\") " pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976168 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftfjf\" (UniqueName: \"kubernetes.io/projected/d6d60dee-41d3-4b20-99ea-da1db49f2548-kube-api-access-ftfjf\") pod \"openshift-apiserver-operator-796bbdcf4f-c9g9c\" (UID: \"d6d60dee-41d3-4b20-99ea-da1db49f2548\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976226 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02-webhook-cert\") pod \"packageserver-d55dfcdfc-cw96f\" (UID: \"e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976260 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjpkr\" (UniqueName: \"kubernetes.io/projected/95346bab-d029-433b-94ce-1de7c608d2fe-kube-api-access-zjpkr\") pod \"machine-config-operator-74547568cd-8vq2s\" (UID: \"95346bab-d029-433b-94ce-1de7c608d2fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976296 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02-apiservice-cert\") pod \"packageserver-d55dfcdfc-cw96f\" (UID: \"e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 
06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976351 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/843bf41d-d355-41ba-8073-4a53421c09a6-metrics-tls\") pod \"ingress-operator-5b745b69d9-lwm8j\" (UID: \"843bf41d-d355-41ba-8073-4a53421c09a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976437 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/95346bab-d029-433b-94ce-1de7c608d2fe-images\") pod \"machine-config-operator-74547568cd-8vq2s\" (UID: \"95346bab-d029-433b-94ce-1de7c608d2fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976482 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6d60dee-41d3-4b20-99ea-da1db49f2548-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-c9g9c\" (UID: \"d6d60dee-41d3-4b20-99ea-da1db49f2548\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976513 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8bzh\" (UniqueName: \"kubernetes.io/projected/0381af03-9a83-4a43-ab4f-79cbd5d3351f-kube-api-access-r8bzh\") pod \"downloads-7954f5f757-d22g5\" (UID: \"0381af03-9a83-4a43-ab4f-79cbd5d3351f\") " pod="openshift-console/downloads-7954f5f757-d22g5" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976546 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6sbr\" (UniqueName: \"kubernetes.io/projected/843bf41d-d355-41ba-8073-4a53421c09a6-kube-api-access-j6sbr\") pod \"ingress-operator-5b745b69d9-lwm8j\" (UID: 
\"843bf41d-d355-41ba-8073-4a53421c09a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976660 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5qfv\" (UniqueName: \"kubernetes.io/projected/fce98ad8-5ce1-4dd7-81a0-68606ba22262-kube-api-access-q5qfv\") pod \"marketplace-operator-79b997595-wj4ql\" (UID: \"fce98ad8-5ce1-4dd7-81a0-68606ba22262\") " pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976692 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95346bab-d029-433b-94ce-1de7c608d2fe-proxy-tls\") pod \"machine-config-operator-74547568cd-8vq2s\" (UID: \"95346bab-d029-433b-94ce-1de7c608d2fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976724 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/843bf41d-d355-41ba-8073-4a53421c09a6-trusted-ca\") pod \"ingress-operator-5b745b69d9-lwm8j\" (UID: \"843bf41d-d355-41ba-8073-4a53421c09a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976756 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95346bab-d029-433b-94ce-1de7c608d2fe-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8vq2s\" (UID: \"95346bab-d029-433b-94ce-1de7c608d2fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976799 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/843bf41d-d355-41ba-8073-4a53421c09a6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lwm8j\" (UID: \"843bf41d-d355-41ba-8073-4a53421c09a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.977240 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/95346bab-d029-433b-94ce-1de7c608d2fe-images\") pod \"machine-config-operator-74547568cd-8vq2s\" (UID: \"95346bab-d029-433b-94ce-1de7c608d2fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.976851 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l8gp\" (UniqueName: \"kubernetes.io/projected/e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02-kube-api-access-5l8gp\") pod \"packageserver-d55dfcdfc-cw96f\" (UID: \"e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.977395 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.977658 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6d60dee-41d3-4b20-99ea-da1db49f2548-config\") pod \"openshift-apiserver-operator-796bbdcf4f-c9g9c\" (UID: \"d6d60dee-41d3-4b20-99ea-da1db49f2548\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.977699 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fce98ad8-5ce1-4dd7-81a0-68606ba22262-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-wj4ql\" (UID: \"fce98ad8-5ce1-4dd7-81a0-68606ba22262\") " pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.977745 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02-tmpfs\") pod \"packageserver-d55dfcdfc-cw96f\" (UID: \"e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.978303 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95346bab-d029-433b-94ce-1de7c608d2fe-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8vq2s\" (UID: \"95346bab-d029-433b-94ce-1de7c608d2fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.978439 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6d60dee-41d3-4b20-99ea-da1db49f2548-config\") pod \"openshift-apiserver-operator-796bbdcf4f-c9g9c\" (UID: \"d6d60dee-41d3-4b20-99ea-da1db49f2548\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.979686 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02-tmpfs\") pod \"packageserver-d55dfcdfc-cw96f\" (UID: \"e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.981263 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d6d60dee-41d3-4b20-99ea-da1db49f2548-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-c9g9c\" (UID: \"d6d60dee-41d3-4b20-99ea-da1db49f2548\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.982100 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95346bab-d029-433b-94ce-1de7c608d2fe-proxy-tls\") pod \"machine-config-operator-74547568cd-8vq2s\" (UID: \"95346bab-d029-433b-94ce-1de7c608d2fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" Oct 08 06:36:36 crc kubenswrapper[4958]: I1008 06:36:36.996446 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.016678 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.037374 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.056276 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.076055 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.097287 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.117514 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 
06:36:37.136760 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.157103 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.177370 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.198001 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.212855 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/843bf41d-d355-41ba-8073-4a53421c09a6-metrics-tls\") pod \"ingress-operator-5b745b69d9-lwm8j\" (UID: \"843bf41d-d355-41ba-8073-4a53421c09a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.231000 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.237646 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.239473 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/843bf41d-d355-41ba-8073-4a53421c09a6-trusted-ca\") pod \"ingress-operator-5b745b69d9-lwm8j\" (UID: \"843bf41d-d355-41ba-8073-4a53421c09a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.277797 4958 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"kube-root-ca.crt" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.296279 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.317440 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.337146 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.357410 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.386192 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.396596 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.402117 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02-apiservice-cert\") pod \"packageserver-d55dfcdfc-cw96f\" (UID: \"e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.412132 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02-webhook-cert\") pod \"packageserver-d55dfcdfc-cw96f\" (UID: \"e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 
06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.417415 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.437129 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.456583 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.477744 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.496548 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.517250 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.545737 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.556447 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.576480 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.597552 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.617033 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 08 
06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.656261 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.677608 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.697511 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.716545 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.736476 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.757348 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.776510 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.797069 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.811934 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fce98ad8-5ce1-4dd7-81a0-68606ba22262-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wj4ql\" (UID: \"fce98ad8-5ce1-4dd7-81a0-68606ba22262\") " pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.815243 4958 request.go:700] Waited for 1.002125595s due to client-side throttling, not priority and 
fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&limit=500&resourceVersion=0 Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.817133 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.845979 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.850623 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fce98ad8-5ce1-4dd7-81a0-68606ba22262-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wj4ql\" (UID: \"fce98ad8-5ce1-4dd7-81a0-68606ba22262\") " pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.856710 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.869065 4958 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.869166 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-audit podName:552f0de0-1991-4a6c-8a8c-f44fced1f07f nodeName:}" failed. No retries permitted until 2025-10-08 06:36:38.369138023 +0000 UTC m=+141.498830654 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-audit") pod "apiserver-76f77b778f-qk24s" (UID: "552f0de0-1991-4a6c-8a8c-f44fced1f07f") : failed to sync configmap cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.871225 4958 secret.go:188] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.871307 4958 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.871321 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-machine-api-operator-tls podName:6c04fc7e-cc6d-4d08-99cc-752b31cb5110 nodeName:}" failed. No retries permitted until 2025-10-08 06:36:38.371298737 +0000 UTC m=+141.500991448 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-machine-api-operator-tls") pod "machine-api-operator-5694c8668f-clxh5" (UID: "6c04fc7e-cc6d-4d08-99cc-752b31cb5110") : failed to sync secret cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.871391 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-config podName:6c04fc7e-cc6d-4d08-99cc-752b31cb5110 nodeName:}" failed. No retries permitted until 2025-10-08 06:36:38.37137166 +0000 UTC m=+141.501064291 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-config") pod "machine-api-operator-5694c8668f-clxh5" (UID: "6c04fc7e-cc6d-4d08-99cc-752b31cb5110") : failed to sync configmap cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.873283 4958 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.873359 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-images podName:6c04fc7e-cc6d-4d08-99cc-752b31cb5110 nodeName:}" failed. No retries permitted until 2025-10-08 06:36:38.373339256 +0000 UTC m=+141.503031977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-images") pod "machine-api-operator-5694c8668f-clxh5" (UID: "6c04fc7e-cc6d-4d08-99cc-752b31cb5110") : failed to sync configmap cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.873724 4958 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.873797 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-etcd-serving-ca podName:552f0de0-1991-4a6c-8a8c-f44fced1f07f nodeName:}" failed. No retries permitted until 2025-10-08 06:36:38.373777013 +0000 UTC m=+141.503469654 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-etcd-serving-ca") pod "apiserver-76f77b778f-qk24s" (UID: "552f0de0-1991-4a6c-8a8c-f44fced1f07f") : failed to sync configmap cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.873847 4958 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.873890 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-trusted-ca-bundle podName:552f0de0-1991-4a6c-8a8c-f44fced1f07f nodeName:}" failed. No retries permitted until 2025-10-08 06:36:38.373876287 +0000 UTC m=+141.503568918 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-trusted-ca-bundle") pod "apiserver-76f77b778f-qk24s" (UID: "552f0de0-1991-4a6c-8a8c-f44fced1f07f") : failed to sync configmap cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.874532 4958 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.874597 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-config podName:552f0de0-1991-4a6c-8a8c-f44fced1f07f nodeName:}" failed. No retries permitted until 2025-10-08 06:36:38.374582694 +0000 UTC m=+141.504275325 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-config") pod "apiserver-76f77b778f-qk24s" (UID: "552f0de0-1991-4a6c-8a8c-f44fced1f07f") : failed to sync configmap cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.874790 4958 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.874845 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-image-import-ca podName:552f0de0-1991-4a6c-8a8c-f44fced1f07f nodeName:}" failed. No retries permitted until 2025-10-08 06:36:38.374831254 +0000 UTC m=+141.504523885 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-image-import-ca") pod "apiserver-76f77b778f-qk24s" (UID: "552f0de0-1991-4a6c-8a8c-f44fced1f07f") : failed to sync configmap cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.875327 4958 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.875410 4958 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.875547 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-serving-cert podName:552f0de0-1991-4a6c-8a8c-f44fced1f07f nodeName:}" failed. No retries permitted until 2025-10-08 06:36:38.37551163 +0000 UTC m=+141.505204261 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-serving-cert") pod "apiserver-76f77b778f-qk24s" (UID: "552f0de0-1991-4a6c-8a8c-f44fced1f07f") : failed to sync secret cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.875602 4958 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.875655 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-etcd-client podName:552f0de0-1991-4a6c-8a8c-f44fced1f07f nodeName:}" failed. No retries permitted until 2025-10-08 06:36:38.375641795 +0000 UTC m=+141.505334426 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-etcd-client") pod "apiserver-76f77b778f-qk24s" (UID: "552f0de0-1991-4a6c-8a8c-f44fced1f07f") : failed to sync secret cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: E1008 06:36:37.876214 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-encryption-config podName:552f0de0-1991-4a6c-8a8c-f44fced1f07f nodeName:}" failed. No retries permitted until 2025-10-08 06:36:38.376196707 +0000 UTC m=+141.505889308 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-encryption-config") pod "apiserver-76f77b778f-qk24s" (UID: "552f0de0-1991-4a6c-8a8c-f44fced1f07f") : failed to sync secret cache: timed out waiting for the condition Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.876883 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.897857 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.916922 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.936613 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.957417 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.976620 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 08 06:36:37 crc kubenswrapper[4958]: I1008 06:36:37.996825 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.016984 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.036784 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 
08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.057480 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.077593 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.098056 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.117023 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.135941 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.157075 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.176369 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.197429 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.217639 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.236730 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 08 06:36:38 crc 
kubenswrapper[4958]: I1008 06:36:38.256743 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.277548 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.297065 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.316517 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.337216 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.357587 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.379926 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.397161 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.400347 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-encryption-config\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.400402 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-serving-cert\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.400450 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-etcd-client\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.400600 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-audit\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.400664 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-config\") pod \"machine-api-operator-5694c8668f-clxh5\" (UID: \"6c04fc7e-cc6d-4d08-99cc-752b31cb5110\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.400755 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-clxh5\" (UID: \"6c04fc7e-cc6d-4d08-99cc-752b31cb5110\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.400814 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-images\") pod \"machine-api-operator-5694c8668f-clxh5\" (UID: \"6c04fc7e-cc6d-4d08-99cc-752b31cb5110\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.400867 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-etcd-serving-ca\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.400936 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.401071 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-config\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.401113 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-image-import-ca\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.416385 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.438768 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.457431 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.476767 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.496719 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.524362 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.537120 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.589539 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvv45\" (UniqueName: \"kubernetes.io/projected/b964df0d-9992-4be4-900f-68a7b74bceef-kube-api-access-hvv45\") pod \"controller-manager-879f6c89f-xkpn4\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.604006 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5g56\" (UniqueName: 
\"kubernetes.io/projected/25a337ec-7967-4941-b3b8-d78ef7e1eab1-kube-api-access-z5g56\") pod \"openshift-config-operator-7777fb866f-hr4l5\" (UID: \"25a337ec-7967-4941-b3b8-d78ef7e1eab1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.621819 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdd94\" (UniqueName: \"kubernetes.io/projected/0bf6b0ee-ce56-4b87-92ea-1eb785eec586-kube-api-access-gdd94\") pod \"authentication-operator-69f744f599-p6pfq\" (UID: \"0bf6b0ee-ce56-4b87-92ea-1eb785eec586\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.623421 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.642871 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.647569 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzrm8\" (UniqueName: \"kubernetes.io/projected/f18c448b-c665-482d-aa20-387343546e7c-kube-api-access-gzrm8\") pod \"cluster-samples-operator-665b6dd947-g22b6\" (UID: \"f18c448b-c665-482d-aa20-387343546e7c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g22b6" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.678359 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgdqx\" (UniqueName: \"kubernetes.io/projected/a0dfe85b-af34-44a9-b7b2-a56cb6f192a2-kube-api-access-rgdqx\") pod \"openshift-controller-manager-operator-756b6f6bc6-qsfbc\" (UID: \"a0dfe85b-af34-44a9-b7b2-a56cb6f192a2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.696883 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9bzb\" (UniqueName: \"kubernetes.io/projected/19fe21c7-7d44-447e-808d-3e929bbbb3a8-kube-api-access-h9bzb\") pod \"apiserver-7bbb656c7d-plwft\" (UID: \"19fe21c7-7d44-447e-808d-3e929bbbb3a8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.717173 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddvd8\" (UniqueName: \"kubernetes.io/projected/b49e7739-a4d8-4b6d-9e63-a444acd9d1d9-kube-api-access-ddvd8\") pod \"multus-admission-controller-857f4d67dd-cqh7v\" (UID: \"b49e7739-a4d8-4b6d-9e63-a444acd9d1d9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cqh7v" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.761659 4958 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.766777 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67zr5\" (UniqueName: \"kubernetes.io/projected/4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b-kube-api-access-67zr5\") pod \"machine-approver-56656f9798-pggjp\" (UID: \"4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.776286 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.781400 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.798292 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 08 06:36:38 crc kubenswrapper[4958]: W1008 06:36:38.810732 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4869cf7c_e6e5_4ecf_a14d_ca6317cebc7b.slice/crio-ad72fb0bace78c886049003f95729394f58b96c50bb3c65738dfc034ad63f6cb WatchSource:0}: Error finding container ad72fb0bace78c886049003f95729394f58b96c50bb3c65738dfc034ad63f6cb: Status 404 returned error can't find the container with id ad72fb0bace78c886049003f95729394f58b96c50bb3c65738dfc034ad63f6cb Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.816366 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.834845 4958 request.go:700] Waited for 1.953227034s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&limit=500&resourceVersion=0 Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.837200 4958 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.851873 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.857734 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.859479 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-p6pfq"] Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.861762 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g22b6" Oct 08 06:36:38 crc kubenswrapper[4958]: W1008 06:36:38.877685 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bf6b0ee_ce56_4b87_92ea_1eb785eec586.slice/crio-2fd59212e4f06d9d7b938f15a6a754f78e65df5c8ff2cb5e95228e8e6156967f WatchSource:0}: Error finding container 2fd59212e4f06d9d7b938f15a6a754f78e65df5c8ff2cb5e95228e8e6156967f: Status 404 returned error can't find the container with id 2fd59212e4f06d9d7b938f15a6a754f78e65df5c8ff2cb5e95228e8e6156967f Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.883620 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.888993 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xkpn4"] Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.894414 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftfjf\" (UniqueName: \"kubernetes.io/projected/d6d60dee-41d3-4b20-99ea-da1db49f2548-kube-api-access-ftfjf\") pod \"openshift-apiserver-operator-796bbdcf4f-c9g9c\" (UID: \"d6d60dee-41d3-4b20-99ea-da1db49f2548\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c" Oct 08 06:36:38 crc kubenswrapper[4958]: W1008 06:36:38.908290 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb964df0d_9992_4be4_900f_68a7b74bceef.slice/crio-7137291a9d436ba7bfcb3f49a1a066865a8fa4480ad077201b08a5ae9496ff9d WatchSource:0}: Error finding container 7137291a9d436ba7bfcb3f49a1a066865a8fa4480ad077201b08a5ae9496ff9d: Status 404 returned error can't find the container with id 7137291a9d436ba7bfcb3f49a1a066865a8fa4480ad077201b08a5ae9496ff9d Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.908478 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.912757 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjpkr\" (UniqueName: \"kubernetes.io/projected/95346bab-d029-433b-94ce-1de7c608d2fe-kube-api-access-zjpkr\") pod \"machine-config-operator-74547568cd-8vq2s\" (UID: \"95346bab-d029-433b-94ce-1de7c608d2fe\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.929104 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cqh7v" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.931350 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5qfv\" (UniqueName: \"kubernetes.io/projected/fce98ad8-5ce1-4dd7-81a0-68606ba22262-kube-api-access-q5qfv\") pod \"marketplace-operator-79b997595-wj4ql\" (UID: \"fce98ad8-5ce1-4dd7-81a0-68606ba22262\") " pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.952960 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.954828 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.958699 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6sbr\" (UniqueName: \"kubernetes.io/projected/843bf41d-d355-41ba-8073-4a53421c09a6-kube-api-access-j6sbr\") pod \"ingress-operator-5b745b69d9-lwm8j\" (UID: \"843bf41d-d355-41ba-8073-4a53421c09a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.970323 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8bzh\" (UniqueName: \"kubernetes.io/projected/0381af03-9a83-4a43-ab4f-79cbd5d3351f-kube-api-access-r8bzh\") pod \"downloads-7954f5f757-d22g5\" (UID: \"0381af03-9a83-4a43-ab4f-79cbd5d3351f\") " pod="openshift-console/downloads-7954f5f757-d22g5" Oct 08 06:36:38 crc kubenswrapper[4958]: I1008 06:36:38.997490 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/843bf41d-d355-41ba-8073-4a53421c09a6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lwm8j\" (UID: \"843bf41d-d355-41ba-8073-4a53421c09a6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.019866 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l8gp\" (UniqueName: \"kubernetes.io/projected/e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02-kube-api-access-5l8gp\") pod \"packageserver-d55dfcdfc-cw96f\" (UID: \"e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.044091 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.054999 
4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.064379 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-d22g5" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.072189 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.073358 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-audit\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.075371 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.081977 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-config\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.083243 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.097276 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.109693 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.118319 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.138617 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.144289 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-image-import-ca\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.150104 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.157691 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.177136 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.180214 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-encryption-config\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.181364 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g22b6"] Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.189820 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch9sl\" (UniqueName: \"kubernetes.io/projected/552f0de0-1991-4a6c-8a8c-f44fced1f07f-kube-api-access-ch9sl\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212039 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e38984df-4074-4d74-9a5e-fada1d1d8b1f-serving-cert\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212086 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212121 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b5d6bdd-539f-4a63-b52c-ae0cfa92703f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ktklm\" (UID: \"2b5d6bdd-539f-4a63-b52c-ae0cfa92703f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212137 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmg5j\" (UniqueName: \"kubernetes.io/projected/98aabd8a-9cfc-4977-b42c-990fc3894455-kube-api-access-pmg5j\") pod \"catalog-operator-68c6474976-8scdv\" (UID: \"98aabd8a-9cfc-4977-b42c-990fc3894455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212154 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c836c0b9-6574-4e15-b137-801770985326-config\") pod \"console-operator-58897d9998-982gr\" (UID: \"c836c0b9-6574-4e15-b137-801770985326\") " pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212171 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7-srv-cert\") pod \"olm-operator-6b444d44fb-swm4r\" (UID: 
\"f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212204 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-audit-policies\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212220 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-oauth-config\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212236 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d8a63239-acb7-4fc9-8f62-ffa55f261901-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212272 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b5d6bdd-539f-4a63-b52c-ae0cfa92703f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ktklm\" (UID: \"2b5d6bdd-539f-4a63-b52c-ae0cfa92703f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212287 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg62c\" (UniqueName: \"kubernetes.io/projected/5fd0f38b-dcb2-4039-b4b7-bb02593b1aec-kube-api-access-vg62c\") pod \"dns-operator-744455d44c-wjnvk\" (UID: \"5fd0f38b-dcb2-4039-b4b7-bb02593b1aec\") " pod="openshift-dns-operator/dns-operator-744455d44c-wjnvk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212303 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-service-ca\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212345 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-527rf\" (UniqueName: \"kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-kube-api-access-527rf\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212376 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae96488-906d-423e-ad79-b4448ef8ad58-config\") pod \"route-controller-manager-6576b87f9c-7sb7t\" (UID: \"eae96488-906d-423e-ad79-b4448ef8ad58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212398 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: 
\"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212414 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/61823234-969c-46ce-9c8c-7eb8f41867dc-stats-auth\") pod \"router-default-5444994796-fkwgq\" (UID: \"61823234-969c-46ce-9c8c-7eb8f41867dc\") " pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212430 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw84p\" (UniqueName: \"kubernetes.io/projected/f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7-kube-api-access-nw84p\") pod \"olm-operator-6b444d44fb-swm4r\" (UID: \"f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212448 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c836c0b9-6574-4e15-b137-801770985326-trusted-ca\") pod \"console-operator-58897d9998-982gr\" (UID: \"c836c0b9-6574-4e15-b137-801770985326\") " pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212480 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d8a63239-acb7-4fc9-8f62-ffa55f261901-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212498 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212512 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c836c0b9-6574-4e15-b137-801770985326-serving-cert\") pod \"console-operator-58897d9998-982gr\" (UID: \"c836c0b9-6574-4e15-b137-801770985326\") " pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212534 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cdbg\" (UniqueName: \"kubernetes.io/projected/c836c0b9-6574-4e15-b137-801770985326-kube-api-access-2cdbg\") pod \"console-operator-58897d9998-982gr\" (UID: \"c836c0b9-6574-4e15-b137-801770985326\") " pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212548 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e38984df-4074-4d74-9a5e-fada1d1d8b1f-etcd-service-ca\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212572 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: 
\"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212606 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212626 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-serving-cert\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212648 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212665 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61823234-969c-46ce-9c8c-7eb8f41867dc-metrics-certs\") pod \"router-default-5444994796-fkwgq\" (UID: \"61823234-969c-46ce-9c8c-7eb8f41867dc\") " pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212685 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjwvw\" (UniqueName: \"kubernetes.io/projected/eae96488-906d-423e-ad79-b4448ef8ad58-kube-api-access-sjwvw\") pod \"route-controller-manager-6576b87f9c-7sb7t\" (UID: \"eae96488-906d-423e-ad79-b4448ef8ad58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212702 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9fc2a7f0-7ceb-465a-8511-a7e3e665c563-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4tv62\" (UID: \"9fc2a7f0-7ceb-465a-8511-a7e3e665c563\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212717 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dks2m\" (UniqueName: \"kubernetes.io/projected/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-kube-api-access-dks2m\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212733 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5d6bdd-539f-4a63-b52c-ae0cfa92703f-config\") pod \"kube-controller-manager-operator-78b949d7b-ktklm\" (UID: \"2b5d6bdd-539f-4a63-b52c-ae0cfa92703f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212751 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs98d\" (UniqueName: 
\"kubernetes.io/projected/eb229a90-087c-4c24-a82c-863a6b64b5b3-kube-api-access-cs98d\") pod \"migrator-59844c95c7-lhsmn\" (UID: \"eb229a90-087c-4c24-a82c-863a6b64b5b3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lhsmn" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212772 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qltnm\" (UniqueName: \"kubernetes.io/projected/9fc2a7f0-7ceb-465a-8511-a7e3e665c563-kube-api-access-qltnm\") pod \"cluster-image-registry-operator-dc59b4c8b-4tv62\" (UID: \"9fc2a7f0-7ceb-465a-8511-a7e3e665c563\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212788 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98aabd8a-9cfc-4977-b42c-990fc3894455-srv-cert\") pod \"catalog-operator-68c6474976-8scdv\" (UID: \"98aabd8a-9cfc-4977-b42c-990fc3894455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212804 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4br6\" (UniqueName: \"kubernetes.io/projected/7e88be94-7d66-4265-88ec-89278ebaabb7-kube-api-access-z4br6\") pod \"machine-config-controller-84d6567774-kjkv4\" (UID: \"7e88be94-7d66-4265-88ec-89278ebaabb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212820 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/61823234-969c-46ce-9c8c-7eb8f41867dc-default-certificate\") pod \"router-default-5444994796-fkwgq\" (UID: \"61823234-969c-46ce-9c8c-7eb8f41867dc\") " 
pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212841 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9fc2a7f0-7ceb-465a-8511-a7e3e665c563-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4tv62\" (UID: \"9fc2a7f0-7ceb-465a-8511-a7e3e665c563\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212869 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e38984df-4074-4d74-9a5e-fada1d1d8b1f-config\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212885 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eae96488-906d-423e-ad79-b4448ef8ad58-client-ca\") pod \"route-controller-manager-6576b87f9c-7sb7t\" (UID: \"eae96488-906d-423e-ad79-b4448ef8ad58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212901 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d8a63239-acb7-4fc9-8f62-ffa55f261901-registry-certificates\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212917 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eae96488-906d-423e-ad79-b4448ef8ad58-serving-cert\") pod \"route-controller-manager-6576b87f9c-7sb7t\" (UID: \"eae96488-906d-423e-ad79-b4448ef8ad58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212933 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz9nb\" (UniqueName: \"kubernetes.io/projected/ed1e22c8-bc14-4e16-b349-442d188ac881-kube-api-access-tz9nb\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.212981 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-config\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213034 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vgww\" (UniqueName: \"kubernetes.io/projected/61823234-969c-46ce-9c8c-7eb8f41867dc-kube-api-access-4vgww\") pod \"router-default-5444994796-fkwgq\" (UID: \"61823234-969c-46ce-9c8c-7eb8f41867dc\") " pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213058 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213074 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5fd0f38b-dcb2-4039-b4b7-bb02593b1aec-metrics-tls\") pod \"dns-operator-744455d44c-wjnvk\" (UID: \"5fd0f38b-dcb2-4039-b4b7-bb02593b1aec\") " pod="openshift-dns-operator/dns-operator-744455d44c-wjnvk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213094 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dlbl\" (UniqueName: \"kubernetes.io/projected/e38984df-4074-4d74-9a5e-fada1d1d8b1f-kube-api-access-6dlbl\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213113 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61823234-969c-46ce-9c8c-7eb8f41867dc-service-ca-bundle\") pod \"router-default-5444994796-fkwgq\" (UID: \"61823234-969c-46ce-9c8c-7eb8f41867dc\") " pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213136 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213159 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213191 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed1e22c8-bc14-4e16-b349-442d188ac881-audit-dir\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213211 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6njg7\" (UniqueName: \"kubernetes.io/projected/582542ee-c2ad-4058-830e-dda091d4507a-kube-api-access-6njg7\") pod \"control-plane-machine-set-operator-78cbb6b69f-9dwhm\" (UID: \"582542ee-c2ad-4058-830e-dda091d4507a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9dwhm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213234 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-swm4r\" (UID: \"f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213287 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/98aabd8a-9cfc-4977-b42c-990fc3894455-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-8scdv\" (UID: \"98aabd8a-9cfc-4977-b42c-990fc3894455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213328 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213344 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-oauth-serving-cert\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213378 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e88be94-7d66-4265-88ec-89278ebaabb7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kjkv4\" (UID: \"7e88be94-7d66-4265-88ec-89278ebaabb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213410 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8a63239-acb7-4fc9-8f62-ffa55f261901-trusted-ca\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213432 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e38984df-4074-4d74-9a5e-fada1d1d8b1f-etcd-ca\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213451 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/582542ee-c2ad-4058-830e-dda091d4507a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9dwhm\" (UID: \"582542ee-c2ad-4058-830e-dda091d4507a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9dwhm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213469 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7e88be94-7d66-4265-88ec-89278ebaabb7-proxy-tls\") pod \"machine-config-controller-84d6567774-kjkv4\" (UID: \"7e88be94-7d66-4265-88ec-89278ebaabb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213484 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213499 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-trusted-ca-bundle\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213515 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-bound-sa-token\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213530 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213547 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-registry-tls\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.213562 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e38984df-4074-4d74-9a5e-fada1d1d8b1f-etcd-client\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 
06:36:39.213591 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9fc2a7f0-7ceb-465a-8511-a7e3e665c563-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4tv62\" (UID: \"9fc2a7f0-7ceb-465a-8511-a7e3e665c563\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" Oct 08 06:36:39 crc kubenswrapper[4958]: E1008 06:36:39.215507 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:39.715486037 +0000 UTC m=+142.845178638 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.219654 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.235645 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-clxh5\" (UID: \"6c04fc7e-cc6d-4d08-99cc-752b31cb5110\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.237524 4958 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"etcd-serving-ca" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.242475 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/552f0de0-1991-4a6c-8a8c-f44fced1f07f-etcd-serving-ca\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.256155 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.265241 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-etcd-client\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.275803 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.296546 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.305587 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-config\") pod \"machine-api-operator-5694c8668f-clxh5\" (UID: \"6c04fc7e-cc6d-4d08-99cc-752b31cb5110\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.314373 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.314514 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-527rf\" (UniqueName: \"kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-kube-api-access-527rf\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: E1008 06:36:39.314737 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:39.814721369 +0000 UTC m=+142.944413970 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.314886 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b3e7ce87-89d1-4be8-801d-88af586408e8-certs\") pod \"machine-config-server-5tfs7\" (UID: \"b3e7ce87-89d1-4be8-801d-88af586408e8\") " pod="openshift-machine-config-operator/machine-config-server-5tfs7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.314905 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xblb2\" (UniqueName: \"kubernetes.io/projected/835c2c67-ae1f-49bb-9d70-09bf1ac25885-kube-api-access-xblb2\") pod \"package-server-manager-789f6589d5-kmks4\" (UID: \"835c2c67-ae1f-49bb-9d70-09bf1ac25885\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.314924 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae96488-906d-423e-ad79-b4448ef8ad58-config\") pod \"route-controller-manager-6576b87f9c-7sb7t\" (UID: \"eae96488-906d-423e-ad79-b4448ef8ad58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.314966 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.314984 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/61823234-969c-46ce-9c8c-7eb8f41867dc-stats-auth\") pod \"router-default-5444994796-fkwgq\" (UID: \"61823234-969c-46ce-9c8c-7eb8f41867dc\") " pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315002 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw84p\" (UniqueName: \"kubernetes.io/projected/f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7-kube-api-access-nw84p\") pod \"olm-operator-6b444d44fb-swm4r\" (UID: \"f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315019 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/835c2c67-ae1f-49bb-9d70-09bf1ac25885-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kmks4\" (UID: \"835c2c67-ae1f-49bb-9d70-09bf1ac25885\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315055 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c836c0b9-6574-4e15-b137-801770985326-trusted-ca\") pod \"console-operator-58897d9998-982gr\" (UID: \"c836c0b9-6574-4e15-b137-801770985326\") " pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:39 
crc kubenswrapper[4958]: I1008 06:36:39.315071 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9ecaafcc-322f-4167-8757-e6a29dcca078-signing-cabundle\") pod \"service-ca-9c57cc56f-n4pr7\" (UID: \"9ecaafcc-322f-4167-8757-e6a29dcca078\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4pr7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315095 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d8a63239-acb7-4fc9-8f62-ffa55f261901-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315112 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315129 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79b44209-418b-49d3-8a0f-cb90d8928373-config-volume\") pod \"dns-default-wx8p8\" (UID: \"79b44209-418b-49d3-8a0f-cb90d8928373\") " pod="openshift-dns/dns-default-wx8p8" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315154 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c836c0b9-6574-4e15-b137-801770985326-serving-cert\") pod \"console-operator-58897d9998-982gr\" (UID: 
\"c836c0b9-6574-4e15-b137-801770985326\") " pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315170 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cdbg\" (UniqueName: \"kubernetes.io/projected/c836c0b9-6574-4e15-b137-801770985326-kube-api-access-2cdbg\") pod \"console-operator-58897d9998-982gr\" (UID: \"c836c0b9-6574-4e15-b137-801770985326\") " pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315184 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e38984df-4074-4d74-9a5e-fada1d1d8b1f-etcd-service-ca\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315199 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315214 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315232 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315247 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61823234-969c-46ce-9c8c-7eb8f41867dc-metrics-certs\") pod \"router-default-5444994796-fkwgq\" (UID: \"61823234-969c-46ce-9c8c-7eb8f41867dc\") " pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315262 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjwvw\" (UniqueName: \"kubernetes.io/projected/eae96488-906d-423e-ad79-b4448ef8ad58-kube-api-access-sjwvw\") pod \"route-controller-manager-6576b87f9c-7sb7t\" (UID: \"eae96488-906d-423e-ad79-b4448ef8ad58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315278 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-serving-cert\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315294 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9fc2a7f0-7ceb-465a-8511-a7e3e665c563-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4tv62\" (UID: \"9fc2a7f0-7ceb-465a-8511-a7e3e665c563\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" Oct 08 06:36:39 crc 
kubenswrapper[4958]: I1008 06:36:39.315312 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dks2m\" (UniqueName: \"kubernetes.io/projected/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-kube-api-access-dks2m\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315335 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5d6bdd-539f-4a63-b52c-ae0cfa92703f-config\") pod \"kube-controller-manager-operator-78b949d7b-ktklm\" (UID: \"2b5d6bdd-539f-4a63-b52c-ae0cfa92703f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315357 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs98d\" (UniqueName: \"kubernetes.io/projected/eb229a90-087c-4c24-a82c-863a6b64b5b3-kube-api-access-cs98d\") pod \"migrator-59844c95c7-lhsmn\" (UID: \"eb229a90-087c-4c24-a82c-863a6b64b5b3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lhsmn" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315380 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6pv4\" (UniqueName: \"kubernetes.io/projected/40a06246-a0bb-4a09-b184-9b1bca1610b2-kube-api-access-b6pv4\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315408 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qltnm\" (UniqueName: \"kubernetes.io/projected/9fc2a7f0-7ceb-465a-8511-a7e3e665c563-kube-api-access-qltnm\") pod 
\"cluster-image-registry-operator-dc59b4c8b-4tv62\" (UID: \"9fc2a7f0-7ceb-465a-8511-a7e3e665c563\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315423 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86b11ac2-2a27-4859-b9b2-595d2ca7556d-config-volume\") pod \"collect-profiles-29331750-99jww\" (UID: \"86b11ac2-2a27-4859-b9b2-595d2ca7556d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315448 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f95d9cb-2319-44b8-bf36-58afbed80602-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jrx6t\" (UID: \"5f95d9cb-2319-44b8-bf36-58afbed80602\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315465 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98aabd8a-9cfc-4977-b42c-990fc3894455-srv-cert\") pod \"catalog-operator-68c6474976-8scdv\" (UID: \"98aabd8a-9cfc-4977-b42c-990fc3894455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315481 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72xj9\" (UniqueName: \"kubernetes.io/projected/87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4-kube-api-access-72xj9\") pod \"service-ca-operator-777779d784-tbtkv\" (UID: \"87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 
06:36:39.315499 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4br6\" (UniqueName: \"kubernetes.io/projected/7e88be94-7d66-4265-88ec-89278ebaabb7-kube-api-access-z4br6\") pod \"machine-config-controller-84d6567774-kjkv4\" (UID: \"7e88be94-7d66-4265-88ec-89278ebaabb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315515 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/61823234-969c-46ce-9c8c-7eb8f41867dc-default-certificate\") pod \"router-default-5444994796-fkwgq\" (UID: \"61823234-969c-46ce-9c8c-7eb8f41867dc\") " pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315533 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9fc2a7f0-7ceb-465a-8511-a7e3e665c563-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4tv62\" (UID: \"9fc2a7f0-7ceb-465a-8511-a7e3e665c563\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315549 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e38984df-4074-4d74-9a5e-fada1d1d8b1f-config\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315565 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eae96488-906d-423e-ad79-b4448ef8ad58-client-ca\") pod \"route-controller-manager-6576b87f9c-7sb7t\" (UID: 
\"eae96488-906d-423e-ad79-b4448ef8ad58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315609 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d8a63239-acb7-4fc9-8f62-ffa55f261901-registry-certificates\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315634 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eae96488-906d-423e-ad79-b4448ef8ad58-serving-cert\") pod \"route-controller-manager-6576b87f9c-7sb7t\" (UID: \"eae96488-906d-423e-ad79-b4448ef8ad58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315660 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz9nb\" (UniqueName: \"kubernetes.io/projected/ed1e22c8-bc14-4e16-b349-442d188ac881-kube-api-access-tz9nb\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315685 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-config\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315702 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv5fh\" (UniqueName: 
\"kubernetes.io/projected/b3e7ce87-89d1-4be8-801d-88af586408e8-kube-api-access-pv5fh\") pod \"machine-config-server-5tfs7\" (UID: \"b3e7ce87-89d1-4be8-801d-88af586408e8\") " pod="openshift-machine-config-operator/machine-config-server-5tfs7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315732 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vgww\" (UniqueName: \"kubernetes.io/projected/61823234-969c-46ce-9c8c-7eb8f41867dc-kube-api-access-4vgww\") pod \"router-default-5444994796-fkwgq\" (UID: \"61823234-969c-46ce-9c8c-7eb8f41867dc\") " pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315751 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315768 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5fd0f38b-dcb2-4039-b4b7-bb02593b1aec-metrics-tls\") pod \"dns-operator-744455d44c-wjnvk\" (UID: \"5fd0f38b-dcb2-4039-b4b7-bb02593b1aec\") " pod="openshift-dns-operator/dns-operator-744455d44c-wjnvk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315798 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 
06:36:39.315814 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315830 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dlbl\" (UniqueName: \"kubernetes.io/projected/e38984df-4074-4d74-9a5e-fada1d1d8b1f-kube-api-access-6dlbl\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315845 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61823234-969c-46ce-9c8c-7eb8f41867dc-service-ca-bundle\") pod \"router-default-5444994796-fkwgq\" (UID: \"61823234-969c-46ce-9c8c-7eb8f41867dc\") " pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315862 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/40a06246-a0bb-4a09-b184-9b1bca1610b2-mountpoint-dir\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.315879 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed1e22c8-bc14-4e16-b349-442d188ac881-audit-dir\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.316835 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae96488-906d-423e-ad79-b4448ef8ad58-config\") pod \"route-controller-manager-6576b87f9c-7sb7t\" (UID: \"eae96488-906d-423e-ad79-b4448ef8ad58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318299 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6njg7\" (UniqueName: \"kubernetes.io/projected/582542ee-c2ad-4058-830e-dda091d4507a-kube-api-access-6njg7\") pod \"control-plane-machine-set-operator-78cbb6b69f-9dwhm\" (UID: \"582542ee-c2ad-4058-830e-dda091d4507a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9dwhm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318346 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-swm4r\" (UID: \"f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318365 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/79b44209-418b-49d3-8a0f-cb90d8928373-metrics-tls\") pod \"dns-default-wx8p8\" (UID: \"79b44209-418b-49d3-8a0f-cb90d8928373\") " pod="openshift-dns/dns-default-wx8p8" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318381 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/b3e7ce87-89d1-4be8-801d-88af586408e8-node-bootstrap-token\") pod \"machine-config-server-5tfs7\" (UID: \"b3e7ce87-89d1-4be8-801d-88af586408e8\") " pod="openshift-machine-config-operator/machine-config-server-5tfs7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318403 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80b0b54-e40c-405a-95be-814ced6e11f4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-47jrl\" (UID: \"e80b0b54-e40c-405a-95be-814ced6e11f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318421 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/98aabd8a-9cfc-4977-b42c-990fc3894455-profile-collector-cert\") pod \"catalog-operator-68c6474976-8scdv\" (UID: \"98aabd8a-9cfc-4977-b42c-990fc3894455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318439 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318460 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-oauth-serving-cert\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 
06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318475 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86b11ac2-2a27-4859-b9b2-595d2ca7556d-secret-volume\") pod \"collect-profiles-29331750-99jww\" (UID: \"86b11ac2-2a27-4859-b9b2-595d2ca7556d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318492 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e88be94-7d66-4265-88ec-89278ebaabb7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kjkv4\" (UID: \"7e88be94-7d66-4265-88ec-89278ebaabb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318508 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/40a06246-a0bb-4a09-b184-9b1bca1610b2-registration-dir\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318524 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4-config\") pod \"service-ca-operator-777779d784-tbtkv\" (UID: \"87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318548 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8a63239-acb7-4fc9-8f62-ffa55f261901-trusted-ca\") pod 
\"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318567 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e38984df-4074-4d74-9a5e-fada1d1d8b1f-etcd-ca\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318587 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/582542ee-c2ad-4058-830e-dda091d4507a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9dwhm\" (UID: \"582542ee-c2ad-4058-830e-dda091d4507a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9dwhm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318603 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/40a06246-a0bb-4a09-b184-9b1bca1610b2-socket-dir\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318649 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7e88be94-7d66-4265-88ec-89278ebaabb7-proxy-tls\") pod \"machine-config-controller-84d6567774-kjkv4\" (UID: \"7e88be94-7d66-4265-88ec-89278ebaabb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318708 4958 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"machine-api-operator-images" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.318797 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319181 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319240 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-trusted-ca-bundle\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319283 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqnjn\" (UniqueName: \"kubernetes.io/projected/86b11ac2-2a27-4859-b9b2-595d2ca7556d-kube-api-access-lqnjn\") pod \"collect-profiles-29331750-99jww\" (UID: \"86b11ac2-2a27-4859-b9b2-595d2ca7556d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319313 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e80b0b54-e40c-405a-95be-814ced6e11f4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-47jrl\" (UID: \"e80b0b54-e40c-405a-95be-814ced6e11f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319359 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-bound-sa-token\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: E1008 06:36:39.319384 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:39.81937683 +0000 UTC m=+142.949069431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319403 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319422 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/40a06246-a0bb-4a09-b184-9b1bca1610b2-csi-data-dir\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319443 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f95d9cb-2319-44b8-bf36-58afbed80602-config\") pod \"kube-apiserver-operator-766d6c64bb-jrx6t\" (UID: \"5f95d9cb-2319-44b8-bf36-58afbed80602\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319488 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-registry-tls\") 
pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319507 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e38984df-4074-4d74-9a5e-fada1d1d8b1f-etcd-client\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319526 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lzcl\" (UniqueName: \"kubernetes.io/projected/9ecaafcc-322f-4167-8757-e6a29dcca078-kube-api-access-5lzcl\") pod \"service-ca-9c57cc56f-n4pr7\" (UID: \"9ecaafcc-322f-4167-8757-e6a29dcca078\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4pr7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319548 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9fc2a7f0-7ceb-465a-8511-a7e3e665c563-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4tv62\" (UID: \"9fc2a7f0-7ceb-465a-8511-a7e3e665c563\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319568 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9ecaafcc-322f-4167-8757-e6a29dcca078-signing-key\") pod \"service-ca-9c57cc56f-n4pr7\" (UID: \"9ecaafcc-322f-4167-8757-e6a29dcca078\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4pr7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319587 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/c3628713-9b8f-4bba-8e6e-40803fc6f21b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w7xcj\" (UID: \"c3628713-9b8f-4bba-8e6e-40803fc6f21b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319604 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3628713-9b8f-4bba-8e6e-40803fc6f21b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w7xcj\" (UID: \"c3628713-9b8f-4bba-8e6e-40803fc6f21b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319653 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e38984df-4074-4d74-9a5e-fada1d1d8b1f-serving-cert\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319673 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j8g8\" (UniqueName: \"kubernetes.io/projected/79b44209-418b-49d3-8a0f-cb90d8928373-kube-api-access-9j8g8\") pod \"dns-default-wx8p8\" (UID: \"79b44209-418b-49d3-8a0f-cb90d8928373\") " pod="openshift-dns/dns-default-wx8p8" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319715 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4-serving-cert\") pod \"service-ca-operator-777779d784-tbtkv\" (UID: \"87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319732 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319749 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/40a06246-a0bb-4a09-b184-9b1bca1610b2-plugins-dir\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319776 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgnhf\" (UniqueName: \"kubernetes.io/projected/6cfa25cf-ba2d-447c-ba71-685d975786df-kube-api-access-fgnhf\") pod \"ingress-canary-xsfmh\" (UID: \"6cfa25cf-ba2d-447c-ba71-685d975786df\") " pod="openshift-ingress-canary/ingress-canary-xsfmh" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319796 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b5d6bdd-539f-4a63-b52c-ae0cfa92703f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ktklm\" (UID: \"2b5d6bdd-539f-4a63-b52c-ae0cfa92703f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319812 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmg5j\" (UniqueName: 
\"kubernetes.io/projected/98aabd8a-9cfc-4977-b42c-990fc3894455-kube-api-access-pmg5j\") pod \"catalog-operator-68c6474976-8scdv\" (UID: \"98aabd8a-9cfc-4977-b42c-990fc3894455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319829 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f95d9cb-2319-44b8-bf36-58afbed80602-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jrx6t\" (UID: \"5f95d9cb-2319-44b8-bf36-58afbed80602\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319848 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c836c0b9-6574-4e15-b137-801770985326-config\") pod \"console-operator-58897d9998-982gr\" (UID: \"c836c0b9-6574-4e15-b137-801770985326\") " pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319863 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7-srv-cert\") pod \"olm-operator-6b444d44fb-swm4r\" (UID: \"f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319879 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6cfa25cf-ba2d-447c-ba71-685d975786df-cert\") pod \"ingress-canary-xsfmh\" (UID: \"6cfa25cf-ba2d-447c-ba71-685d975786df\") " pod="openshift-ingress-canary/ingress-canary-xsfmh" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319904 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-audit-policies\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319921 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-oauth-config\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319937 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3628713-9b8f-4bba-8e6e-40803fc6f21b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w7xcj\" (UID: \"c3628713-9b8f-4bba-8e6e-40803fc6f21b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319969 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d8a63239-acb7-4fc9-8f62-ffa55f261901-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.319986 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b5d6bdd-539f-4a63-b52c-ae0cfa92703f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ktklm\" (UID: \"2b5d6bdd-539f-4a63-b52c-ae0cfa92703f\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.320002 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg62c\" (UniqueName: \"kubernetes.io/projected/5fd0f38b-dcb2-4039-b4b7-bb02593b1aec-kube-api-access-vg62c\") pod \"dns-operator-744455d44c-wjnvk\" (UID: \"5fd0f38b-dcb2-4039-b4b7-bb02593b1aec\") " pod="openshift-dns-operator/dns-operator-744455d44c-wjnvk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.320024 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-service-ca\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.320062 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr2jv\" (UniqueName: \"kubernetes.io/projected/e80b0b54-e40c-405a-95be-814ced6e11f4-kube-api-access-zr2jv\") pod \"kube-storage-version-migrator-operator-b67b599dd-47jrl\" (UID: \"e80b0b54-e40c-405a-95be-814ced6e11f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.322058 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8a63239-acb7-4fc9-8f62-ffa55f261901-trusted-ca\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.324496 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/7e88be94-7d66-4265-88ec-89278ebaabb7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kjkv4\" (UID: \"7e88be94-7d66-4265-88ec-89278ebaabb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.324881 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eae96488-906d-423e-ad79-b4448ef8ad58-client-ca\") pod \"route-controller-manager-6576b87f9c-7sb7t\" (UID: \"eae96488-906d-423e-ad79-b4448ef8ad58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.324965 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c836c0b9-6574-4e15-b137-801770985326-config\") pod \"console-operator-58897d9998-982gr\" (UID: \"c836c0b9-6574-4e15-b137-801770985326\") " pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.325632 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft"] Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.326131 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d8a63239-acb7-4fc9-8f62-ffa55f261901-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.326333 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c836c0b9-6574-4e15-b137-801770985326-trusted-ca\") pod \"console-operator-58897d9998-982gr\" (UID: 
\"c836c0b9-6574-4e15-b137-801770985326\") " pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.326447 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61823234-969c-46ce-9c8c-7eb8f41867dc-service-ca-bundle\") pod \"router-default-5444994796-fkwgq\" (UID: \"61823234-969c-46ce-9c8c-7eb8f41867dc\") " pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.326549 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed1e22c8-bc14-4e16-b349-442d188ac881-audit-dir\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.326935 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.327330 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.328351 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-audit-policies\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.328585 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e38984df-4074-4d74-9a5e-fada1d1d8b1f-etcd-service-ca\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.328745 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98aabd8a-9cfc-4977-b42c-990fc3894455-srv-cert\") pod \"catalog-operator-68c6474976-8scdv\" (UID: \"98aabd8a-9cfc-4977-b42c-990fc3894455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.329905 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-registry-tls\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.329999 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-images\") pod \"machine-api-operator-5694c8668f-clxh5\" (UID: \"6c04fc7e-cc6d-4d08-99cc-752b31cb5110\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.330080 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9fc2a7f0-7ceb-465a-8511-a7e3e665c563-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4tv62\" (UID: \"9fc2a7f0-7ceb-465a-8511-a7e3e665c563\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.330265 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e38984df-4074-4d74-9a5e-fada1d1d8b1f-etcd-ca\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.330829 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e38984df-4074-4d74-9a5e-fada1d1d8b1f-config\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.331333 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/61823234-969c-46ce-9c8c-7eb8f41867dc-stats-auth\") pod \"router-default-5444994796-fkwgq\" (UID: \"61823234-969c-46ce-9c8c-7eb8f41867dc\") " pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.331617 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5d6bdd-539f-4a63-b52c-ae0cfa92703f-config\") pod \"kube-controller-manager-operator-78b949d7b-ktklm\" (UID: \"2b5d6bdd-539f-4a63-b52c-ae0cfa92703f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.332458 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c836c0b9-6574-4e15-b137-801770985326-serving-cert\") pod \"console-operator-58897d9998-982gr\" (UID: \"c836c0b9-6574-4e15-b137-801770985326\") " pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.332512 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5fd0f38b-dcb2-4039-b4b7-bb02593b1aec-metrics-tls\") pod \"dns-operator-744455d44c-wjnvk\" (UID: \"5fd0f38b-dcb2-4039-b4b7-bb02593b1aec\") " pod="openshift-dns-operator/dns-operator-744455d44c-wjnvk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.332515 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9fc2a7f0-7ceb-465a-8511-a7e3e665c563-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4tv62\" (UID: \"9fc2a7f0-7ceb-465a-8511-a7e3e665c563\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.332793 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e38984df-4074-4d74-9a5e-fada1d1d8b1f-serving-cert\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.333464 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eae96488-906d-423e-ad79-b4448ef8ad58-serving-cert\") pod \"route-controller-manager-6576b87f9c-7sb7t\" (UID: \"eae96488-906d-423e-ad79-b4448ef8ad58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.334062 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e38984df-4074-4d74-9a5e-fada1d1d8b1f-etcd-client\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.334255 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d8a63239-acb7-4fc9-8f62-ffa55f261901-registry-certificates\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.334664 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-swm4r\" (UID: \"f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.334809 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/98aabd8a-9cfc-4977-b42c-990fc3894455-profile-collector-cert\") pod \"catalog-operator-68c6474976-8scdv\" (UID: \"98aabd8a-9cfc-4977-b42c-990fc3894455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.335082 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d8a63239-acb7-4fc9-8f62-ffa55f261901-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.335314 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7e88be94-7d66-4265-88ec-89278ebaabb7-proxy-tls\") pod \"machine-config-controller-84d6567774-kjkv4\" (UID: \"7e88be94-7d66-4265-88ec-89278ebaabb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.335324 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/582542ee-c2ad-4058-830e-dda091d4507a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9dwhm\" (UID: \"582542ee-c2ad-4058-830e-dda091d4507a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9dwhm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.335899 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.336556 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61823234-969c-46ce-9c8c-7eb8f41867dc-metrics-certs\") pod \"router-default-5444994796-fkwgq\" (UID: \"61823234-969c-46ce-9c8c-7eb8f41867dc\") " pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.336561 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-oauth-serving-cert\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.337164 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.337633 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/61823234-969c-46ce-9c8c-7eb8f41867dc-default-certificate\") pod \"router-default-5444994796-fkwgq\" (UID: \"61823234-969c-46ce-9c8c-7eb8f41867dc\") " pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.338307 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.338817 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.339091 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7-srv-cert\") pod \"olm-operator-6b444d44fb-swm4r\" (UID: \"f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.339214 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-trusted-ca-bundle\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.339274 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.339330 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.339815 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b5d6bdd-539f-4a63-b52c-ae0cfa92703f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ktklm\" (UID: \"2b5d6bdd-539f-4a63-b52c-ae0cfa92703f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.339816 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.339933 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.343466 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-config\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.348365 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-oauth-config\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.348367 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-service-ca\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.348931 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: W1008 06:36:39.357387 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19fe21c7_7d44_447e_808d_3e929bbbb3a8.slice/crio-808ace79a01366f9415fc52d0bd80afe03bddff0679037ef6fb47ecbaff0a6a9 WatchSource:0}: Error finding container 808ace79a01366f9415fc52d0bd80afe03bddff0679037ef6fb47ecbaff0a6a9: Status 404 returned error can't find the container with id 808ace79a01366f9415fc52d0bd80afe03bddff0679037ef6fb47ecbaff0a6a9 Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.357638 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.366617 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-serving-cert\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.369596 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/552f0de0-1991-4a6c-8a8c-f44fced1f07f-serving-cert\") pod \"apiserver-76f77b778f-qk24s\" (UID: \"552f0de0-1991-4a6c-8a8c-f44fced1f07f\") " pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.376146 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.385120 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sznnw\" (UniqueName: \"kubernetes.io/projected/6c04fc7e-cc6d-4d08-99cc-752b31cb5110-kube-api-access-sznnw\") pod \"machine-api-operator-5694c8668f-clxh5\" (UID: \"6c04fc7e-cc6d-4d08-99cc-752b31cb5110\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.390127 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5"] Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.404684 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" event={"ID":"4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b","Type":"ContainerStarted","Data":"5874ff0b51f7b027903d94db4f4fbe092b2e70b4c0558c7ac47469036b155b9e"} Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.404725 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" event={"ID":"4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b","Type":"ContainerStarted","Data":"ad72fb0bace78c886049003f95729394f58b96c50bb3c65738dfc034ad63f6cb"} Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.407467 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc"] Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.415162 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" event={"ID":"b964df0d-9992-4be4-900f-68a7b74bceef","Type":"ContainerStarted","Data":"6e068da86377f4b958fdb7fcf05a17fcded6690783d2523bd1d0ed341fee2ad4"} Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.415198 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" 
event={"ID":"b964df0d-9992-4be4-900f-68a7b74bceef","Type":"ContainerStarted","Data":"7137291a9d436ba7bfcb3f49a1a066865a8fa4480ad077201b08a5ae9496ff9d"} Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.416076 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.419345 4958 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xkpn4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.419393 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" podUID="b964df0d-9992-4be4-900f-68a7b74bceef" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.420585 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g22b6" event={"ID":"f18c448b-c665-482d-aa20-387343546e7c","Type":"ContainerStarted","Data":"f0683d3e5be9b2d15095f676c93d501b0bf98cc8500aa67cacfab4550c68087a"} Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.423197 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.423657 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/835c2c67-ae1f-49bb-9d70-09bf1ac25885-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kmks4\" (UID: \"835c2c67-ae1f-49bb-9d70-09bf1ac25885\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.423706 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9ecaafcc-322f-4167-8757-e6a29dcca078-signing-cabundle\") pod \"service-ca-9c57cc56f-n4pr7\" (UID: \"9ecaafcc-322f-4167-8757-e6a29dcca078\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4pr7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.423726 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79b44209-418b-49d3-8a0f-cb90d8928373-config-volume\") pod \"dns-default-wx8p8\" (UID: \"79b44209-418b-49d3-8a0f-cb90d8928373\") " pod="openshift-dns/dns-default-wx8p8" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.423769 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6pv4\" (UniqueName: \"kubernetes.io/projected/40a06246-a0bb-4a09-b184-9b1bca1610b2-kube-api-access-b6pv4\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.423797 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86b11ac2-2a27-4859-b9b2-595d2ca7556d-config-volume\") pod \"collect-profiles-29331750-99jww\" (UID: \"86b11ac2-2a27-4859-b9b2-595d2ca7556d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.423812 
4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f95d9cb-2319-44b8-bf36-58afbed80602-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jrx6t\" (UID: \"5f95d9cb-2319-44b8-bf36-58afbed80602\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.423828 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72xj9\" (UniqueName: \"kubernetes.io/projected/87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4-kube-api-access-72xj9\") pod \"service-ca-operator-777779d784-tbtkv\" (UID: \"87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.423872 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv5fh\" (UniqueName: \"kubernetes.io/projected/b3e7ce87-89d1-4be8-801d-88af586408e8-kube-api-access-pv5fh\") pod \"machine-config-server-5tfs7\" (UID: \"b3e7ce87-89d1-4be8-801d-88af586408e8\") " pod="openshift-machine-config-operator/machine-config-server-5tfs7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.423927 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/40a06246-a0bb-4a09-b184-9b1bca1610b2-mountpoint-dir\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.423967 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/79b44209-418b-49d3-8a0f-cb90d8928373-metrics-tls\") pod \"dns-default-wx8p8\" (UID: \"79b44209-418b-49d3-8a0f-cb90d8928373\") " pod="openshift-dns/dns-default-wx8p8" Oct 08 
06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.423982 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b3e7ce87-89d1-4be8-801d-88af586408e8-node-bootstrap-token\") pod \"machine-config-server-5tfs7\" (UID: \"b3e7ce87-89d1-4be8-801d-88af586408e8\") " pod="openshift-machine-config-operator/machine-config-server-5tfs7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424000 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80b0b54-e40c-405a-95be-814ced6e11f4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-47jrl\" (UID: \"e80b0b54-e40c-405a-95be-814ced6e11f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424019 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86b11ac2-2a27-4859-b9b2-595d2ca7556d-secret-volume\") pod \"collect-profiles-29331750-99jww\" (UID: \"86b11ac2-2a27-4859-b9b2-595d2ca7556d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424037 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/40a06246-a0bb-4a09-b184-9b1bca1610b2-registration-dir\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424051 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4-config\") pod \"service-ca-operator-777779d784-tbtkv\" (UID: 
\"87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424102 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/40a06246-a0bb-4a09-b184-9b1bca1610b2-socket-dir\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424122 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e80b0b54-e40c-405a-95be-814ced6e11f4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-47jrl\" (UID: \"e80b0b54-e40c-405a-95be-814ced6e11f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424147 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqnjn\" (UniqueName: \"kubernetes.io/projected/86b11ac2-2a27-4859-b9b2-595d2ca7556d-kube-api-access-lqnjn\") pod \"collect-profiles-29331750-99jww\" (UID: \"86b11ac2-2a27-4859-b9b2-595d2ca7556d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424177 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/40a06246-a0bb-4a09-b184-9b1bca1610b2-csi-data-dir\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424201 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5f95d9cb-2319-44b8-bf36-58afbed80602-config\") pod \"kube-apiserver-operator-766d6c64bb-jrx6t\" (UID: \"5f95d9cb-2319-44b8-bf36-58afbed80602\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424226 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lzcl\" (UniqueName: \"kubernetes.io/projected/9ecaafcc-322f-4167-8757-e6a29dcca078-kube-api-access-5lzcl\") pod \"service-ca-9c57cc56f-n4pr7\" (UID: \"9ecaafcc-322f-4167-8757-e6a29dcca078\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4pr7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424257 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9ecaafcc-322f-4167-8757-e6a29dcca078-signing-key\") pod \"service-ca-9c57cc56f-n4pr7\" (UID: \"9ecaafcc-322f-4167-8757-e6a29dcca078\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4pr7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424294 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3628713-9b8f-4bba-8e6e-40803fc6f21b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w7xcj\" (UID: \"c3628713-9b8f-4bba-8e6e-40803fc6f21b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424319 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3628713-9b8f-4bba-8e6e-40803fc6f21b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w7xcj\" (UID: \"c3628713-9b8f-4bba-8e6e-40803fc6f21b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 
06:36:39.424342 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j8g8\" (UniqueName: \"kubernetes.io/projected/79b44209-418b-49d3-8a0f-cb90d8928373-kube-api-access-9j8g8\") pod \"dns-default-wx8p8\" (UID: \"79b44209-418b-49d3-8a0f-cb90d8928373\") " pod="openshift-dns/dns-default-wx8p8" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424374 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4-serving-cert\") pod \"service-ca-operator-777779d784-tbtkv\" (UID: \"87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424397 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/40a06246-a0bb-4a09-b184-9b1bca1610b2-plugins-dir\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424418 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgnhf\" (UniqueName: \"kubernetes.io/projected/6cfa25cf-ba2d-447c-ba71-685d975786df-kube-api-access-fgnhf\") pod \"ingress-canary-xsfmh\" (UID: \"6cfa25cf-ba2d-447c-ba71-685d975786df\") " pod="openshift-ingress-canary/ingress-canary-xsfmh" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424472 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f95d9cb-2319-44b8-bf36-58afbed80602-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jrx6t\" (UID: \"5f95d9cb-2319-44b8-bf36-58afbed80602\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t" Oct 08 06:36:39 
crc kubenswrapper[4958]: I1008 06:36:39.424498 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6cfa25cf-ba2d-447c-ba71-685d975786df-cert\") pod \"ingress-canary-xsfmh\" (UID: \"6cfa25cf-ba2d-447c-ba71-685d975786df\") " pod="openshift-ingress-canary/ingress-canary-xsfmh" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424523 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3628713-9b8f-4bba-8e6e-40803fc6f21b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w7xcj\" (UID: \"c3628713-9b8f-4bba-8e6e-40803fc6f21b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424552 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2jv\" (UniqueName: \"kubernetes.io/projected/e80b0b54-e40c-405a-95be-814ced6e11f4-kube-api-access-zr2jv\") pod \"kube-storage-version-migrator-operator-b67b599dd-47jrl\" (UID: \"e80b0b54-e40c-405a-95be-814ced6e11f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424569 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xblb2\" (UniqueName: \"kubernetes.io/projected/835c2c67-ae1f-49bb-9d70-09bf1ac25885-kube-api-access-xblb2\") pod \"package-server-manager-789f6589d5-kmks4\" (UID: \"835c2c67-ae1f-49bb-9d70-09bf1ac25885\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.424595 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b3e7ce87-89d1-4be8-801d-88af586408e8-certs\") pod 
\"machine-config-server-5tfs7\" (UID: \"b3e7ce87-89d1-4be8-801d-88af586408e8\") " pod="openshift-machine-config-operator/machine-config-server-5tfs7" Oct 08 06:36:39 crc kubenswrapper[4958]: E1008 06:36:39.425312 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:39.925294791 +0000 UTC m=+143.054987392 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.426434 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9ecaafcc-322f-4167-8757-e6a29dcca078-signing-cabundle\") pod \"service-ca-9c57cc56f-n4pr7\" (UID: \"9ecaafcc-322f-4167-8757-e6a29dcca078\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4pr7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.426665 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/40a06246-a0bb-4a09-b184-9b1bca1610b2-csi-data-dir\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.426884 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/40a06246-a0bb-4a09-b184-9b1bca1610b2-plugins-dir\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.427480 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79b44209-418b-49d3-8a0f-cb90d8928373-config-volume\") pod \"dns-default-wx8p8\" (UID: \"79b44209-418b-49d3-8a0f-cb90d8928373\") " pod="openshift-dns/dns-default-wx8p8" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.427525 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86b11ac2-2a27-4859-b9b2-595d2ca7556d-config-volume\") pod \"collect-profiles-29331750-99jww\" (UID: \"86b11ac2-2a27-4859-b9b2-595d2ca7556d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.427555 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/40a06246-a0bb-4a09-b184-9b1bca1610b2-registration-dir\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.427853 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f95d9cb-2319-44b8-bf36-58afbed80602-config\") pod \"kube-apiserver-operator-766d6c64bb-jrx6t\" (UID: \"5f95d9cb-2319-44b8-bf36-58afbed80602\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.428068 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/40a06246-a0bb-4a09-b184-9b1bca1610b2-socket-dir\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.428169 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80b0b54-e40c-405a-95be-814ced6e11f4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-47jrl\" (UID: \"e80b0b54-e40c-405a-95be-814ced6e11f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.428317 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/40a06246-a0bb-4a09-b184-9b1bca1610b2-mountpoint-dir\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.429335 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" event={"ID":"19fe21c7-7d44-447e-808d-3e929bbbb3a8","Type":"ContainerStarted","Data":"808ace79a01366f9415fc52d0bd80afe03bddff0679037ef6fb47ecbaff0a6a9"} Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.429859 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4-config\") pod \"service-ca-operator-777779d784-tbtkv\" (UID: \"87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.430443 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-527rf\" (UniqueName: 
\"kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-kube-api-access-527rf\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.430674 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3628713-9b8f-4bba-8e6e-40803fc6f21b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w7xcj\" (UID: \"c3628713-9b8f-4bba-8e6e-40803fc6f21b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.430845 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3628713-9b8f-4bba-8e6e-40803fc6f21b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w7xcj\" (UID: \"c3628713-9b8f-4bba-8e6e-40803fc6f21b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.431436 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9ecaafcc-322f-4167-8757-e6a29dcca078-signing-key\") pod \"service-ca-9c57cc56f-n4pr7\" (UID: \"9ecaafcc-322f-4167-8757-e6a29dcca078\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4pr7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.431613 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b3e7ce87-89d1-4be8-801d-88af586408e8-certs\") pod \"machine-config-server-5tfs7\" (UID: \"b3e7ce87-89d1-4be8-801d-88af586408e8\") " pod="openshift-machine-config-operator/machine-config-server-5tfs7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.432022 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4-serving-cert\") pod \"service-ca-operator-777779d784-tbtkv\" (UID: \"87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.432044 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b3e7ce87-89d1-4be8-801d-88af586408e8-node-bootstrap-token\") pod \"machine-config-server-5tfs7\" (UID: \"b3e7ce87-89d1-4be8-801d-88af586408e8\") " pod="openshift-machine-config-operator/machine-config-server-5tfs7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.432805 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" event={"ID":"0bf6b0ee-ce56-4b87-92ea-1eb785eec586","Type":"ContainerStarted","Data":"174c5affb65620547b314c1885860c4fe56071ffe3b6f707f1abe68a4f9550f1"} Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.432832 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" event={"ID":"0bf6b0ee-ce56-4b87-92ea-1eb785eec586","Type":"ContainerStarted","Data":"2fd59212e4f06d9d7b938f15a6a754f78e65df5c8ff2cb5e95228e8e6156967f"} Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.433618 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86b11ac2-2a27-4859-b9b2-595d2ca7556d-secret-volume\") pod \"collect-profiles-29331750-99jww\" (UID: \"86b11ac2-2a27-4859-b9b2-595d2ca7556d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.434443 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/835c2c67-ae1f-49bb-9d70-09bf1ac25885-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kmks4\" (UID: \"835c2c67-ae1f-49bb-9d70-09bf1ac25885\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.435108 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" event={"ID":"25a337ec-7967-4941-b3b8-d78ef7e1eab1","Type":"ContainerStarted","Data":"b97ca8cfe0203d659a7fb1342667e2ccd9a70cde871b5d72c43455e8f56bc4cb"} Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.435868 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/79b44209-418b-49d3-8a0f-cb90d8928373-metrics-tls\") pod \"dns-default-wx8p8\" (UID: \"79b44209-418b-49d3-8a0f-cb90d8928373\") " pod="openshift-dns/dns-default-wx8p8" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.436879 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e80b0b54-e40c-405a-95be-814ced6e11f4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-47jrl\" (UID: \"e80b0b54-e40c-405a-95be-814ced6e11f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.443253 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6cfa25cf-ba2d-447c-ba71-685d975786df-cert\") pod \"ingress-canary-xsfmh\" (UID: \"6cfa25cf-ba2d-447c-ba71-685d975786df\") " pod="openshift-ingress-canary/ingress-canary-xsfmh" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.449762 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5f95d9cb-2319-44b8-bf36-58afbed80602-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jrx6t\" (UID: \"5f95d9cb-2319-44b8-bf36-58afbed80602\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.451369 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qltnm\" (UniqueName: \"kubernetes.io/projected/9fc2a7f0-7ceb-465a-8511-a7e3e665c563-kube-api-access-qltnm\") pod \"cluster-image-registry-operator-dc59b4c8b-4tv62\" (UID: \"9fc2a7f0-7ceb-465a-8511-a7e3e665c563\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.468447 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f"] Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.475470 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-bound-sa-token\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.489484 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s"] Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.493159 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c"] Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.496796 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw84p\" (UniqueName: \"kubernetes.io/projected/f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7-kube-api-access-nw84p\") pod \"olm-operator-6b444d44fb-swm4r\" 
(UID: \"f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.506542 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wj4ql"] Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.511616 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmg5j\" (UniqueName: \"kubernetes.io/projected/98aabd8a-9cfc-4977-b42c-990fc3894455-kube-api-access-pmg5j\") pod \"catalog-operator-68c6474976-8scdv\" (UID: \"98aabd8a-9cfc-4977-b42c-990fc3894455\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" Oct 08 06:36:39 crc kubenswrapper[4958]: W1008 06:36:39.513091 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6d60dee_41d3_4b20_99ea_da1db49f2548.slice/crio-4108d7b948b20b71a66e099bc8daeb896d0ebb6b4f3db08e7146c97a1686a329 WatchSource:0}: Error finding container 4108d7b948b20b71a66e099bc8daeb896d0ebb6b4f3db08e7146c97a1686a329: Status 404 returned error can't find the container with id 4108d7b948b20b71a66e099bc8daeb896d0ebb6b4f3db08e7146c97a1686a329 Oct 08 06:36:39 crc kubenswrapper[4958]: W1008 06:36:39.513354 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode76ccf7f_2deb_4e19_9fdc_1f0169c2aa02.slice/crio-045e5d5d88785ebac12c62c809d545d7922f3192882c3a04be551e452ba58ad0 WatchSource:0}: Error finding container 045e5d5d88785ebac12c62c809d545d7922f3192882c3a04be551e452ba58ad0: Status 404 returned error can't find the container with id 045e5d5d88785ebac12c62c809d545d7922f3192882c3a04be551e452ba58ad0 Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.525297 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: E1008 06:36:39.525563 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:40.025553273 +0000 UTC m=+143.155245874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.535413 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs98d\" (UniqueName: \"kubernetes.io/projected/eb229a90-087c-4c24-a82c-863a6b64b5b3-kube-api-access-cs98d\") pod \"migrator-59844c95c7-lhsmn\" (UID: \"eb229a90-087c-4c24-a82c-863a6b64b5b3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lhsmn" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.549777 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4br6\" (UniqueName: \"kubernetes.io/projected/7e88be94-7d66-4265-88ec-89278ebaabb7-kube-api-access-z4br6\") pod \"machine-config-controller-84d6567774-kjkv4\" (UID: \"7e88be94-7d66-4265-88ec-89278ebaabb7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4" Oct 08 
06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.566534 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lhsmn" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.568518 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz9nb\" (UniqueName: \"kubernetes.io/projected/ed1e22c8-bc14-4e16-b349-442d188ac881-kube-api-access-tz9nb\") pod \"oauth-openshift-558db77b4-mhg9b\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.580333 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.594649 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dlbl\" (UniqueName: \"kubernetes.io/projected/e38984df-4074-4d74-9a5e-fada1d1d8b1f-kube-api-access-6dlbl\") pod \"etcd-operator-b45778765-7mwpk\" (UID: \"e38984df-4074-4d74-9a5e-fada1d1d8b1f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.616545 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cqh7v"] Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.621550 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9fc2a7f0-7ceb-465a-8511-a7e3e665c563-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4tv62\" (UID: \"9fc2a7f0-7ceb-465a-8511-a7e3e665c563\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.625863 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/downloads-7954f5f757-d22g5"] Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.626405 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.626437 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j"] Oct 08 06:36:39 crc kubenswrapper[4958]: E1008 06:36:39.626569 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:40.126538494 +0000 UTC m=+143.256231105 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.626904 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: E1008 06:36:39.627425 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:40.127415088 +0000 UTC m=+143.257107689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.629268 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.634379 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6njg7\" (UniqueName: \"kubernetes.io/projected/582542ee-c2ad-4058-830e-dda091d4507a-kube-api-access-6njg7\") pod \"control-plane-machine-set-operator-78cbb6b69f-9dwhm\" (UID: \"582542ee-c2ad-4058-830e-dda091d4507a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9dwhm" Oct 08 06:36:39 crc kubenswrapper[4958]: W1008 06:36:39.640499 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0381af03_9a83_4a43_ab4f_79cbd5d3351f.slice/crio-f6aedad1d301265d173147ad7ab9b8552e9b6d939f16c6a7b181fb59d376326f WatchSource:0}: Error finding container f6aedad1d301265d173147ad7ab9b8552e9b6d939f16c6a7b181fb59d376326f: Status 404 returned error can't find the container with id f6aedad1d301265d173147ad7ab9b8552e9b6d939f16c6a7b181fb59d376326f Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.652358 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.653202 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cdbg\" (UniqueName: \"kubernetes.io/projected/c836c0b9-6574-4e15-b137-801770985326-kube-api-access-2cdbg\") pod \"console-operator-58897d9998-982gr\" (UID: \"c836c0b9-6574-4e15-b137-801770985326\") " pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:39 crc kubenswrapper[4958]: W1008 06:36:39.653463 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod843bf41d_d355_41ba_8073_4a53421c09a6.slice/crio-aa8580d745fc86c6a212972e3aa12a7f20ac876046477038b35d49313e850b3c WatchSource:0}: Error finding container aa8580d745fc86c6a212972e3aa12a7f20ac876046477038b35d49313e850b3c: Status 404 returned error can't find the container with id aa8580d745fc86c6a212972e3aa12a7f20ac876046477038b35d49313e850b3c Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.654491 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.663298 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.666264 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.670117 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg62c\" (UniqueName: \"kubernetes.io/projected/5fd0f38b-dcb2-4039-b4b7-bb02593b1aec-kube-api-access-vg62c\") pod \"dns-operator-744455d44c-wjnvk\" (UID: \"5fd0f38b-dcb2-4039-b4b7-bb02593b1aec\") " pod="openshift-dns-operator/dns-operator-744455d44c-wjnvk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.674319 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wjnvk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.698900 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.699768 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vgww\" (UniqueName: \"kubernetes.io/projected/61823234-969c-46ce-9c8c-7eb8f41867dc-kube-api-access-4vgww\") pod \"router-default-5444994796-fkwgq\" (UID: \"61823234-969c-46ce-9c8c-7eb8f41867dc\") " pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.715499 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dks2m\" (UniqueName: \"kubernetes.io/projected/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-kube-api-access-dks2m\") pod \"console-f9d7485db-86rhc\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.715745 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.726451 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9dwhm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.727548 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:39 crc kubenswrapper[4958]: E1008 06:36:39.727936 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:40.22791885 +0000 UTC m=+143.357611451 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.730344 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjwvw\" (UniqueName: \"kubernetes.io/projected/eae96488-906d-423e-ad79-b4448ef8ad58-kube-api-access-sjwvw\") pod \"route-controller-manager-6576b87f9c-7sb7t\" (UID: \"eae96488-906d-423e-ad79-b4448ef8ad58\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.733441 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.742477 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.750477 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b5d6bdd-539f-4a63-b52c-ae0cfa92703f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ktklm\" (UID: \"2b5d6bdd-539f-4a63-b52c-ae0cfa92703f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.790421 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqnjn\" (UniqueName: \"kubernetes.io/projected/86b11ac2-2a27-4859-b9b2-595d2ca7556d-kube-api-access-lqnjn\") pod \"collect-profiles-29331750-99jww\" (UID: \"86b11ac2-2a27-4859-b9b2-595d2ca7556d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.807698 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lhsmn"] Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.808250 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.815580 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6pv4\" (UniqueName: \"kubernetes.io/projected/40a06246-a0bb-4a09-b184-9b1bca1610b2-kube-api-access-b6pv4\") pod \"csi-hostpathplugin-5kqn9\" (UID: \"40a06246-a0bb-4a09-b184-9b1bca1610b2\") " pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.829972 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:39 crc kubenswrapper[4958]: E1008 06:36:39.830349 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:40.330321265 +0000 UTC m=+143.460013866 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.832164 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mhg9b"] Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.838767 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xblb2\" (UniqueName: \"kubernetes.io/projected/835c2c67-ae1f-49bb-9d70-09bf1ac25885-kube-api-access-xblb2\") pod \"package-server-manager-789f6589d5-kmks4\" (UID: \"835c2c67-ae1f-49bb-9d70-09bf1ac25885\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.849647 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.853515 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr2jv\" (UniqueName: \"kubernetes.io/projected/e80b0b54-e40c-405a-95be-814ced6e11f4-kube-api-access-zr2jv\") pod \"kube-storage-version-migrator-operator-b67b599dd-47jrl\" (UID: \"e80b0b54-e40c-405a-95be-814ced6e11f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.861123 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.872298 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgnhf\" (UniqueName: \"kubernetes.io/projected/6cfa25cf-ba2d-447c-ba71-685d975786df-kube-api-access-fgnhf\") pod \"ingress-canary-xsfmh\" (UID: \"6cfa25cf-ba2d-447c-ba71-685d975786df\") " pod="openshift-ingress-canary/ingress-canary-xsfmh" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.906056 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qk24s"] Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.908915 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f95d9cb-2319-44b8-bf36-58afbed80602-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jrx6t\" (UID: \"5f95d9cb-2319-44b8-bf36-58afbed80602\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.913492 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3628713-9b8f-4bba-8e6e-40803fc6f21b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w7xcj\" (UID: \"c3628713-9b8f-4bba-8e6e-40803fc6f21b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.921615 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.930816 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:39 crc kubenswrapper[4958]: E1008 06:36:39.931160 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:40.431143949 +0000 UTC m=+143.560836550 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.936247 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j8g8\" (UniqueName: \"kubernetes.io/projected/79b44209-418b-49d3-8a0f-cb90d8928373-kube-api-access-9j8g8\") pod \"dns-default-wx8p8\" (UID: \"79b44209-418b-49d3-8a0f-cb90d8928373\") " pod="openshift-dns/dns-default-wx8p8" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.939172 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.952616 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lzcl\" (UniqueName: \"kubernetes.io/projected/9ecaafcc-322f-4167-8757-e6a29dcca078-kube-api-access-5lzcl\") pod \"service-ca-9c57cc56f-n4pr7\" (UID: \"9ecaafcc-322f-4167-8757-e6a29dcca078\") " pod="openshift-service-ca/service-ca-9c57cc56f-n4pr7" Oct 08 06:36:39 crc kubenswrapper[4958]: I1008 06:36:39.970218 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv5fh\" (UniqueName: \"kubernetes.io/projected/b3e7ce87-89d1-4be8-801d-88af586408e8-kube-api-access-pv5fh\") pod \"machine-config-server-5tfs7\" (UID: \"b3e7ce87-89d1-4be8-801d-88af586408e8\") " pod="openshift-machine-config-operator/machine-config-server-5tfs7" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.037220 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72xj9\" (UniqueName: \"kubernetes.io/projected/87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4-kube-api-access-72xj9\") pod \"service-ca-operator-777779d784-tbtkv\" (UID: \"87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.038622 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:40 crc kubenswrapper[4958]: E1008 06:36:40.039015 4958 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:40.539002546 +0000 UTC m=+143.668695147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.065009 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.065354 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n4pr7" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.074042 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.086217 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.092058 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.105273 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.117887 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xsfmh" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.124738 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5tfs7" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.131747 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wx8p8" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.140520 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:40 crc kubenswrapper[4958]: E1008 06:36:40.140763 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:40.640734085 +0000 UTC m=+143.770426686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.170801 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62"] Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.216561 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wjnvk"] Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.216597 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv"] Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.228933 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r"] Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.246775 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:40 crc kubenswrapper[4958]: E1008 06:36:40.247143 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-08 06:36:40.747128995 +0000 UTC m=+143.876821596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.287617 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-clxh5"] Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.353265 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:40 crc kubenswrapper[4958]: E1008 06:36:40.353406 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:40.853377529 +0000 UTC m=+143.983070130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.353784 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:40 crc kubenswrapper[4958]: E1008 06:36:40.354089 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:40.854077026 +0000 UTC m=+143.983769627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.454417 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:40 crc kubenswrapper[4958]: E1008 06:36:40.454980 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:40.954965393 +0000 UTC m=+144.084657994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.470675 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" event={"ID":"fce98ad8-5ce1-4dd7-81a0-68606ba22262","Type":"ContainerStarted","Data":"fa65a591efd7e2bd108249901c495c32d33561148b2ff9777d464f375b415c4b"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.470716 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" event={"ID":"fce98ad8-5ce1-4dd7-81a0-68606ba22262","Type":"ContainerStarted","Data":"ba792a288bc888fb247f81c3be7f5a17a7484a550d23feb622088da941f56f5e"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.470881 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.472565 4958 generic.go:334] "Generic (PLEG): container finished" podID="19fe21c7-7d44-447e-808d-3e929bbbb3a8" containerID="692897eb7c3f32edfe51fbc0c1c8bf4ae5f7b7bf437298414a501064f62e0ee4" exitCode=0 Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.472608 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" event={"ID":"19fe21c7-7d44-447e-808d-3e929bbbb3a8","Type":"ContainerDied","Data":"692897eb7c3f32edfe51fbc0c1c8bf4ae5f7b7bf437298414a501064f62e0ee4"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.474543 4958 patch_prober.go:28] interesting 
pod/marketplace-operator-79b997595-wj4ql container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.474585 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" podUID="fce98ad8-5ce1-4dd7-81a0-68606ba22262" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.476593 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" event={"ID":"95346bab-d029-433b-94ce-1de7c608d2fe","Type":"ContainerStarted","Data":"3ef5675d9b9e2bddf8bc31030016c8fa74bf755b77487037b3769a50f4498a87"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.476633 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" event={"ID":"95346bab-d029-433b-94ce-1de7c608d2fe","Type":"ContainerStarted","Data":"a936054e12c237ac84ec016277a399409ebdfa4b776eebdb7237eb4978e811ab"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.476644 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" event={"ID":"95346bab-d029-433b-94ce-1de7c608d2fe","Type":"ContainerStarted","Data":"1e35b5fc32fd5c34f055ba184e4b2098d4e63834ee8236ba19c4c0bd8db72b09"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.478928 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qk24s" 
event={"ID":"552f0de0-1991-4a6c-8a8c-f44fced1f07f","Type":"ContainerStarted","Data":"d6e6831eaeb0f1b5f0eae4a9ad03a029ca52947292d679258d9e6432c8fecbed"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.479649 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lhsmn" event={"ID":"eb229a90-087c-4c24-a82c-863a6b64b5b3","Type":"ContainerStarted","Data":"ed60a0a7ae1d9aae17f1d0aac240c6124092ca25ef3db19d67276882b2741691"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.488956 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g22b6" event={"ID":"f18c448b-c665-482d-aa20-387343546e7c","Type":"ContainerStarted","Data":"3170fbcfeb95569f5e2e6873415eecc12920474049219b56d2c3395240ffb63f"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.488994 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g22b6" event={"ID":"f18c448b-c665-482d-aa20-387343546e7c","Type":"ContainerStarted","Data":"07e5d2eef861db105dd790330482ae91ca29dd9ba206b3aa1293d188fc28eac3"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.490356 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cqh7v" event={"ID":"b49e7739-a4d8-4b6d-9e63-a444acd9d1d9","Type":"ContainerStarted","Data":"43c3a329bf51b3324a4f404e07339c569fafce7f50c5dd1c53ea0258ccdbb116"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.495102 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" event={"ID":"843bf41d-d355-41ba-8073-4a53421c09a6","Type":"ContainerStarted","Data":"aee973088e92a550e399af2bd7822439ae7baf676ee00e4e19a347d219b0ebf8"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.495152 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" event={"ID":"843bf41d-d355-41ba-8073-4a53421c09a6","Type":"ContainerStarted","Data":"aa8580d745fc86c6a212972e3aa12a7f20ac876046477038b35d49313e850b3c"} Oct 08 06:36:40 crc kubenswrapper[4958]: W1008 06:36:40.511784 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c04fc7e_cc6d_4d08_99cc_752b31cb5110.slice/crio-bccb6aeee5146078a4578877f0bf254d47a8b024ff4265ffa76fcfd03508af9f WatchSource:0}: Error finding container bccb6aeee5146078a4578877f0bf254d47a8b024ff4265ffa76fcfd03508af9f: Status 404 returned error can't find the container with id bccb6aeee5146078a4578877f0bf254d47a8b024ff4265ffa76fcfd03508af9f Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.512490 4958 generic.go:334] "Generic (PLEG): container finished" podID="25a337ec-7967-4941-b3b8-d78ef7e1eab1" containerID="aab5493647219657e9cc647d6e1673b5170d334f049b76919efa7fb2baf2d3d1" exitCode=0 Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.513023 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" event={"ID":"25a337ec-7967-4941-b3b8-d78ef7e1eab1","Type":"ContainerDied","Data":"aab5493647219657e9cc647d6e1673b5170d334f049b76919efa7fb2baf2d3d1"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.551189 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" event={"ID":"e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02","Type":"ContainerStarted","Data":"6ab13d65b05847304c8b77889af2420fbbd2d64550611ef8c3494a43284a5968"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.551584 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" 
event={"ID":"e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02","Type":"ContainerStarted","Data":"045e5d5d88785ebac12c62c809d545d7922f3192882c3a04be551e452ba58ad0"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.552449 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.555814 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.557154 4958 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-cw96f container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:5443/healthz\": dial tcp 10.217.0.19:5443: connect: connection refused" start-of-body= Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.557204 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" podUID="e76ccf7f-2deb-4e19-9fdc-1f0169c2aa02" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.19:5443/healthz\": dial tcp 10.217.0.19:5443: connect: connection refused" Oct 08 06:36:40 crc kubenswrapper[4958]: E1008 06:36:40.558293 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:41.058279554 +0000 UTC m=+144.187972145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.559458 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc" event={"ID":"a0dfe85b-af34-44a9-b7b2-a56cb6f192a2","Type":"ContainerStarted","Data":"3e45ff5dd6564428c5a08dc7a676a1c377a141d741ee3a149ddb8144f797e062"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.559484 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc" event={"ID":"a0dfe85b-af34-44a9-b7b2-a56cb6f192a2","Type":"ContainerStarted","Data":"373cd276e4a1b50c3095c5d945fc02e9a34d2389980ac9ccefaa5036ceda8c96"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.592811 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fkwgq" event={"ID":"61823234-969c-46ce-9c8c-7eb8f41867dc","Type":"ContainerStarted","Data":"d7204fc51a296cda290e891f9b68d3557849dae676a1ba6e0331da2b082e407a"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.640500 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" event={"ID":"4869cf7c-e6e5-4ecf-a14d-ca6317cebc7b","Type":"ContainerStarted","Data":"4a4535af188601aa9ebdb9f091909ff30fb826f4e858bfe09633edbe479db01c"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.643658 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" event={"ID":"ed1e22c8-bc14-4e16-b349-442d188ac881","Type":"ContainerStarted","Data":"68c0d2b08f61762b0c9f135288c6a4d3119cfe963b64ecf7e24a231ea1f50e83"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.651041 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5tfs7" event={"ID":"b3e7ce87-89d1-4be8-801d-88af586408e8","Type":"ContainerStarted","Data":"09a2e3af7099496a2503d0eb374745dfbec397869aaa2bc7e2e3cc0778599313"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.653431 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" event={"ID":"f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7","Type":"ContainerStarted","Data":"b35a65e23e80f7c731d9f04edab014f8398fe06b3a3b58cb61a811facbad84cf"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.662649 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:40 crc kubenswrapper[4958]: E1008 06:36:40.664074 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:41.16405585 +0000 UTC m=+144.293748451 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.731001 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" event={"ID":"9fc2a7f0-7ceb-465a-8511-a7e3e665c563","Type":"ContainerStarted","Data":"57bf7820ac47c013c5a462c7b00046de609fc5c53728e3c80f653a5d49294226"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.735772 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-d22g5" event={"ID":"0381af03-9a83-4a43-ab4f-79cbd5d3351f","Type":"ContainerStarted","Data":"ca93a9fa8ef93372276eff34f07de646bce096a9f1c214880d2665d794b7dee3"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.735798 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-d22g5" event={"ID":"0381af03-9a83-4a43-ab4f-79cbd5d3351f","Type":"ContainerStarted","Data":"f6aedad1d301265d173147ad7ab9b8552e9b6d939f16c6a7b181fb59d376326f"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.736900 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-d22g5" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.741648 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wjnvk" event={"ID":"5fd0f38b-dcb2-4039-b4b7-bb02593b1aec","Type":"ContainerStarted","Data":"2fc5e548f7f275088a9a3f8221827425ea6bae62831f264f0a0f01c624f8a0cc"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 
06:36:40.747044 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" event={"ID":"98aabd8a-9cfc-4977-b42c-990fc3894455","Type":"ContainerStarted","Data":"254ef47c674a54f439c7ebf5ee3aeb14f855f24594bd93abfbaa2ed6708e6fc8"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.759071 4958 patch_prober.go:28] interesting pod/downloads-7954f5f757-d22g5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.759117 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-d22g5" podUID="0381af03-9a83-4a43-ab4f-79cbd5d3351f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.768652 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:40 crc kubenswrapper[4958]: E1008 06:36:40.768975 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:41.268962942 +0000 UTC m=+144.398655533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.776607 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5kqn9"] Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.783200 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm"] Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.789348 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c" event={"ID":"d6d60dee-41d3-4b20-99ea-da1db49f2548","Type":"ContainerStarted","Data":"0fe749492fa0c631c98ab39343e21fb54ebb2f18ef31d3b390797c1d04d5976d"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.789635 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c" event={"ID":"d6d60dee-41d3-4b20-99ea-da1db49f2548","Type":"ContainerStarted","Data":"4108d7b948b20b71a66e099bc8daeb896d0ebb6b4f3db08e7146c97a1686a329"} Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.795938 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-982gr"] Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.814510 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 
06:36:40.870475 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:40 crc kubenswrapper[4958]: E1008 06:36:40.871510 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:41.371495552 +0000 UTC m=+144.501188153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:40 crc kubenswrapper[4958]: I1008 06:36:40.972097 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:40 crc kubenswrapper[4958]: E1008 06:36:40.972740 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-08 06:36:41.472727662 +0000 UTC m=+144.602420263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.073397 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:41 crc kubenswrapper[4958]: E1008 06:36:41.073745 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:41.573726033 +0000 UTC m=+144.703418634 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.111623 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vq2s" podStartSLOduration=124.11159822 podStartE2EDuration="2m4.11159822s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:41.10489392 +0000 UTC m=+144.234586531" watchObservedRunningTime="2025-10-08 06:36:41.11159822 +0000 UTC m=+144.241290821" Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.112342 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-86rhc"] Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.120098 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7mwpk"] Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.134598 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4"] Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.164054 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9dwhm"] Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.174809 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:41 crc kubenswrapper[4958]: E1008 06:36:41.175212 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:41.675199302 +0000 UTC m=+144.804891903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.275919 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:41 crc kubenswrapper[4958]: E1008 06:36:41.276050 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:41.776027537 +0000 UTC m=+144.905720138 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.276085 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:41 crc kubenswrapper[4958]: E1008 06:36:41.276400 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:41.776391961 +0000 UTC m=+144.906084562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.361342 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pggjp" podStartSLOduration=125.36132329 podStartE2EDuration="2m5.36132329s" podCreationTimestamp="2025-10-08 06:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:41.358708688 +0000 UTC m=+144.488401289" watchObservedRunningTime="2025-10-08 06:36:41.36132329 +0000 UTC m=+144.491015891" Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.377223 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:41 crc kubenswrapper[4958]: E1008 06:36:41.377541 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:41.877522287 +0000 UTC m=+145.007214888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.478278 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:41 crc kubenswrapper[4958]: E1008 06:36:41.478741 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:41.978728766 +0000 UTC m=+145.108421367 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:41 crc kubenswrapper[4958]: W1008 06:36:41.544212 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod582542ee_c2ad_4058_830e_dda091d4507a.slice/crio-1f859a482534a9d35c567ba7a11bfcb0d8ee46408657891ca61dc9131e1cf701 WatchSource:0}: Error finding container 1f859a482534a9d35c567ba7a11bfcb0d8ee46408657891ca61dc9131e1cf701: Status 404 returned error can't find the container with id 1f859a482534a9d35c567ba7a11bfcb0d8ee46408657891ca61dc9131e1cf701 Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.561072 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" podStartSLOduration=124.561055314 podStartE2EDuration="2m4.561055314s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:41.559434251 +0000 UTC m=+144.689126852" watchObservedRunningTime="2025-10-08 06:36:41.561055314 +0000 UTC m=+144.690747915" Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.579827 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:41 
crc kubenswrapper[4958]: E1008 06:36:41.580192 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:42.080177544 +0000 UTC m=+145.209870145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.682737 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:41 crc kubenswrapper[4958]: E1008 06:36:41.683058 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:42.183044698 +0000 UTC m=+145.312737299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.750230 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g22b6" podStartSLOduration=125.750194328 podStartE2EDuration="2m5.750194328s" podCreationTimestamp="2025-10-08 06:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:41.740554025 +0000 UTC m=+144.870246626" watchObservedRunningTime="2025-10-08 06:36:41.750194328 +0000 UTC m=+144.879886949" Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.790586 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.814602 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qsfbc" podStartSLOduration=124.814579221 podStartE2EDuration="2m4.814579221s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:41.773391716 +0000 UTC 
m=+144.903084317" watchObservedRunningTime="2025-10-08 06:36:41.814579221 +0000 UTC m=+144.944271822" Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.846815 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" podStartSLOduration=124.846793829 podStartE2EDuration="2m4.846793829s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:41.84658276 +0000 UTC m=+144.976275351" watchObservedRunningTime="2025-10-08 06:36:41.846793829 +0000 UTC m=+144.976486430" Oct 08 06:36:41 crc kubenswrapper[4958]: E1008 06:36:41.848348 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:42.318096967 +0000 UTC m=+145.447789568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.888039 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9dwhm" event={"ID":"582542ee-c2ad-4058-830e-dda091d4507a","Type":"ContainerStarted","Data":"1f859a482534a9d35c567ba7a11bfcb0d8ee46408657891ca61dc9131e1cf701"} Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.892315 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" event={"ID":"e38984df-4074-4d74-9a5e-fada1d1d8b1f","Type":"ContainerStarted","Data":"be0f5719140d27190d99a3b719322723fdd3808d349de921b88354b22514ab03"} Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.903300 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww"] Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.922055 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:41 crc kubenswrapper[4958]: E1008 06:36:41.922346 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:42.422335684 +0000 UTC m=+145.552028285 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.933451 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n4pr7"] Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.959013 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" event={"ID":"843bf41d-d355-41ba-8073-4a53421c09a6","Type":"ContainerStarted","Data":"4768328158f33b11fee8a4a7513bf344463140aec0b4b26ab01fea852e29bee7"} Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.961709 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj"] Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.971322 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4"] Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.971765 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lhsmn" event={"ID":"eb229a90-087c-4c24-a82c-863a6b64b5b3","Type":"ContainerStarted","Data":"4c37de7925089a26359a76b472df5f6e90a54eaba034390e15fc818f37a8fb13"} Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.971910 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-c9g9c" podStartSLOduration=125.971881882 podStartE2EDuration="2m5.971881882s" podCreationTimestamp="2025-10-08 06:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:41.963108733 +0000 UTC m=+145.092801334" watchObservedRunningTime="2025-10-08 06:36:41.971881882 +0000 UTC m=+145.101574483" Oct 08 06:36:41 crc kubenswrapper[4958]: I1008 06:36:41.984006 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t"] Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.018647 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cqh7v" event={"ID":"b49e7739-a4d8-4b6d-9e63-a444acd9d1d9","Type":"ContainerStarted","Data":"7632f66482eed7370a940c1389f5bd3aca5dbf3a82547c63e378b7c614d30824"} Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.020117 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-86rhc" event={"ID":"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd","Type":"ContainerStarted","Data":"ff23ea4fb74c0059ef3714beb0d29d64d1a939ab6724b15014484c5b137f33ca"} Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.020749 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl"] Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.021652 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" event={"ID":"6c04fc7e-cc6d-4d08-99cc-752b31cb5110","Type":"ContainerStarted","Data":"bccb6aeee5146078a4578877f0bf254d47a8b024ff4265ffa76fcfd03508af9f"} Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.022813 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:42 crc kubenswrapper[4958]: E1008 06:36:42.023880 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:42.523863265 +0000 UTC m=+145.653555866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.045816 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" event={"ID":"40a06246-a0bb-4a09-b184-9b1bca1610b2","Type":"ContainerStarted","Data":"c49b8405e445e06d9c82e9dfbdc96926cdcb9f76e79d8cc64781c8c3febebbfa"} Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.072361 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4" event={"ID":"7e88be94-7d66-4265-88ec-89278ebaabb7","Type":"ContainerStarted","Data":"7de9df4f04a43206142d6db70b484d275e12807e15e395fbf6d91cc46b4e7a69"} Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.081832 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-982gr" event={"ID":"c836c0b9-6574-4e15-b137-801770985326","Type":"ContainerStarted","Data":"03b59474792b386ebb43ed900190a45acc316ab74ee8bfb5ee385fc77c6664dd"} Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.091299 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-p6pfq" podStartSLOduration=126.091280796 podStartE2EDuration="2m6.091280796s" podCreationTimestamp="2025-10-08 06:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:42.033183706 +0000 UTC m=+145.162876307" watchObservedRunningTime="2025-10-08 06:36:42.091280796 +0000 UTC m=+145.220973397" Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.116476 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-d22g5" podStartSLOduration=125.116457251 podStartE2EDuration="2m5.116457251s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:42.094600954 +0000 UTC m=+145.224293555" watchObservedRunningTime="2025-10-08 06:36:42.116457251 +0000 UTC m=+145.246149852" Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.117774 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fkwgq" event={"ID":"61823234-969c-46ce-9c8c-7eb8f41867dc","Type":"ContainerStarted","Data":"cdc87b4568e7e67517d364db5bec7b17a60b6472e6e91208aa5db6434d12a990"} Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.124766 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:42 crc kubenswrapper[4958]: E1008 06:36:42.125159 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:42.625145547 +0000 UTC m=+145.754838148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.183711 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.198501 4958 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mhg9b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" start-of-body= Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.198566 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" podUID="ed1e22c8-bc14-4e16-b349-442d188ac881" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection 
refused" Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.204566 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" podStartSLOduration=125.204550032 podStartE2EDuration="2m5.204550032s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:42.203283093 +0000 UTC m=+145.332975694" watchObservedRunningTime="2025-10-08 06:36:42.204550032 +0000 UTC m=+145.334242633" Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.214808 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xsfmh"] Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.218861 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv"] Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.222969 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t"] Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.226833 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:42 crc kubenswrapper[4958]: E1008 06:36:42.227730 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:42.727714739 +0000 UTC m=+145.857407340 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.246705 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5tfs7" event={"ID":"b3e7ce87-89d1-4be8-801d-88af586408e8","Type":"ContainerStarted","Data":"22cfe233a139a20a502871110d9ba2e9acc5ed18d1a73190f231ca7677caf883"} Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.267791 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm" event={"ID":"2b5d6bdd-539f-4a63-b52c-ae0cfa92703f","Type":"ContainerStarted","Data":"b6e7004525bd9a8be58fc3846d679e03d06b8e28d6a497ba04dfc28b18922543"} Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.270035 4958 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wj4ql container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.270083 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" podUID="fce98ad8-5ce1-4dd7-81a0-68606ba22262" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.270099 4958 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-d22g5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.270156 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-d22g5" podUID="0381af03-9a83-4a43-ab4f-79cbd5d3351f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.282391 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cw96f" Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.284272 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" podStartSLOduration=126.284245668 podStartE2EDuration="2m6.284245668s" podCreationTimestamp="2025-10-08 06:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:42.258620146 +0000 UTC m=+145.388312747" watchObservedRunningTime="2025-10-08 06:36:42.284245668 +0000 UTC m=+145.413938269" Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.314105 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wx8p8"] Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.315169 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-fkwgq" podStartSLOduration=125.315151635 podStartE2EDuration="2m5.315151635s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-08 06:36:42.312373277 +0000 UTC m=+145.442065878" watchObservedRunningTime="2025-10-08 06:36:42.315151635 +0000 UTC m=+145.444844226" Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.329095 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:42 crc kubenswrapper[4958]: E1008 06:36:42.333214 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:42.833198504 +0000 UTC m=+145.962891105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.364295 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lwm8j" podStartSLOduration=125.364275757 podStartE2EDuration="2m5.364275757s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:42.361470328 +0000 UTC m=+145.491162929" watchObservedRunningTime="2025-10-08 06:36:42.364275757 
+0000 UTC m=+145.493968358" Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.395878 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5tfs7" podStartSLOduration=6.39585705 podStartE2EDuration="6.39585705s" podCreationTimestamp="2025-10-08 06:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:42.394280169 +0000 UTC m=+145.523972780" watchObservedRunningTime="2025-10-08 06:36:42.39585705 +0000 UTC m=+145.525549651" Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.433329 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:42 crc kubenswrapper[4958]: E1008 06:36:42.434139 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:42.934117121 +0000 UTC m=+146.063809722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.434280 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:42 crc kubenswrapper[4958]: E1008 06:36:42.434691 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:42.934678733 +0000 UTC m=+146.064371334 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:42 crc kubenswrapper[4958]: W1008 06:36:42.463632 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79b44209_418b_49d3_8a0f_cb90d8928373.slice/crio-b987dedacac3210e6240f89a5ba9e4bb269436539a4b32acdc984b3251a18196 WatchSource:0}: Error finding container b987dedacac3210e6240f89a5ba9e4bb269436539a4b32acdc984b3251a18196: Status 404 returned error can't find the container with id b987dedacac3210e6240f89a5ba9e4bb269436539a4b32acdc984b3251a18196 Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.536703 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:42 crc kubenswrapper[4958]: E1008 06:36:42.537104 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:43.037089529 +0000 UTC m=+146.166782130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.639674 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:42 crc kubenswrapper[4958]: E1008 06:36:42.640284 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:43.140249182 +0000 UTC m=+146.269941773 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.741183 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:42 crc kubenswrapper[4958]: E1008 06:36:42.741501 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:43.241472902 +0000 UTC m=+146.371165503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.742930 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.764365 4958 patch_prober.go:28] interesting pod/router-default-5444994796-fkwgq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 06:36:42 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Oct 08 06:36:42 crc kubenswrapper[4958]: [+]process-running ok Oct 08 06:36:42 crc kubenswrapper[4958]: healthz check failed Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.764417 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fkwgq" podUID="61823234-969c-46ce-9c8c-7eb8f41867dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.842503 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:42 crc kubenswrapper[4958]: E1008 06:36:42.843155 4958 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:43.343143759 +0000 UTC m=+146.472836360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:42 crc kubenswrapper[4958]: I1008 06:36:42.943684 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:42 crc kubenswrapper[4958]: E1008 06:36:42.943815 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:43.443795717 +0000 UTC m=+146.573488318 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.052201 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:43 crc kubenswrapper[4958]: E1008 06:36:43.052685 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:43.552668823 +0000 UTC m=+146.682361424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.153272 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:43 crc kubenswrapper[4958]: E1008 06:36:43.153591 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:43.65357476 +0000 UTC m=+146.783267351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.254708 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:43 crc kubenswrapper[4958]: E1008 06:36:43.255361 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:43.755349381 +0000 UTC m=+146.885041982 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.279715 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" event={"ID":"6c04fc7e-cc6d-4d08-99cc-752b31cb5110","Type":"ContainerStarted","Data":"a6f116cc9841e9e93cc2b382623a04fceb224deb969a6ebdac8f1cbf5451aa99"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.279768 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" event={"ID":"6c04fc7e-cc6d-4d08-99cc-752b31cb5110","Type":"ContainerStarted","Data":"81fa34d4cdac451c80b443ef51edf6d12d1bd87a5afad6ad562aad1a421bcd65"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.284616 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv" event={"ID":"87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4","Type":"ContainerStarted","Data":"9f21830aae9e9ceeee774ed9cc0384c548cf900bd06e0a6b077c21653d19f1de"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.284653 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv" event={"ID":"87beafeb-41c2-4d7f-93ab-0b12c4f2e5c4","Type":"ContainerStarted","Data":"83083310ef2ecc855c96faf137a443742cfb294bf72fa950287e6ba9b3c88dbd"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.288260 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lhsmn" event={"ID":"eb229a90-087c-4c24-a82c-863a6b64b5b3","Type":"ContainerStarted","Data":"489ea6c2c2b45d2a46b555f8ef1004d5246592e06ac7f16184905bc80721670f"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.290292 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl" event={"ID":"e80b0b54-e40c-405a-95be-814ced6e11f4","Type":"ContainerStarted","Data":"53891f7a82e5bea49dd0c47738891e76a8672f23f0f321136acf22672cca94a6"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.290312 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl" event={"ID":"e80b0b54-e40c-405a-95be-814ced6e11f4","Type":"ContainerStarted","Data":"1f414b167031ce02f6181c1583ac1eaed9a4377034667d938e8388478d5b0023"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.295552 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm" event={"ID":"2b5d6bdd-539f-4a63-b52c-ae0cfa92703f","Type":"ContainerStarted","Data":"71751ae0d6503a9e29f22592d307209d99a30d07c3b72a11b2c8cc2a6fda3739"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.300136 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-clxh5" podStartSLOduration=126.300124555 podStartE2EDuration="2m6.300124555s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.297788784 +0000 UTC m=+146.427481395" watchObservedRunningTime="2025-10-08 06:36:43.300124555 +0000 UTC m=+146.429817156" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 
06:36:43.305782 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4" event={"ID":"7e88be94-7d66-4265-88ec-89278ebaabb7","Type":"ContainerStarted","Data":"a77aae774467e36b296947487a98ebde985fdf994565e0645bf53c0d607db107"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.305809 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4" event={"ID":"7e88be94-7d66-4265-88ec-89278ebaabb7","Type":"ContainerStarted","Data":"99cd8d98e3c853440aeec46ae4ec96f371d87cf6c664779c1b831bec82455140"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.311203 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xsfmh" event={"ID":"6cfa25cf-ba2d-447c-ba71-685d975786df","Type":"ContainerStarted","Data":"e1a2f80337f6a96615c40f5a55a0691a84f871eb99130086531f21a2848e3be6"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.311248 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xsfmh" event={"ID":"6cfa25cf-ba2d-447c-ba71-685d975786df","Type":"ContainerStarted","Data":"e6a1b9e21133dcc1350916d4ae6b47649293b4bd5b3a683ba6e805f1cfba9410"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.332813 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-86rhc" event={"ID":"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd","Type":"ContainerStarted","Data":"86a9dec21d1ed867d24136ea4b96c1ba1138bb39f3736c54bc8b0df6ac3bc126"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.342400 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbtkv" podStartSLOduration=126.342381401 podStartE2EDuration="2m6.342381401s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.320186012 +0000 UTC m=+146.449878613" watchObservedRunningTime="2025-10-08 06:36:43.342381401 +0000 UTC m=+146.472074002" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.342496 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-47jrl" podStartSLOduration=126.342492715 podStartE2EDuration="2m6.342492715s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.340407585 +0000 UTC m=+146.470100176" watchObservedRunningTime="2025-10-08 06:36:43.342492715 +0000 UTC m=+146.472185316" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.349354 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" event={"ID":"9fc2a7f0-7ceb-465a-8511-a7e3e665c563","Type":"ContainerStarted","Data":"20a32011ef6c114118398b8884d97deef0f4a6fb2fcae20baca0a8de6bdacac9"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.356155 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:43 crc kubenswrapper[4958]: E1008 06:36:43.356507 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-08 06:36:43.856493287 +0000 UTC m=+146.986185888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.357890 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lhsmn" podStartSLOduration=126.357878721 podStartE2EDuration="2m6.357878721s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.357563269 +0000 UTC m=+146.487255870" watchObservedRunningTime="2025-10-08 06:36:43.357878721 +0000 UTC m=+146.487571322" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.364909 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wx8p8" event={"ID":"79b44209-418b-49d3-8a0f-cb90d8928373","Type":"ContainerStarted","Data":"29a9abfd120bc73352fa058ce69696bfa1e3cba36d4f915cc51ae4cc746b80d3"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.364967 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wx8p8" event={"ID":"79b44209-418b-49d3-8a0f-cb90d8928373","Type":"ContainerStarted","Data":"b987dedacac3210e6240f89a5ba9e4bb269436539a4b32acdc984b3251a18196"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.374888 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" 
event={"ID":"40a06246-a0bb-4a09-b184-9b1bca1610b2","Type":"ContainerStarted","Data":"bb91258d6a2c8996c5f00e4fd572d2beecd437568a23cd0a88aed902c0317bdb"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.389392 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktklm" podStartSLOduration=126.389376041 podStartE2EDuration="2m6.389376041s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.389007676 +0000 UTC m=+146.518700287" watchObservedRunningTime="2025-10-08 06:36:43.389376041 +0000 UTC m=+146.519068642" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.390113 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" event={"ID":"eae96488-906d-423e-ad79-b4448ef8ad58","Type":"ContainerStarted","Data":"662146ad9288e7371a2f192d19cba45ac7d7f01feeb2cdbd28cc45f1ab6ae283"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.390155 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" event={"ID":"eae96488-906d-423e-ad79-b4448ef8ad58","Type":"ContainerStarted","Data":"6164f9e880ecc163e7b010d71c2c9610992830463584f88730ec333ba035bea1"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.390660 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.392210 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" 
event={"ID":"f830b4b0-eaaa-4de4-9c5d-9b6bd24831d7","Type":"ContainerStarted","Data":"6d224533b4cd53b08f29127b6392d4023b57d4e63f3d857a48c851f02bc5a64d"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.392367 4958 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7sb7t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.392593 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" podUID="eae96488-906d-423e-ad79-b4448ef8ad58" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.392818 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.410253 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.416899 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kjkv4" podStartSLOduration=126.416884436 podStartE2EDuration="2m6.416884436s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.415073736 +0000 UTC m=+146.544766337" watchObservedRunningTime="2025-10-08 06:36:43.416884436 +0000 UTC m=+146.546577037" Oct 08 06:36:43 crc 
kubenswrapper[4958]: I1008 06:36:43.419504 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9dwhm" event={"ID":"582542ee-c2ad-4058-830e-dda091d4507a","Type":"ContainerStarted","Data":"90eb63e8748bfa9abf067c1acd5f752788dd6ab10850ef84fab3cb678b33507f"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.444077 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xsfmh" podStartSLOduration=7.444057288 podStartE2EDuration="7.444057288s" podCreationTimestamp="2025-10-08 06:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.439344276 +0000 UTC m=+146.569036877" watchObservedRunningTime="2025-10-08 06:36:43.444057288 +0000 UTC m=+146.573749889" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.461238 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" event={"ID":"ed1e22c8-bc14-4e16-b349-442d188ac881","Type":"ContainerStarted","Data":"44b74263f72f5bd2059d37392f5c4f9932c0885bdd45ad7d9ede6ee970c22218"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.465736 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:43 crc kubenswrapper[4958]: E1008 06:36:43.470157 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-08 06:36:43.970142948 +0000 UTC m=+147.099835549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.473294 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tv62" podStartSLOduration=126.473265119 podStartE2EDuration="2m6.473265119s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.471588984 +0000 UTC m=+146.601281595" watchObservedRunningTime="2025-10-08 06:36:43.473265119 +0000 UTC m=+146.602965230" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.497089 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-86rhc" podStartSLOduration=126.497071511 podStartE2EDuration="2m6.497071511s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.491391571 +0000 UTC m=+146.621084172" watchObservedRunningTime="2025-10-08 06:36:43.497071511 +0000 UTC m=+146.626764132" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.508378 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wjnvk" 
event={"ID":"5fd0f38b-dcb2-4039-b4b7-bb02593b1aec","Type":"ContainerStarted","Data":"2e5cc6c8d690d086f2589a89002b7e418ccd7858e2ae20683a6e0462610b6901"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.508589 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wjnvk" event={"ID":"5fd0f38b-dcb2-4039-b4b7-bb02593b1aec","Type":"ContainerStarted","Data":"183ab588f482f49110cccb4653cdefb87c35439d16ed0eca6e58b196c9a933b4"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.565263 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cqh7v" event={"ID":"b49e7739-a4d8-4b6d-9e63-a444acd9d1d9","Type":"ContainerStarted","Data":"be8e2f5da81f391a512b654b2463a8c9e5a88218d57546499b89ed0cb37d7552"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.567079 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:43 crc kubenswrapper[4958]: E1008 06:36:43.567245 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:44.067224488 +0000 UTC m=+147.196917089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.568726 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:43 crc kubenswrapper[4958]: E1008 06:36:43.571135 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:44.071111628 +0000 UTC m=+147.200804229 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.598747 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-swm4r" podStartSLOduration=126.598701846 podStartE2EDuration="2m6.598701846s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.552983956 +0000 UTC m=+146.682676557" watchObservedRunningTime="2025-10-08 06:36:43.598701846 +0000 UTC m=+146.728394447" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.630636 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9dwhm" podStartSLOduration=126.630610852 podStartE2EDuration="2m6.630610852s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.591778988 +0000 UTC m=+146.721471579" watchObservedRunningTime="2025-10-08 06:36:43.630610852 +0000 UTC m=+146.760303453" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.649746 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" 
event={"ID":"86b11ac2-2a27-4859-b9b2-595d2ca7556d","Type":"ContainerStarted","Data":"a673b4e562496da508b65c252464eeb031c8e8a1697e452cbd0ffef29062d9cd"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.649795 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" event={"ID":"86b11ac2-2a27-4859-b9b2-595d2ca7556d","Type":"ContainerStarted","Data":"428dec76ee01f3750c082426339ff34ad40ef5210ccb966bff494cf73604969a"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.653569 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" event={"ID":"19fe21c7-7d44-447e-808d-3e929bbbb3a8","Type":"ContainerStarted","Data":"08808d6458def1d712807dc8da06d2cdac2e3440d0b6bb1bb3f2f54f2dd48504"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.668019 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" event={"ID":"e38984df-4074-4d74-9a5e-fada1d1d8b1f","Type":"ContainerStarted","Data":"a6eb7b1721449a0e84999a1dd8df9ad07ee189c394327573aabee20a3d27420b"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.669886 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" podStartSLOduration=126.669876353 podStartE2EDuration="2m6.669876353s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.631766147 +0000 UTC m=+146.761458748" watchObservedRunningTime="2025-10-08 06:36:43.669876353 +0000 UTC m=+146.799568944" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.670219 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" 
podStartSLOduration=126.670214456 podStartE2EDuration="2m6.670214456s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.669484437 +0000 UTC m=+146.799177038" watchObservedRunningTime="2025-10-08 06:36:43.670214456 +0000 UTC m=+146.799907057" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.670390 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:43 crc kubenswrapper[4958]: E1008 06:36:43.672197 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:44.172181382 +0000 UTC m=+147.301873983 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.683796 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4" event={"ID":"835c2c67-ae1f-49bb-9d70-09bf1ac25885","Type":"ContainerStarted","Data":"361f4ba947cdca51cb0ff313fa781484e2e9d4b3a20c5a9bc03bd73370fc609d"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.683849 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4" event={"ID":"835c2c67-ae1f-49bb-9d70-09bf1ac25885","Type":"ContainerStarted","Data":"49f5e68b3b06df8011ad109a9a756d8b9750c7c8566982a4e13bcf7e60af4d94"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.683859 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4" event={"ID":"835c2c67-ae1f-49bb-9d70-09bf1ac25885","Type":"ContainerStarted","Data":"f5534a01329c2680b39e2b1944db93a23c34860109b271178270b99aeb873826"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.686635 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.693560 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n4pr7" 
event={"ID":"9ecaafcc-322f-4167-8757-e6a29dcca078","Type":"ContainerStarted","Data":"178acd1413c44f22a2e83e3b950a742b81d66876c26897af537fa9128fd54b26"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.693595 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n4pr7" event={"ID":"9ecaafcc-322f-4167-8757-e6a29dcca078","Type":"ContainerStarted","Data":"59317faec0ab3ac84d9f1816a2e853b2e24d07861a191eb5fcb04aaed43d81dc"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.722594 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-982gr" event={"ID":"c836c0b9-6574-4e15-b137-801770985326","Type":"ContainerStarted","Data":"42671e70ff55d6f064038fb83dff2ebd2fc6e67d1eda0f154e7c6f3074e6d01f"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.723540 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.725844 4958 patch_prober.go:28] interesting pod/console-operator-58897d9998-982gr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.725918 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-982gr" podUID="c836c0b9-6574-4e15-b137-801770985326" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.744936 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" 
event={"ID":"25a337ec-7967-4941-b3b8-d78ef7e1eab1","Type":"ContainerStarted","Data":"441141628d6fc6016647f1afc61a604a8f132743819def9a30c717614a2edd01"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.745603 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.755060 4958 patch_prober.go:28] interesting pod/router-default-5444994796-fkwgq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 06:36:43 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Oct 08 06:36:43 crc kubenswrapper[4958]: [+]process-running ok Oct 08 06:36:43 crc kubenswrapper[4958]: healthz check failed Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.755170 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fkwgq" podUID="61823234-969c-46ce-9c8c-7eb8f41867dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.755076 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" event={"ID":"98aabd8a-9cfc-4977-b42c-990fc3894455","Type":"ContainerStarted","Data":"e86dc2a95e1f7bdb22695d6213c2e58baadeec9fe03a82402d5dd2ff15117752"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.756469 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.762910 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-cqh7v" podStartSLOduration=126.762889714 
podStartE2EDuration="2m6.762889714s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.718419342 +0000 UTC m=+146.848111943" watchObservedRunningTime="2025-10-08 06:36:43.762889714 +0000 UTC m=+146.892582315" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.763690 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-wjnvk" podStartSLOduration=126.763685385 podStartE2EDuration="2m6.763685385s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.762890174 +0000 UTC m=+146.892582775" watchObservedRunningTime="2025-10-08 06:36:43.763685385 +0000 UTC m=+146.893377986" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.769834 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.776145 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:43 crc kubenswrapper[4958]: E1008 06:36:43.778816 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:44.27880223 +0000 UTC m=+147.408494831 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.784209 4958 generic.go:334] "Generic (PLEG): container finished" podID="552f0de0-1991-4a6c-8a8c-f44fced1f07f" containerID="b85c683a0cdffd0586aa487a392b1697c6d75b488df26dc2a3663f1400d4ad8f" exitCode=0 Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.784350 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qk24s" event={"ID":"552f0de0-1991-4a6c-8a8c-f44fced1f07f","Type":"ContainerDied","Data":"b85c683a0cdffd0586aa487a392b1697c6d75b488df26dc2a3663f1400d4ad8f"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.803409 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj" event={"ID":"c3628713-9b8f-4bba-8e6e-40803fc6f21b","Type":"ContainerStarted","Data":"6970f7f74e5cc3e09d2be70f4b13504467894e2c4de489c071803bfff842d0c3"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.803462 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj" event={"ID":"c3628713-9b8f-4bba-8e6e-40803fc6f21b","Type":"ContainerStarted","Data":"8bf25dec687cde8022e4f71d12276928a87bb96f45b022b205057cd3ac9f2a1c"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.808583 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t" 
event={"ID":"5f95d9cb-2319-44b8-bf36-58afbed80602","Type":"ContainerStarted","Data":"e74853128bb8e6e9f6e76ed54ae0aa4506a984cb18f9b7c0ddcf5a9220ef388a"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.808620 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t" event={"ID":"5f95d9cb-2319-44b8-bf36-58afbed80602","Type":"ContainerStarted","Data":"4db7ae583e0a237403ad480e9497eff83ef63b4113000eb09ca6ebd47a80cef6"} Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.837179 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" podStartSLOduration=126.83716043 podStartE2EDuration="2m6.83716043s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.835512406 +0000 UTC m=+146.965205007" watchObservedRunningTime="2025-10-08 06:36:43.83716043 +0000 UTC m=+146.966853021" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.852572 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.853138 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.860567 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8scdv" podStartSLOduration=126.860547096 podStartE2EDuration="2m6.860547096s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.859137401 +0000 UTC m=+146.988830002" 
watchObservedRunningTime="2025-10-08 06:36:43.860547096 +0000 UTC m=+146.990239687" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.877462 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:43 crc kubenswrapper[4958]: E1008 06:36:43.880429 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:44.380404425 +0000 UTC m=+147.510097026 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.892834 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-7mwpk" podStartSLOduration=126.892812075 podStartE2EDuration="2m6.892812075s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.892607857 +0000 UTC m=+147.022300448" watchObservedRunningTime="2025-10-08 06:36:43.892812075 +0000 UTC m=+147.022504676" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.951164 4958 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-982gr" podStartSLOduration=126.951144444 podStartE2EDuration="2m6.951144444s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.950447917 +0000 UTC m=+147.080140518" watchObservedRunningTime="2025-10-08 06:36:43.951144444 +0000 UTC m=+147.080837045" Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.980530 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:43 crc kubenswrapper[4958]: E1008 06:36:43.984164 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:44.484145312 +0000 UTC m=+147.613837913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:43 crc kubenswrapper[4958]: I1008 06:36:43.986019 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" podStartSLOduration=126.986004654 podStartE2EDuration="2m6.986004654s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:43.978111118 +0000 UTC m=+147.107803719" watchObservedRunningTime="2025-10-08 06:36:43.986004654 +0000 UTC m=+147.115697255" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.011232 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4" podStartSLOduration=127.01121067 podStartE2EDuration="2m7.01121067s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:44.007413333 +0000 UTC m=+147.137105934" watchObservedRunningTime="2025-10-08 06:36:44.01121067 +0000 UTC m=+147.140903271" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.084276 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:44 crc kubenswrapper[4958]: E1008 06:36:44.084629 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:44.584600712 +0000 UTC m=+147.714293313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.084771 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.085375 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w7xcj" podStartSLOduration=127.085357271 podStartE2EDuration="2m7.085357271s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:44.073999111 +0000 UTC m=+147.203691712" watchObservedRunningTime="2025-10-08 06:36:44.085357271 +0000 UTC 
m=+147.215049872" Oct 08 06:36:44 crc kubenswrapper[4958]: E1008 06:36:44.085564 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:44.585546978 +0000 UTC m=+147.715239579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.086513 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-n4pr7" podStartSLOduration=127.086508086 podStartE2EDuration="2m7.086508086s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:44.042214211 +0000 UTC m=+147.171906812" watchObservedRunningTime="2025-10-08 06:36:44.086508086 +0000 UTC m=+147.216200687" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.185905 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:44 crc kubenswrapper[4958]: E1008 06:36:44.186126 4958 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:44.686091412 +0000 UTC m=+147.815784013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.186301 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:44 crc kubenswrapper[4958]: E1008 06:36:44.186651 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:44.686642763 +0000 UTC m=+147.816335364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.273371 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jrx6t" podStartSLOduration=127.273342651 podStartE2EDuration="2m7.273342651s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:44.169085413 +0000 UTC m=+147.298778014" watchObservedRunningTime="2025-10-08 06:36:44.273342651 +0000 UTC m=+147.403035252" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.298689 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:44 crc kubenswrapper[4958]: E1008 06:36:44.299134 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:44.798776405 +0000 UTC m=+147.928469006 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.299934 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:44 crc kubenswrapper[4958]: E1008 06:36:44.300574 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:44.800559034 +0000 UTC m=+147.930251635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.401873 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:44 crc kubenswrapper[4958]: E1008 06:36:44.402191 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:44.902174029 +0000 UTC m=+148.031866630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.462920 4958 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mhg9b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.463007 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" podUID="ed1e22c8-bc14-4e16-b349-442d188ac881" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 08 06:36:44 crc kubenswrapper[4958]: E1008 06:36:44.505386 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:45.005365275 +0000 UTC m=+148.135057876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.505464 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.543577 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ddt6r"] Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.554362 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.554644 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.560891 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.561809 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ddt6r"] Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.608413 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:44 crc kubenswrapper[4958]: E1008 06:36:44.608681 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:45.108668065 +0000 UTC m=+148.238360666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.709594 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-utilities\") pod \"certified-operators-ddt6r\" (UID: \"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205\") " pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.709651 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-catalog-content\") pod \"certified-operators-ddt6r\" (UID: \"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205\") " pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.709698 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds6sw\" (UniqueName: \"kubernetes.io/projected/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-kube-api-access-ds6sw\") pod \"certified-operators-ddt6r\" (UID: \"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205\") " pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.709754 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:44 crc kubenswrapper[4958]: E1008 06:36:44.710106 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:45.210090573 +0000 UTC m=+148.339783174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.735471 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vx29t"] Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.736668 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.738517 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.749527 4958 patch_prober.go:28] interesting pod/router-default-5444994796-fkwgq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 06:36:44 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Oct 08 06:36:44 crc kubenswrapper[4958]: [+]process-running ok Oct 08 06:36:44 crc kubenswrapper[4958]: healthz check failed Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.749581 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fkwgq" podUID="61823234-969c-46ce-9c8c-7eb8f41867dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.752014 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vx29t"] Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.810438 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:44 crc kubenswrapper[4958]: E1008 06:36:44.810733 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-08 06:36:45.310715489 +0000 UTC m=+148.440408080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.810778 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-catalog-content\") pod \"certified-operators-ddt6r\" (UID: \"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205\") " pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.810821 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds6sw\" (UniqueName: \"kubernetes.io/projected/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-kube-api-access-ds6sw\") pod \"certified-operators-ddt6r\" (UID: \"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205\") " pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.810852 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.810934 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-utilities\") pod \"certified-operators-ddt6r\" (UID: \"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205\") " pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:36:44 crc kubenswrapper[4958]: E1008 06:36:44.811295 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:45.311279771 +0000 UTC m=+148.440972372 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.811591 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-catalog-content\") pod \"certified-operators-ddt6r\" (UID: \"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205\") " pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.811622 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-utilities\") pod \"certified-operators-ddt6r\" (UID: \"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205\") " pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.816323 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" 
event={"ID":"40a06246-a0bb-4a09-b184-9b1bca1610b2","Type":"ContainerStarted","Data":"231869652ae1ab0cb3170fb29ccb8c15c549fe86d4a909265686eea61f0b0a7a"} Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.818377 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qk24s" event={"ID":"552f0de0-1991-4a6c-8a8c-f44fced1f07f","Type":"ContainerStarted","Data":"e106411ac5f537e6972654636c81161db3f29d1645a62c873c525d80f8125b98"} Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.818415 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qk24s" event={"ID":"552f0de0-1991-4a6c-8a8c-f44fced1f07f","Type":"ContainerStarted","Data":"1824ce6da63376630084b46c69a36dcb36fb6ad8c6d2a624e48241cc0ae98c82"} Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.820229 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wx8p8" event={"ID":"79b44209-418b-49d3-8a0f-cb90d8928373","Type":"ContainerStarted","Data":"1664bcff982ee075a037d0f6f5b53b66b276fdb9ccf07ffac1aeb720c9467272"} Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.821692 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wx8p8" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.830028 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.832270 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.832577 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-plwft" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.838401 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ds6sw\" (UniqueName: \"kubernetes.io/projected/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-kube-api-access-ds6sw\") pod \"certified-operators-ddt6r\" (UID: \"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205\") " pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.850434 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-982gr" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.851565 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-qk24s" podStartSLOduration=128.85154404 podStartE2EDuration="2m8.85154404s" podCreationTimestamp="2025-10-08 06:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:44.848236202 +0000 UTC m=+147.977928803" watchObservedRunningTime="2025-10-08 06:36:44.85154404 +0000 UTC m=+147.981236641" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.874120 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.919543 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.920388 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1064e4-9481-4a97-a05d-f11d13be3e77-utilities\") pod \"community-operators-vx29t\" (UID: \"7f1064e4-9481-4a97-a05d-f11d13be3e77\") " pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.920500 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrtdm\" (UniqueName: \"kubernetes.io/projected/7f1064e4-9481-4a97-a05d-f11d13be3e77-kube-api-access-xrtdm\") pod \"community-operators-vx29t\" (UID: \"7f1064e4-9481-4a97-a05d-f11d13be3e77\") " pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.920551 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1064e4-9481-4a97-a05d-f11d13be3e77-catalog-content\") pod \"community-operators-vx29t\" (UID: \"7f1064e4-9481-4a97-a05d-f11d13be3e77\") " pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:36:44 crc kubenswrapper[4958]: E1008 06:36:44.921759 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-10-08 06:36:45.421743688 +0000 UTC m=+148.551436289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.941006 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tkq5d"] Oct 08 06:36:44 crc kubenswrapper[4958]: I1008 06:36:44.944553 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.027930 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tkq5d"] Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.028590 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1064e4-9481-4a97-a05d-f11d13be3e77-utilities\") pod \"community-operators-vx29t\" (UID: \"7f1064e4-9481-4a97-a05d-f11d13be3e77\") " pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.028656 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35d1ee6-16a1-4017-991c-17ebda75d0a0-utilities\") pod \"certified-operators-tkq5d\" (UID: \"e35d1ee6-16a1-4017-991c-17ebda75d0a0\") " pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.028699 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xrtdm\" (UniqueName: \"kubernetes.io/projected/7f1064e4-9481-4a97-a05d-f11d13be3e77-kube-api-access-xrtdm\") pod \"community-operators-vx29t\" (UID: \"7f1064e4-9481-4a97-a05d-f11d13be3e77\") " pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.028722 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkwlz\" (UniqueName: \"kubernetes.io/projected/e35d1ee6-16a1-4017-991c-17ebda75d0a0-kube-api-access-jkwlz\") pod \"certified-operators-tkq5d\" (UID: \"e35d1ee6-16a1-4017-991c-17ebda75d0a0\") " pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.028750 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1064e4-9481-4a97-a05d-f11d13be3e77-catalog-content\") pod \"community-operators-vx29t\" (UID: \"7f1064e4-9481-4a97-a05d-f11d13be3e77\") " pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.028803 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35d1ee6-16a1-4017-991c-17ebda75d0a0-catalog-content\") pod \"certified-operators-tkq5d\" (UID: \"e35d1ee6-16a1-4017-991c-17ebda75d0a0\") " pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.028939 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:45 crc kubenswrapper[4958]: E1008 06:36:45.029387 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:45.529368426 +0000 UTC m=+148.659061027 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.030168 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1064e4-9481-4a97-a05d-f11d13be3e77-utilities\") pod \"community-operators-vx29t\" (UID: \"7f1064e4-9481-4a97-a05d-f11d13be3e77\") " pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.030447 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1064e4-9481-4a97-a05d-f11d13be3e77-catalog-content\") pod \"community-operators-vx29t\" (UID: \"7f1064e4-9481-4a97-a05d-f11d13be3e77\") " pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.040063 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wx8p8" podStartSLOduration=9.040027509 podStartE2EDuration="9.040027509s" podCreationTimestamp="2025-10-08 06:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:45.004012184 +0000 UTC m=+148.133704785" watchObservedRunningTime="2025-10-08 06:36:45.040027509 +0000 UTC m=+148.169720200" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.064337 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrtdm\" (UniqueName: \"kubernetes.io/projected/7f1064e4-9481-4a97-a05d-f11d13be3e77-kube-api-access-xrtdm\") pod \"community-operators-vx29t\" (UID: \"7f1064e4-9481-4a97-a05d-f11d13be3e77\") " pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.130763 4958 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.130773 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:45 crc kubenswrapper[4958]: E1008 06:36:45.130875 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:45.630850926 +0000 UTC m=+148.760543527 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.131504 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35d1ee6-16a1-4017-991c-17ebda75d0a0-utilities\") pod \"certified-operators-tkq5d\" (UID: \"e35d1ee6-16a1-4017-991c-17ebda75d0a0\") " pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.131542 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkwlz\" (UniqueName: \"kubernetes.io/projected/e35d1ee6-16a1-4017-991c-17ebda75d0a0-kube-api-access-jkwlz\") pod \"certified-operators-tkq5d\" (UID: \"e35d1ee6-16a1-4017-991c-17ebda75d0a0\") " pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.131613 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35d1ee6-16a1-4017-991c-17ebda75d0a0-catalog-content\") pod \"certified-operators-tkq5d\" (UID: \"e35d1ee6-16a1-4017-991c-17ebda75d0a0\") " pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.131795 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: 
\"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:45 crc kubenswrapper[4958]: E1008 06:36:45.132231 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:45.632216899 +0000 UTC m=+148.761909500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.132768 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35d1ee6-16a1-4017-991c-17ebda75d0a0-catalog-content\") pod \"certified-operators-tkq5d\" (UID: \"e35d1ee6-16a1-4017-991c-17ebda75d0a0\") " pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.133290 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35d1ee6-16a1-4017-991c-17ebda75d0a0-utilities\") pod \"certified-operators-tkq5d\" (UID: \"e35d1ee6-16a1-4017-991c-17ebda75d0a0\") " pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.145794 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n2k7x"] Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.152031 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.158487 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n2k7x"] Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.170145 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkwlz\" (UniqueName: \"kubernetes.io/projected/e35d1ee6-16a1-4017-991c-17ebda75d0a0-kube-api-access-jkwlz\") pod \"certified-operators-tkq5d\" (UID: \"e35d1ee6-16a1-4017-991c-17ebda75d0a0\") " pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.234523 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:45 crc kubenswrapper[4958]: E1008 06:36:45.234597 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:45.734558492 +0000 UTC m=+148.864251093 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.235250 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:45 crc kubenswrapper[4958]: E1008 06:36:45.235914 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:45.735896493 +0000 UTC m=+148.865589094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.321119 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.338594 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.338770 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/246ee01a-d75a-4d2d-8eaf-b63787dffff7-catalog-content\") pod \"community-operators-n2k7x\" (UID: \"246ee01a-d75a-4d2d-8eaf-b63787dffff7\") " pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.338822 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cffvz\" (UniqueName: \"kubernetes.io/projected/246ee01a-d75a-4d2d-8eaf-b63787dffff7-kube-api-access-cffvz\") pod \"community-operators-n2k7x\" (UID: \"246ee01a-d75a-4d2d-8eaf-b63787dffff7\") " pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.338856 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/246ee01a-d75a-4d2d-8eaf-b63787dffff7-utilities\") pod \"community-operators-n2k7x\" (UID: \"246ee01a-d75a-4d2d-8eaf-b63787dffff7\") " pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:36:45 crc kubenswrapper[4958]: E1008 06:36:45.339805 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-10-08 06:36:45.839786996 +0000 UTC m=+148.969479597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.350720 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.442679 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cffvz\" (UniqueName: \"kubernetes.io/projected/246ee01a-d75a-4d2d-8eaf-b63787dffff7-kube-api-access-cffvz\") pod \"community-operators-n2k7x\" (UID: \"246ee01a-d75a-4d2d-8eaf-b63787dffff7\") " pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.443038 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/246ee01a-d75a-4d2d-8eaf-b63787dffff7-utilities\") pod \"community-operators-n2k7x\" (UID: \"246ee01a-d75a-4d2d-8eaf-b63787dffff7\") " pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.443378 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:45 crc 
kubenswrapper[4958]: I1008 06:36:45.443395 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/246ee01a-d75a-4d2d-8eaf-b63787dffff7-utilities\") pod \"community-operators-n2k7x\" (UID: \"246ee01a-d75a-4d2d-8eaf-b63787dffff7\") " pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.443415 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.443447 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.443586 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.443616 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/246ee01a-d75a-4d2d-8eaf-b63787dffff7-catalog-content\") pod 
\"community-operators-n2k7x\" (UID: \"246ee01a-d75a-4d2d-8eaf-b63787dffff7\") " pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.443647 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:45 crc kubenswrapper[4958]: E1008 06:36:45.445320 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:45.945303152 +0000 UTC m=+149.074995753 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.446531 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/246ee01a-d75a-4d2d-8eaf-b63787dffff7-catalog-content\") pod \"community-operators-n2k7x\" (UID: \"246ee01a-d75a-4d2d-8eaf-b63787dffff7\") " pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.449512 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.449776 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.451342 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.454088 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.460653 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cffvz\" (UniqueName: \"kubernetes.io/projected/246ee01a-d75a-4d2d-8eaf-b63787dffff7-kube-api-access-cffvz\") pod \"community-operators-n2k7x\" (UID: \"246ee01a-d75a-4d2d-8eaf-b63787dffff7\") " pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.484095 4958 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.544582 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:45 crc kubenswrapper[4958]: E1008 06:36:45.544972 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:46.04493538 +0000 UTC m=+149.174627981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.596957 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ddt6r"] Oct 08 06:36:45 crc kubenswrapper[4958]: W1008 06:36:45.606637 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19ad4cdd_0fb7_4aab_9986_6bbcfddc1205.slice/crio-49cb2652778e741143e3c77946666ca4fdf1b9e0774ee4a2148d2e3596062682 WatchSource:0}: Error finding container 49cb2652778e741143e3c77946666ca4fdf1b9e0774ee4a2148d2e3596062682: Status 404 returned error can't find the 
container with id 49cb2652778e741143e3c77946666ca4fdf1b9e0774ee4a2148d2e3596062682 Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.607063 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.645684 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:45 crc kubenswrapper[4958]: E1008 06:36:45.646327 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:46.146311216 +0000 UTC m=+149.276003817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.659622 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.670888 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.680271 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tkq5d"] Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.746833 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:45 crc kubenswrapper[4958]: E1008 06:36:45.747086 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-08 06:36:46.247062327 +0000 UTC m=+149.376754928 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.747123 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:45 crc kubenswrapper[4958]: E1008 06:36:45.747452 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-08 06:36:46.247440732 +0000 UTC m=+149.377133333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7chhl" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.749634 4958 patch_prober.go:28] interesting pod/router-default-5444994796-fkwgq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 06:36:45 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Oct 08 06:36:45 crc kubenswrapper[4958]: [+]process-running ok Oct 08 06:36:45 crc kubenswrapper[4958]: healthz check failed Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.749668 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fkwgq" podUID="61823234-969c-46ce-9c8c-7eb8f41867dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.766390 4958 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-08T06:36:45.131080835Z","Handler":null,"Name":""} Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.787186 4958 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.787229 4958 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: 
kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.828317 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vx29t"] Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.828876 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkq5d" event={"ID":"e35d1ee6-16a1-4017-991c-17ebda75d0a0","Type":"ContainerStarted","Data":"ee09a496636751db19c64d684d1d4388365a84a3b507032a14246966b4baa5e4"} Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.830300 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddt6r" event={"ID":"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205","Type":"ContainerStarted","Data":"9e501bca235983bf779846afb7a58458404da499d4490a9fe23631e000724545"} Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.830316 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddt6r" event={"ID":"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205","Type":"ContainerStarted","Data":"49cb2652778e741143e3c77946666ca4fdf1b9e0774ee4a2148d2e3596062682"} Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.836756 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" event={"ID":"40a06246-a0bb-4a09-b184-9b1bca1610b2","Type":"ContainerStarted","Data":"383703ad46a52cde14f631724533ffcd784d6a6e28242700dbc5359a65a732ab"} Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.836780 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" event={"ID":"40a06246-a0bb-4a09-b184-9b1bca1610b2","Type":"ContainerStarted","Data":"d8a014b107e65f57325b74a538b9ddbc66f8af093929b72ec17d17059ece7fbf"} Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.848393 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.851474 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.862910 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hr4l5" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.895632 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5kqn9" podStartSLOduration=9.89561652 podStartE2EDuration="9.89561652s" podCreationTimestamp="2025-10-08 06:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:45.861896144 +0000 UTC m=+148.991588745" watchObservedRunningTime="2025-10-08 06:36:45.89561652 +0000 UTC m=+149.025309121" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.949627 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.974979 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.975018 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:45 crc kubenswrapper[4958]: I1008 06:36:45.992626 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n2k7x"] Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.162126 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7chhl\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.212656 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:46 crc kubenswrapper[4958]: W1008 06:36:46.485133 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-067c3e4174f929878a50d21f10bb04a6e09bc6749b788c693ec3f60eed1d3367 WatchSource:0}: Error finding container 067c3e4174f929878a50d21f10bb04a6e09bc6749b788c693ec3f60eed1d3367: Status 404 returned error can't find the container with id 067c3e4174f929878a50d21f10bb04a6e09bc6749b788c693ec3f60eed1d3367 Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.535401 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bpkln"] Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.536390 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.538992 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.545365 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpkln"] Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.606260 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7chhl"] Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.663589 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmtxd\" (UniqueName: \"kubernetes.io/projected/95db505d-76df-4691-ba83-04a1ae4381cb-kube-api-access-hmtxd\") pod \"redhat-marketplace-bpkln\" (UID: \"95db505d-76df-4691-ba83-04a1ae4381cb\") " pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:36:46 crc kubenswrapper[4958]: 
I1008 06:36:46.663644 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95db505d-76df-4691-ba83-04a1ae4381cb-catalog-content\") pod \"redhat-marketplace-bpkln\" (UID: \"95db505d-76df-4691-ba83-04a1ae4381cb\") " pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.663669 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95db505d-76df-4691-ba83-04a1ae4381cb-utilities\") pod \"redhat-marketplace-bpkln\" (UID: \"95db505d-76df-4691-ba83-04a1ae4381cb\") " pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.747642 4958 patch_prober.go:28] interesting pod/router-default-5444994796-fkwgq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 06:36:46 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Oct 08 06:36:46 crc kubenswrapper[4958]: [+]process-running ok Oct 08 06:36:46 crc kubenswrapper[4958]: healthz check failed Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.747716 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fkwgq" podUID="61823234-969c-46ce-9c8c-7eb8f41867dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.764900 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmtxd\" (UniqueName: \"kubernetes.io/projected/95db505d-76df-4691-ba83-04a1ae4381cb-kube-api-access-hmtxd\") pod \"redhat-marketplace-bpkln\" (UID: \"95db505d-76df-4691-ba83-04a1ae4381cb\") " 
pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.764969 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95db505d-76df-4691-ba83-04a1ae4381cb-catalog-content\") pod \"redhat-marketplace-bpkln\" (UID: \"95db505d-76df-4691-ba83-04a1ae4381cb\") " pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.764998 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95db505d-76df-4691-ba83-04a1ae4381cb-utilities\") pod \"redhat-marketplace-bpkln\" (UID: \"95db505d-76df-4691-ba83-04a1ae4381cb\") " pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.765821 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95db505d-76df-4691-ba83-04a1ae4381cb-utilities\") pod \"redhat-marketplace-bpkln\" (UID: \"95db505d-76df-4691-ba83-04a1ae4381cb\") " pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.765870 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95db505d-76df-4691-ba83-04a1ae4381cb-catalog-content\") pod \"redhat-marketplace-bpkln\" (UID: \"95db505d-76df-4691-ba83-04a1ae4381cb\") " pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.789126 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmtxd\" (UniqueName: \"kubernetes.io/projected/95db505d-76df-4691-ba83-04a1ae4381cb-kube-api-access-hmtxd\") pod \"redhat-marketplace-bpkln\" (UID: \"95db505d-76df-4691-ba83-04a1ae4381cb\") " pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 
06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.841787 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"cd1779c9749bf8f77c58a6857ae149618af3915503e91fd7e527f6c15b0aa2f3"} Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.841837 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"067c3e4174f929878a50d21f10bb04a6e09bc6749b788c693ec3f60eed1d3367"} Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.841995 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.843455 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5438e1d4a76943d67ce1dad1c6c13a4f0040e455ed26b95799794beda2ca4522"} Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.843505 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4b6664c3fa25e175fbd61f03e6cd586ca0db3190b3ccfe27a694ba7a30400764"} Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.844738 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2396908e5e616807dd94611bfd3eb3f0e8ec7e04733a79b7c607ab01aa4384fe"} Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.844775 4958 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5de52db2ed33cb84d095f1b78e6c6c4e7a47a34b2d1e7e8acb69d02ccda352d0"} Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.846115 4958 generic.go:334] "Generic (PLEG): container finished" podID="246ee01a-d75a-4d2d-8eaf-b63787dffff7" containerID="334b03495f3c4d36b6a27044922314117c3f3598dbd41b2da0e2d78265229f93" exitCode=0 Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.846186 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2k7x" event={"ID":"246ee01a-d75a-4d2d-8eaf-b63787dffff7","Type":"ContainerDied","Data":"334b03495f3c4d36b6a27044922314117c3f3598dbd41b2da0e2d78265229f93"} Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.846204 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2k7x" event={"ID":"246ee01a-d75a-4d2d-8eaf-b63787dffff7","Type":"ContainerStarted","Data":"a22a87fd7d89702ee4cd981ab9415af14a5b5404ff5b981ca1d3b6d34d0ffc77"} Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.847575 4958 generic.go:334] "Generic (PLEG): container finished" podID="e35d1ee6-16a1-4017-991c-17ebda75d0a0" containerID="1a0a490057b9a2adf0c9d797326233eae36c4fe1764135be0277322987d43b43" exitCode=0 Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.847611 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkq5d" event={"ID":"e35d1ee6-16a1-4017-991c-17ebda75d0a0","Type":"ContainerDied","Data":"1a0a490057b9a2adf0c9d797326233eae36c4fe1764135be0277322987d43b43"} Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.848372 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.849168 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" containerID="9e501bca235983bf779846afb7a58458404da499d4490a9fe23631e000724545" exitCode=0 Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.849217 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddt6r" event={"ID":"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205","Type":"ContainerDied","Data":"9e501bca235983bf779846afb7a58458404da499d4490a9fe23631e000724545"} Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.851709 4958 generic.go:334] "Generic (PLEG): container finished" podID="7f1064e4-9481-4a97-a05d-f11d13be3e77" containerID="60b7e2c1bb026cb1eccca639600a89d2fd163d53c1433950f3be337fb009255f" exitCode=0 Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.851752 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx29t" event={"ID":"7f1064e4-9481-4a97-a05d-f11d13be3e77","Type":"ContainerDied","Data":"60b7e2c1bb026cb1eccca639600a89d2fd163d53c1433950f3be337fb009255f"} Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.851767 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx29t" event={"ID":"7f1064e4-9481-4a97-a05d-f11d13be3e77","Type":"ContainerStarted","Data":"54416aee36d089f2a282402f80a4a58d2b5bce6b6cc1c640b159980f350eb6a1"} Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.853500 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" event={"ID":"d8a63239-acb7-4fc9-8f62-ffa55f261901","Type":"ContainerStarted","Data":"49e85f150d81709f8671250d6585125f4bbb713c63c7f2d2312cd1f50af81a55"} Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.853524 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" 
event={"ID":"d8a63239-acb7-4fc9-8f62-ffa55f261901","Type":"ContainerStarted","Data":"561a232fd2e94cc2a8c994f5bef0af39ebd708e49d09b43f54b37f418e168c87"} Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.872500 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.922060 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lsh6r"] Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.923000 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:36:46 crc kubenswrapper[4958]: I1008 06:36:46.966442 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lsh6r"] Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.035247 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" podStartSLOduration=130.035230568 podStartE2EDuration="2m10.035230568s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:36:47.034430477 +0000 UTC m=+150.164123078" watchObservedRunningTime="2025-10-08 06:36:47.035230568 +0000 UTC m=+150.164923169" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.082089 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xj8t\" (UniqueName: \"kubernetes.io/projected/cc2f2f8d-5af1-4fde-b658-851131f8cd40-kube-api-access-7xj8t\") pod \"redhat-marketplace-lsh6r\" (UID: \"cc2f2f8d-5af1-4fde-b658-851131f8cd40\") " pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.082168 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc2f2f8d-5af1-4fde-b658-851131f8cd40-utilities\") pod \"redhat-marketplace-lsh6r\" (UID: \"cc2f2f8d-5af1-4fde-b658-851131f8cd40\") " pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.082214 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc2f2f8d-5af1-4fde-b658-851131f8cd40-catalog-content\") pod \"redhat-marketplace-lsh6r\" (UID: \"cc2f2f8d-5af1-4fde-b658-851131f8cd40\") " pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.135632 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.136242 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.139077 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.139237 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.147412 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.183138 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xj8t\" (UniqueName: \"kubernetes.io/projected/cc2f2f8d-5af1-4fde-b658-851131f8cd40-kube-api-access-7xj8t\") pod \"redhat-marketplace-lsh6r\" (UID: \"cc2f2f8d-5af1-4fde-b658-851131f8cd40\") " pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.183215 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc2f2f8d-5af1-4fde-b658-851131f8cd40-utilities\") pod \"redhat-marketplace-lsh6r\" (UID: \"cc2f2f8d-5af1-4fde-b658-851131f8cd40\") " pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.183266 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc2f2f8d-5af1-4fde-b658-851131f8cd40-catalog-content\") pod \"redhat-marketplace-lsh6r\" (UID: \"cc2f2f8d-5af1-4fde-b658-851131f8cd40\") " pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.183711 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/cc2f2f8d-5af1-4fde-b658-851131f8cd40-catalog-content\") pod \"redhat-marketplace-lsh6r\" (UID: \"cc2f2f8d-5af1-4fde-b658-851131f8cd40\") " pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.183907 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc2f2f8d-5af1-4fde-b658-851131f8cd40-utilities\") pod \"redhat-marketplace-lsh6r\" (UID: \"cc2f2f8d-5af1-4fde-b658-851131f8cd40\") " pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.193890 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpkln"] Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.202423 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xj8t\" (UniqueName: \"kubernetes.io/projected/cc2f2f8d-5af1-4fde-b658-851131f8cd40-kube-api-access-7xj8t\") pod \"redhat-marketplace-lsh6r\" (UID: \"cc2f2f8d-5af1-4fde-b658-851131f8cd40\") " pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.241971 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.285464 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9f6c8b4-d062-4518-823e-92ffb28dcfcf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d9f6c8b4-d062-4518-823e-92ffb28dcfcf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.285544 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9f6c8b4-d062-4518-823e-92ffb28dcfcf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d9f6c8b4-d062-4518-823e-92ffb28dcfcf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.386708 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9f6c8b4-d062-4518-823e-92ffb28dcfcf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d9f6c8b4-d062-4518-823e-92ffb28dcfcf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.386798 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9f6c8b4-d062-4518-823e-92ffb28dcfcf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d9f6c8b4-d062-4518-823e-92ffb28dcfcf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.387044 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9f6c8b4-d062-4518-823e-92ffb28dcfcf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"d9f6c8b4-d062-4518-823e-92ffb28dcfcf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.425466 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9f6c8b4-d062-4518-823e-92ffb28dcfcf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d9f6c8b4-d062-4518-823e-92ffb28dcfcf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.473174 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.485252 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lsh6r"] Oct 08 06:36:47 crc kubenswrapper[4958]: W1008 06:36:47.513553 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc2f2f8d_5af1_4fde_b658_851131f8cd40.slice/crio-2c02b5db684971466fd2fdca02338a9ece6787181efeec55bc72d2883fb297bc WatchSource:0}: Error finding container 2c02b5db684971466fd2fdca02338a9ece6787181efeec55bc72d2883fb297bc: Status 404 returned error can't find the container with id 2c02b5db684971466fd2fdca02338a9ece6787181efeec55bc72d2883fb297bc Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.589649 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.694130 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.746017 4958 patch_prober.go:28] interesting pod/router-default-5444994796-fkwgq container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 06:36:47 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Oct 08 06:36:47 crc kubenswrapper[4958]: [+]process-running ok Oct 08 06:36:47 crc kubenswrapper[4958]: healthz check failed Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.746072 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fkwgq" podUID="61823234-969c-46ce-9c8c-7eb8f41867dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.860202 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d9f6c8b4-d062-4518-823e-92ffb28dcfcf","Type":"ContainerStarted","Data":"e60c7905813ab44dd026368a12c6df29dd3d94be38d13adf0b1174001a0e828d"} Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.864327 4958 generic.go:334] "Generic (PLEG): container finished" podID="95db505d-76df-4691-ba83-04a1ae4381cb" containerID="f490e7e094962f1df9ba3e58c450b8a96c4e724bee2a4ce9e2369b5d2c049a4a" exitCode=0 Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.864427 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpkln" event={"ID":"95db505d-76df-4691-ba83-04a1ae4381cb","Type":"ContainerDied","Data":"f490e7e094962f1df9ba3e58c450b8a96c4e724bee2a4ce9e2369b5d2c049a4a"} Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.864447 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpkln" event={"ID":"95db505d-76df-4691-ba83-04a1ae4381cb","Type":"ContainerStarted","Data":"565bb40e83a7546a28d29798fac5b2c59e1c5026756d807b06f0e8fd039ad125"} Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.868308 4958 generic.go:334] 
"Generic (PLEG): container finished" podID="cc2f2f8d-5af1-4fde-b658-851131f8cd40" containerID="e975cd6df3c41242da554e368ad17894016b5302b5d093cdc0074177f8e8bbc3" exitCode=0 Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.868367 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsh6r" event={"ID":"cc2f2f8d-5af1-4fde-b658-851131f8cd40","Type":"ContainerDied","Data":"e975cd6df3c41242da554e368ad17894016b5302b5d093cdc0074177f8e8bbc3"} Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.868954 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.868974 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsh6r" event={"ID":"cc2f2f8d-5af1-4fde-b658-851131f8cd40","Type":"ContainerStarted","Data":"2c02b5db684971466fd2fdca02338a9ece6787181efeec55bc72d2883fb297bc"} Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.924811 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4vjhq"] Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.929826 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.933454 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 08 06:36:47 crc kubenswrapper[4958]: I1008 06:36:47.935783 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4vjhq"] Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.097840 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4403715-42d1-481f-b403-5f04b1772cd5-catalog-content\") pod \"redhat-operators-4vjhq\" (UID: \"c4403715-42d1-481f-b403-5f04b1772cd5\") " pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.097930 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4403715-42d1-481f-b403-5f04b1772cd5-utilities\") pod \"redhat-operators-4vjhq\" (UID: \"c4403715-42d1-481f-b403-5f04b1772cd5\") " pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.097982 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr698\" (UniqueName: \"kubernetes.io/projected/c4403715-42d1-481f-b403-5f04b1772cd5-kube-api-access-vr698\") pod \"redhat-operators-4vjhq\" (UID: \"c4403715-42d1-481f-b403-5f04b1772cd5\") " pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.199553 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4403715-42d1-481f-b403-5f04b1772cd5-utilities\") pod \"redhat-operators-4vjhq\" (UID: \"c4403715-42d1-481f-b403-5f04b1772cd5\") " 
pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.199895 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr698\" (UniqueName: \"kubernetes.io/projected/c4403715-42d1-481f-b403-5f04b1772cd5-kube-api-access-vr698\") pod \"redhat-operators-4vjhq\" (UID: \"c4403715-42d1-481f-b403-5f04b1772cd5\") " pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.199935 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4403715-42d1-481f-b403-5f04b1772cd5-catalog-content\") pod \"redhat-operators-4vjhq\" (UID: \"c4403715-42d1-481f-b403-5f04b1772cd5\") " pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.201358 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4403715-42d1-481f-b403-5f04b1772cd5-utilities\") pod \"redhat-operators-4vjhq\" (UID: \"c4403715-42d1-481f-b403-5f04b1772cd5\") " pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.201923 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4403715-42d1-481f-b403-5f04b1772cd5-catalog-content\") pod \"redhat-operators-4vjhq\" (UID: \"c4403715-42d1-481f-b403-5f04b1772cd5\") " pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.227656 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr698\" (UniqueName: \"kubernetes.io/projected/c4403715-42d1-481f-b403-5f04b1772cd5-kube-api-access-vr698\") pod \"redhat-operators-4vjhq\" (UID: \"c4403715-42d1-481f-b403-5f04b1772cd5\") " pod="openshift-marketplace/redhat-operators-4vjhq" Oct 
08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.252925 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.325233 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j7c47"] Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.328388 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.328576 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j7c47"] Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.505594 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dbd2bb0-6520-4322-b92e-22dae7a66022-catalog-content\") pod \"redhat-operators-j7c47\" (UID: \"1dbd2bb0-6520-4322-b92e-22dae7a66022\") " pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.505750 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dbd2bb0-6520-4322-b92e-22dae7a66022-utilities\") pod \"redhat-operators-j7c47\" (UID: \"1dbd2bb0-6520-4322-b92e-22dae7a66022\") " pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.505801 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvhzx\" (UniqueName: \"kubernetes.io/projected/1dbd2bb0-6520-4322-b92e-22dae7a66022-kube-api-access-lvhzx\") pod \"redhat-operators-j7c47\" (UID: \"1dbd2bb0-6520-4322-b92e-22dae7a66022\") " pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 
06:36:48.593624 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4vjhq"] Oct 08 06:36:48 crc kubenswrapper[4958]: W1008 06:36:48.599738 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4403715_42d1_481f_b403_5f04b1772cd5.slice/crio-758d6bdf2a655bd15ea3f20905d792f16ea0f6cd4966f0b7048730d2056652db WatchSource:0}: Error finding container 758d6bdf2a655bd15ea3f20905d792f16ea0f6cd4966f0b7048730d2056652db: Status 404 returned error can't find the container with id 758d6bdf2a655bd15ea3f20905d792f16ea0f6cd4966f0b7048730d2056652db Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.607138 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dbd2bb0-6520-4322-b92e-22dae7a66022-catalog-content\") pod \"redhat-operators-j7c47\" (UID: \"1dbd2bb0-6520-4322-b92e-22dae7a66022\") " pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.607201 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dbd2bb0-6520-4322-b92e-22dae7a66022-utilities\") pod \"redhat-operators-j7c47\" (UID: \"1dbd2bb0-6520-4322-b92e-22dae7a66022\") " pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.607236 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvhzx\" (UniqueName: \"kubernetes.io/projected/1dbd2bb0-6520-4322-b92e-22dae7a66022-kube-api-access-lvhzx\") pod \"redhat-operators-j7c47\" (UID: \"1dbd2bb0-6520-4322-b92e-22dae7a66022\") " pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.607828 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/1dbd2bb0-6520-4322-b92e-22dae7a66022-catalog-content\") pod \"redhat-operators-j7c47\" (UID: \"1dbd2bb0-6520-4322-b92e-22dae7a66022\") " pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.608052 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dbd2bb0-6520-4322-b92e-22dae7a66022-utilities\") pod \"redhat-operators-j7c47\" (UID: \"1dbd2bb0-6520-4322-b92e-22dae7a66022\") " pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.640759 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvhzx\" (UniqueName: \"kubernetes.io/projected/1dbd2bb0-6520-4322-b92e-22dae7a66022-kube-api-access-lvhzx\") pod \"redhat-operators-j7c47\" (UID: \"1dbd2bb0-6520-4322-b92e-22dae7a66022\") " pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.658828 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:36:48 crc kubenswrapper[4958]: E1008 06:36:48.733532 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd9f6c8b4_d062_4518_823e_92ffb28dcfcf.slice/crio-a03d83a4b1e2dd5fb64d998ddd194f63663451d75ceaadd122344d2afd0e63ad.scope\": RecentStats: unable to find data in memory cache]" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.746332 4958 patch_prober.go:28] interesting pod/router-default-5444994796-fkwgq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 06:36:48 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Oct 08 06:36:48 crc kubenswrapper[4958]: [+]process-running ok Oct 08 06:36:48 crc kubenswrapper[4958]: healthz check failed Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.746411 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fkwgq" podUID="61823234-969c-46ce-9c8c-7eb8f41867dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.880883 4958 generic.go:334] "Generic (PLEG): container finished" podID="c4403715-42d1-481f-b403-5f04b1772cd5" containerID="137115a99a86e8668e523759e4306a7723c3bd0116ad8f42aba16515c10159d8" exitCode=0 Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.881062 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vjhq" event={"ID":"c4403715-42d1-481f-b403-5f04b1772cd5","Type":"ContainerDied","Data":"137115a99a86e8668e523759e4306a7723c3bd0116ad8f42aba16515c10159d8"} Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.881339 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-4vjhq" event={"ID":"c4403715-42d1-481f-b403-5f04b1772cd5","Type":"ContainerStarted","Data":"758d6bdf2a655bd15ea3f20905d792f16ea0f6cd4966f0b7048730d2056652db"} Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.889302 4958 generic.go:334] "Generic (PLEG): container finished" podID="d9f6c8b4-d062-4518-823e-92ffb28dcfcf" containerID="a03d83a4b1e2dd5fb64d998ddd194f63663451d75ceaadd122344d2afd0e63ad" exitCode=0 Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.889349 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d9f6c8b4-d062-4518-823e-92ffb28dcfcf","Type":"ContainerDied","Data":"a03d83a4b1e2dd5fb64d998ddd194f63663451d75ceaadd122344d2afd0e63ad"} Oct 08 06:36:48 crc kubenswrapper[4958]: I1008 06:36:48.940769 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j7c47"] Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.065674 4958 patch_prober.go:28] interesting pod/downloads-7954f5f757-d22g5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.065732 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-d22g5" podUID="0381af03-9a83-4a43-ab4f-79cbd5d3351f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.065766 4958 patch_prober.go:28] interesting pod/downloads-7954f5f757-d22g5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Oct 08 
06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.065828 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-d22g5" podUID="0381af03-9a83-4a43-ab4f-79cbd5d3351f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.34:8080/\": dial tcp 10.217.0.34:8080: connect: connection refused" Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.154231 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.652735 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.653070 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.660111 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.716711 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.716845 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.718880 4958 patch_prober.go:28] interesting pod/console-f9d7485db-86rhc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.718927 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-86rhc" 
podUID="31a3bc7e-1303-46e9-a3bd-b7e10c6884bd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.743482 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.747993 4958 patch_prober.go:28] interesting pod/router-default-5444994796-fkwgq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 06:36:49 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Oct 08 06:36:49 crc kubenswrapper[4958]: [+]process-running ok Oct 08 06:36:49 crc kubenswrapper[4958]: healthz check failed Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.748089 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fkwgq" podUID="61823234-969c-46ce-9c8c-7eb8f41867dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.907360 4958 generic.go:334] "Generic (PLEG): container finished" podID="86b11ac2-2a27-4859-b9b2-595d2ca7556d" containerID="a673b4e562496da508b65c252464eeb031c8e8a1697e452cbd0ffef29062d9cd" exitCode=0 Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.907464 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" event={"ID":"86b11ac2-2a27-4859-b9b2-595d2ca7556d","Type":"ContainerDied","Data":"a673b4e562496da508b65c252464eeb031c8e8a1697e452cbd0ffef29062d9cd"} Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.912663 4958 generic.go:334] "Generic (PLEG): container finished" podID="1dbd2bb0-6520-4322-b92e-22dae7a66022" 
containerID="bc6c05b209232b280635049dc2815b73d95806f065187d72df67ba446b3dd8a8" exitCode=0 Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.914569 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j7c47" event={"ID":"1dbd2bb0-6520-4322-b92e-22dae7a66022","Type":"ContainerDied","Data":"bc6c05b209232b280635049dc2815b73d95806f065187d72df67ba446b3dd8a8"} Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.914601 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j7c47" event={"ID":"1dbd2bb0-6520-4322-b92e-22dae7a66022","Type":"ContainerStarted","Data":"8dd4e4c3d5890b6af68119877809313e69deb56bfdb29908969c50541634a1bf"} Oct 08 06:36:49 crc kubenswrapper[4958]: I1008 06:36:49.921576 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-qk24s" Oct 08 06:36:50 crc kubenswrapper[4958]: I1008 06:36:50.259042 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 06:36:50 crc kubenswrapper[4958]: I1008 06:36:50.444557 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9f6c8b4-d062-4518-823e-92ffb28dcfcf-kube-api-access\") pod \"d9f6c8b4-d062-4518-823e-92ffb28dcfcf\" (UID: \"d9f6c8b4-d062-4518-823e-92ffb28dcfcf\") " Oct 08 06:36:50 crc kubenswrapper[4958]: I1008 06:36:50.444606 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9f6c8b4-d062-4518-823e-92ffb28dcfcf-kubelet-dir\") pod \"d9f6c8b4-d062-4518-823e-92ffb28dcfcf\" (UID: \"d9f6c8b4-d062-4518-823e-92ffb28dcfcf\") " Oct 08 06:36:50 crc kubenswrapper[4958]: I1008 06:36:50.444754 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9f6c8b4-d062-4518-823e-92ffb28dcfcf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d9f6c8b4-d062-4518-823e-92ffb28dcfcf" (UID: "d9f6c8b4-d062-4518-823e-92ffb28dcfcf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:36:50 crc kubenswrapper[4958]: I1008 06:36:50.444870 4958 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9f6c8b4-d062-4518-823e-92ffb28dcfcf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 08 06:36:50 crc kubenswrapper[4958]: I1008 06:36:50.451795 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f6c8b4-d062-4518-823e-92ffb28dcfcf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d9f6c8b4-d062-4518-823e-92ffb28dcfcf" (UID: "d9f6c8b4-d062-4518-823e-92ffb28dcfcf"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:36:50 crc kubenswrapper[4958]: I1008 06:36:50.546007 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9f6c8b4-d062-4518-823e-92ffb28dcfcf-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 06:36:50 crc kubenswrapper[4958]: I1008 06:36:50.745840 4958 patch_prober.go:28] interesting pod/router-default-5444994796-fkwgq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 08 06:36:50 crc kubenswrapper[4958]: [+]has-synced ok Oct 08 06:36:50 crc kubenswrapper[4958]: [+]process-running ok Oct 08 06:36:50 crc kubenswrapper[4958]: healthz check failed Oct 08 06:36:50 crc kubenswrapper[4958]: I1008 06:36:50.746390 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fkwgq" podUID="61823234-969c-46ce-9c8c-7eb8f41867dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 08 06:36:50 crc kubenswrapper[4958]: I1008 06:36:50.924784 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d9f6c8b4-d062-4518-823e-92ffb28dcfcf","Type":"ContainerDied","Data":"e60c7905813ab44dd026368a12c6df29dd3d94be38d13adf0b1174001a0e828d"} Oct 08 06:36:50 crc kubenswrapper[4958]: I1008 06:36:50.924996 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 08 06:36:50 crc kubenswrapper[4958]: I1008 06:36:50.925377 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e60c7905813ab44dd026368a12c6df29dd3d94be38d13adf0b1174001a0e828d" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.368754 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.463457 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqnjn\" (UniqueName: \"kubernetes.io/projected/86b11ac2-2a27-4859-b9b2-595d2ca7556d-kube-api-access-lqnjn\") pod \"86b11ac2-2a27-4859-b9b2-595d2ca7556d\" (UID: \"86b11ac2-2a27-4859-b9b2-595d2ca7556d\") " Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.463536 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86b11ac2-2a27-4859-b9b2-595d2ca7556d-config-volume\") pod \"86b11ac2-2a27-4859-b9b2-595d2ca7556d\" (UID: \"86b11ac2-2a27-4859-b9b2-595d2ca7556d\") " Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.463565 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86b11ac2-2a27-4859-b9b2-595d2ca7556d-secret-volume\") pod \"86b11ac2-2a27-4859-b9b2-595d2ca7556d\" (UID: \"86b11ac2-2a27-4859-b9b2-595d2ca7556d\") " Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.464234 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86b11ac2-2a27-4859-b9b2-595d2ca7556d-config-volume" (OuterVolumeSpecName: "config-volume") pod "86b11ac2-2a27-4859-b9b2-595d2ca7556d" (UID: "86b11ac2-2a27-4859-b9b2-595d2ca7556d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.468234 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b11ac2-2a27-4859-b9b2-595d2ca7556d-kube-api-access-lqnjn" (OuterVolumeSpecName: "kube-api-access-lqnjn") pod "86b11ac2-2a27-4859-b9b2-595d2ca7556d" (UID: "86b11ac2-2a27-4859-b9b2-595d2ca7556d"). InnerVolumeSpecName "kube-api-access-lqnjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.468588 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b11ac2-2a27-4859-b9b2-595d2ca7556d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "86b11ac2-2a27-4859-b9b2-595d2ca7556d" (UID: "86b11ac2-2a27-4859-b9b2-595d2ca7556d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.564523 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqnjn\" (UniqueName: \"kubernetes.io/projected/86b11ac2-2a27-4859-b9b2-595d2ca7556d-kube-api-access-lqnjn\") on node \"crc\" DevicePath \"\"" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.564550 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86b11ac2-2a27-4859-b9b2-595d2ca7556d-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.564561 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86b11ac2-2a27-4859-b9b2-595d2ca7556d-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.745786 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:51 crc kubenswrapper[4958]: 
I1008 06:36:51.747785 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-fkwgq" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.823627 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 08 06:36:51 crc kubenswrapper[4958]: E1008 06:36:51.824241 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b11ac2-2a27-4859-b9b2-595d2ca7556d" containerName="collect-profiles" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.824255 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b11ac2-2a27-4859-b9b2-595d2ca7556d" containerName="collect-profiles" Oct 08 06:36:51 crc kubenswrapper[4958]: E1008 06:36:51.824267 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f6c8b4-d062-4518-823e-92ffb28dcfcf" containerName="pruner" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.824273 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f6c8b4-d062-4518-823e-92ffb28dcfcf" containerName="pruner" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.824372 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b11ac2-2a27-4859-b9b2-595d2ca7556d" containerName="collect-profiles" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.824383 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f6c8b4-d062-4518-823e-92ffb28dcfcf" containerName="pruner" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.824737 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.826821 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.827048 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.827070 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.933796 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.934173 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww" event={"ID":"86b11ac2-2a27-4859-b9b2-595d2ca7556d","Type":"ContainerDied","Data":"428dec76ee01f3750c082426339ff34ad40ef5210ccb966bff494cf73604969a"} Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.934191 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="428dec76ee01f3750c082426339ff34ad40ef5210ccb966bff494cf73604969a" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.970591 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1995579b-a766-4328-93f2-ad982ec7d4fe-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1995579b-a766-4328-93f2-ad982ec7d4fe\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 06:36:51 crc kubenswrapper[4958]: I1008 06:36:51.970694 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1995579b-a766-4328-93f2-ad982ec7d4fe-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1995579b-a766-4328-93f2-ad982ec7d4fe\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 06:36:52 crc kubenswrapper[4958]: I1008 06:36:52.071719 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1995579b-a766-4328-93f2-ad982ec7d4fe-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1995579b-a766-4328-93f2-ad982ec7d4fe\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 06:36:52 crc kubenswrapper[4958]: I1008 06:36:52.071799 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1995579b-a766-4328-93f2-ad982ec7d4fe-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1995579b-a766-4328-93f2-ad982ec7d4fe\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 06:36:52 crc kubenswrapper[4958]: I1008 06:36:52.075463 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1995579b-a766-4328-93f2-ad982ec7d4fe-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1995579b-a766-4328-93f2-ad982ec7d4fe\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 06:36:52 crc kubenswrapper[4958]: I1008 06:36:52.102081 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1995579b-a766-4328-93f2-ad982ec7d4fe-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1995579b-a766-4328-93f2-ad982ec7d4fe\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 06:36:52 crc kubenswrapper[4958]: I1008 06:36:52.150306 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 06:36:55 crc kubenswrapper[4958]: I1008 06:36:55.135332 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wx8p8" Oct 08 06:36:59 crc kubenswrapper[4958]: I1008 06:36:59.074042 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-d22g5" Oct 08 06:36:59 crc kubenswrapper[4958]: I1008 06:36:59.570588 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs\") pod \"network-metrics-daemon-xbfbp\" (UID: \"3776a5a1-bd0d-42af-9226-7251ee6b8788\") " pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:59 crc kubenswrapper[4958]: I1008 06:36:59.592433 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3776a5a1-bd0d-42af-9226-7251ee6b8788-metrics-certs\") pod \"network-metrics-daemon-xbfbp\" (UID: \"3776a5a1-bd0d-42af-9226-7251ee6b8788\") " pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:36:59 crc kubenswrapper[4958]: I1008 06:36:59.724536 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:59 crc kubenswrapper[4958]: I1008 06:36:59.729698 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:36:59 crc kubenswrapper[4958]: I1008 06:36:59.787827 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbfbp" Oct 08 06:37:06 crc kubenswrapper[4958]: I1008 06:37:06.220259 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:37:06 crc kubenswrapper[4958]: I1008 06:37:06.845520 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:37:06 crc kubenswrapper[4958]: I1008 06:37:06.845600 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:37:12 crc kubenswrapper[4958]: I1008 06:37:12.433540 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 08 06:37:12 crc kubenswrapper[4958]: I1008 06:37:12.531925 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xbfbp"] Oct 08 06:37:12 crc kubenswrapper[4958]: W1008 06:37:12.615093 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3776a5a1_bd0d_42af_9226_7251ee6b8788.slice/crio-60dfe64d71ee3654d5041f7fb4f4011ca0affcd947a029ab53b4c3b94aa9f5fd WatchSource:0}: Error finding container 60dfe64d71ee3654d5041f7fb4f4011ca0affcd947a029ab53b4c3b94aa9f5fd: Status 404 returned error can't find the container with id 60dfe64d71ee3654d5041f7fb4f4011ca0affcd947a029ab53b4c3b94aa9f5fd Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.072998 4958 generic.go:334] "Generic 
(PLEG): container finished" podID="95db505d-76df-4691-ba83-04a1ae4381cb" containerID="ea7547cb3874432f164deb6618fc2d1c042e8ba2f08596c62b02fcfec0747c9b" exitCode=0 Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.073126 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpkln" event={"ID":"95db505d-76df-4691-ba83-04a1ae4381cb","Type":"ContainerDied","Data":"ea7547cb3874432f164deb6618fc2d1c042e8ba2f08596c62b02fcfec0747c9b"} Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.082919 4958 generic.go:334] "Generic (PLEG): container finished" podID="cc2f2f8d-5af1-4fde-b658-851131f8cd40" containerID="595dab6a43b3684859d85897120d68fa3697e18c165b2882a97639fb056beb5d" exitCode=0 Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.083779 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsh6r" event={"ID":"cc2f2f8d-5af1-4fde-b658-851131f8cd40","Type":"ContainerDied","Data":"595dab6a43b3684859d85897120d68fa3697e18c165b2882a97639fb056beb5d"} Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.086429 4958 generic.go:334] "Generic (PLEG): container finished" podID="c4403715-42d1-481f-b403-5f04b1772cd5" containerID="cc57746aad3cc841e827184b41e26bfb0420a0c193fcf0b7b1f58dbcd9c77968" exitCode=0 Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.086506 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vjhq" event={"ID":"c4403715-42d1-481f-b403-5f04b1772cd5","Type":"ContainerDied","Data":"cc57746aad3cc841e827184b41e26bfb0420a0c193fcf0b7b1f58dbcd9c77968"} Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.088470 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" event={"ID":"3776a5a1-bd0d-42af-9226-7251ee6b8788","Type":"ContainerStarted","Data":"60dfe64d71ee3654d5041f7fb4f4011ca0affcd947a029ab53b4c3b94aa9f5fd"} Oct 08 06:37:13 crc kubenswrapper[4958]: 
I1008 06:37:13.091313 4958 generic.go:334] "Generic (PLEG): container finished" podID="7f1064e4-9481-4a97-a05d-f11d13be3e77" containerID="61242440ad9fc300f362c49011a48760bd0e7162d18df9816f70f92cb3606376" exitCode=0 Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.091551 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx29t" event={"ID":"7f1064e4-9481-4a97-a05d-f11d13be3e77","Type":"ContainerDied","Data":"61242440ad9fc300f362c49011a48760bd0e7162d18df9816f70f92cb3606376"} Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.095630 4958 generic.go:334] "Generic (PLEG): container finished" podID="19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" containerID="e4891c9b50ce76ee4590c2b5b8593b4d9832cc6627decfa8478a6d4182ada315" exitCode=0 Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.096030 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddt6r" event={"ID":"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205","Type":"ContainerDied","Data":"e4891c9b50ce76ee4590c2b5b8593b4d9832cc6627decfa8478a6d4182ada315"} Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.102298 4958 generic.go:334] "Generic (PLEG): container finished" podID="246ee01a-d75a-4d2d-8eaf-b63787dffff7" containerID="38526ffa0b82c72ae23247c86025a1ec1d743568c1523a937325aa3acc655a01" exitCode=0 Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.102426 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2k7x" event={"ID":"246ee01a-d75a-4d2d-8eaf-b63787dffff7","Type":"ContainerDied","Data":"38526ffa0b82c72ae23247c86025a1ec1d743568c1523a937325aa3acc655a01"} Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.112707 4958 generic.go:334] "Generic (PLEG): container finished" podID="1dbd2bb0-6520-4322-b92e-22dae7a66022" containerID="a151dc86ab74d4569aa9f704a062b9d75820861744c6da31cdaa21fd5a7e7f52" exitCode=0 Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 
06:37:13.112835 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j7c47" event={"ID":"1dbd2bb0-6520-4322-b92e-22dae7a66022","Type":"ContainerDied","Data":"a151dc86ab74d4569aa9f704a062b9d75820861744c6da31cdaa21fd5a7e7f52"} Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.120028 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1995579b-a766-4328-93f2-ad982ec7d4fe","Type":"ContainerStarted","Data":"f869202ca97a21941996442f386b73a7571d2901c870735d496740c82bac2500"} Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.126513 4958 generic.go:334] "Generic (PLEG): container finished" podID="e35d1ee6-16a1-4017-991c-17ebda75d0a0" containerID="2687dec5a9761f6f550f7470b7db66c8f2d5cc5c64b3160c9a4d9e57982be822" exitCode=0 Oct 08 06:37:13 crc kubenswrapper[4958]: I1008 06:37:13.126561 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkq5d" event={"ID":"e35d1ee6-16a1-4017-991c-17ebda75d0a0","Type":"ContainerDied","Data":"2687dec5a9761f6f550f7470b7db66c8f2d5cc5c64b3160c9a4d9e57982be822"} Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.133123 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2k7x" event={"ID":"246ee01a-d75a-4d2d-8eaf-b63787dffff7","Type":"ContainerStarted","Data":"36916b5d5b11498c26307f139ecb8c6dc6fef1c5e135c5b39eedf2407b9b97a9"} Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.134850 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" event={"ID":"3776a5a1-bd0d-42af-9226-7251ee6b8788","Type":"ContainerStarted","Data":"7eae57e70816148744a1e50cc4a88380f1c56bcfc3dd037b411d204343e45980"} Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.135211 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xbfbp" 
event={"ID":"3776a5a1-bd0d-42af-9226-7251ee6b8788","Type":"ContainerStarted","Data":"4920acc2f2ebe145f512b0a5bfcbfe3cb8e20bca7f3ccb876c4e5edc41437d03"} Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.137399 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j7c47" event={"ID":"1dbd2bb0-6520-4322-b92e-22dae7a66022","Type":"ContainerStarted","Data":"d0e1f2b511b74718bbf0fc8a75a6f7ebe475d7f82a2a992e84483714a71384b1"} Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.139373 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkq5d" event={"ID":"e35d1ee6-16a1-4017-991c-17ebda75d0a0","Type":"ContainerStarted","Data":"03bf6e7116d4279690832b27da1d77a3600c00a8b607cf36831d0381027ae1db"} Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.141314 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx29t" event={"ID":"7f1064e4-9481-4a97-a05d-f11d13be3e77","Type":"ContainerStarted","Data":"fc459fb2ea0a16c674c880a8a23ce370c12c74816b9ef9c3346807ea249034cd"} Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.142570 4958 generic.go:334] "Generic (PLEG): container finished" podID="1995579b-a766-4328-93f2-ad982ec7d4fe" containerID="53f5c3fa5446923f489d4da7faf880766bf814c0c6935257e26d1caca656d55b" exitCode=0 Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.142630 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1995579b-a766-4328-93f2-ad982ec7d4fe","Type":"ContainerDied","Data":"53f5c3fa5446923f489d4da7faf880766bf814c0c6935257e26d1caca656d55b"} Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.144727 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpkln" 
event={"ID":"95db505d-76df-4691-ba83-04a1ae4381cb","Type":"ContainerStarted","Data":"5711c3e44a0fe732ac43c60d132d7c6d9fe7e9d3fcaa9ed807e53bea1eb1fa4b"} Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.146696 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsh6r" event={"ID":"cc2f2f8d-5af1-4fde-b658-851131f8cd40","Type":"ContainerStarted","Data":"386d616abcddd22ef323c9ae842e7731b4ca7b381f7409c2be809626f5308247"} Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.148912 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vjhq" event={"ID":"c4403715-42d1-481f-b403-5f04b1772cd5","Type":"ContainerStarted","Data":"0b9fa21be7c67daa729e86249a38ed55cd75ddff7c98d5a617dac92231560d49"} Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.152115 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddt6r" event={"ID":"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205","Type":"ContainerStarted","Data":"a58a2705acbdd5a830714db4cac790aff0406c7aa13d81f1e552a66e43fbf55d"} Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.161858 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n2k7x" podStartSLOduration=2.456219171 podStartE2EDuration="29.161838257s" podCreationTimestamp="2025-10-08 06:36:45 +0000 UTC" firstStartedPulling="2025-10-08 06:36:46.848128363 +0000 UTC m=+149.977820964" lastFinishedPulling="2025-10-08 06:37:13.553747439 +0000 UTC m=+176.683440050" observedRunningTime="2025-10-08 06:37:14.157134803 +0000 UTC m=+177.286827424" watchObservedRunningTime="2025-10-08 06:37:14.161838257 +0000 UTC m=+177.291530858" Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.176770 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xbfbp" podStartSLOduration=157.176755753 
podStartE2EDuration="2m37.176755753s" podCreationTimestamp="2025-10-08 06:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:37:14.175493167 +0000 UTC m=+177.305185788" watchObservedRunningTime="2025-10-08 06:37:14.176755753 +0000 UTC m=+177.306448344" Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.217386 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bpkln" podStartSLOduration=2.5107870500000002 podStartE2EDuration="28.217369771s" podCreationTimestamp="2025-10-08 06:36:46 +0000 UTC" firstStartedPulling="2025-10-08 06:36:47.868784895 +0000 UTC m=+150.998477496" lastFinishedPulling="2025-10-08 06:37:13.575367606 +0000 UTC m=+176.705060217" observedRunningTime="2025-10-08 06:37:14.19981458 +0000 UTC m=+177.329507201" watchObservedRunningTime="2025-10-08 06:37:14.217369771 +0000 UTC m=+177.347062372" Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.240057 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lsh6r" podStartSLOduration=2.4457346810000002 podStartE2EDuration="28.240036487s" podCreationTimestamp="2025-10-08 06:36:46 +0000 UTC" firstStartedPulling="2025-10-08 06:36:47.870358706 +0000 UTC m=+151.000051307" lastFinishedPulling="2025-10-08 06:37:13.664660502 +0000 UTC m=+176.794353113" observedRunningTime="2025-10-08 06:37:14.21876927 +0000 UTC m=+177.348461871" watchObservedRunningTime="2025-10-08 06:37:14.240036487 +0000 UTC m=+177.369729088" Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.291499 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j7c47" podStartSLOduration=2.401777237 podStartE2EDuration="26.291482554s" podCreationTimestamp="2025-10-08 06:36:48 +0000 UTC" firstStartedPulling="2025-10-08 06:36:49.915760109 +0000 
UTC m=+153.045452710" lastFinishedPulling="2025-10-08 06:37:13.805465386 +0000 UTC m=+176.935158027" observedRunningTime="2025-10-08 06:37:14.268085997 +0000 UTC m=+177.397778598" watchObservedRunningTime="2025-10-08 06:37:14.291482554 +0000 UTC m=+177.421175145" Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.314725 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vx29t" podStartSLOduration=3.414132319 podStartE2EDuration="30.314709886s" podCreationTimestamp="2025-10-08 06:36:44 +0000 UTC" firstStartedPulling="2025-10-08 06:36:46.852836195 +0000 UTC m=+149.982528796" lastFinishedPulling="2025-10-08 06:37:13.753413732 +0000 UTC m=+176.883106363" observedRunningTime="2025-10-08 06:37:14.292856133 +0000 UTC m=+177.422548734" watchObservedRunningTime="2025-10-08 06:37:14.314709886 +0000 UTC m=+177.444402487" Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.316893 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4vjhq" podStartSLOduration=2.4342860330000002 podStartE2EDuration="27.316886278s" podCreationTimestamp="2025-10-08 06:36:47 +0000 UTC" firstStartedPulling="2025-10-08 06:36:48.906112124 +0000 UTC m=+152.035804725" lastFinishedPulling="2025-10-08 06:37:13.788712359 +0000 UTC m=+176.918404970" observedRunningTime="2025-10-08 06:37:14.314634894 +0000 UTC m=+177.444327495" watchObservedRunningTime="2025-10-08 06:37:14.316886278 +0000 UTC m=+177.446578869" Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.344864 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ddt6r" podStartSLOduration=3.585502513 podStartE2EDuration="30.344849035s" podCreationTimestamp="2025-10-08 06:36:44 +0000 UTC" firstStartedPulling="2025-10-08 06:36:46.850687442 +0000 UTC m=+149.980380043" lastFinishedPulling="2025-10-08 06:37:13.610033924 +0000 UTC 
m=+176.739726565" observedRunningTime="2025-10-08 06:37:14.341770027 +0000 UTC m=+177.471462628" watchObservedRunningTime="2025-10-08 06:37:14.344849035 +0000 UTC m=+177.474541636" Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.362395 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tkq5d" podStartSLOduration=3.403884915 podStartE2EDuration="30.362374865s" podCreationTimestamp="2025-10-08 06:36:44 +0000 UTC" firstStartedPulling="2025-10-08 06:36:46.848692445 +0000 UTC m=+149.978385046" lastFinishedPulling="2025-10-08 06:37:13.807182395 +0000 UTC m=+176.936874996" observedRunningTime="2025-10-08 06:37:14.360777769 +0000 UTC m=+177.490470360" watchObservedRunningTime="2025-10-08 06:37:14.362374865 +0000 UTC m=+177.492067466" Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.875180 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:37:14 crc kubenswrapper[4958]: I1008 06:37:14.875232 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:37:15 crc kubenswrapper[4958]: I1008 06:37:15.321351 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:37:15 crc kubenswrapper[4958]: I1008 06:37:15.321420 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:37:15 crc kubenswrapper[4958]: I1008 06:37:15.353068 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:37:15 crc kubenswrapper[4958]: I1008 06:37:15.353120 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:37:15 crc kubenswrapper[4958]: I1008 
06:37:15.485430 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:37:15 crc kubenswrapper[4958]: I1008 06:37:15.485738 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:37:15 crc kubenswrapper[4958]: I1008 06:37:15.487207 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 06:37:15 crc kubenswrapper[4958]: I1008 06:37:15.500722 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1995579b-a766-4328-93f2-ad982ec7d4fe-kubelet-dir\") pod \"1995579b-a766-4328-93f2-ad982ec7d4fe\" (UID: \"1995579b-a766-4328-93f2-ad982ec7d4fe\") " Oct 08 06:37:15 crc kubenswrapper[4958]: I1008 06:37:15.500779 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1995579b-a766-4328-93f2-ad982ec7d4fe-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1995579b-a766-4328-93f2-ad982ec7d4fe" (UID: "1995579b-a766-4328-93f2-ad982ec7d4fe"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:37:15 crc kubenswrapper[4958]: I1008 06:37:15.500820 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1995579b-a766-4328-93f2-ad982ec7d4fe-kube-api-access\") pod \"1995579b-a766-4328-93f2-ad982ec7d4fe\" (UID: \"1995579b-a766-4328-93f2-ad982ec7d4fe\") " Oct 08 06:37:15 crc kubenswrapper[4958]: I1008 06:37:15.506310 4958 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1995579b-a766-4328-93f2-ad982ec7d4fe-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 08 06:37:15 crc kubenswrapper[4958]: I1008 06:37:15.515719 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1995579b-a766-4328-93f2-ad982ec7d4fe-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1995579b-a766-4328-93f2-ad982ec7d4fe" (UID: "1995579b-a766-4328-93f2-ad982ec7d4fe"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:37:15 crc kubenswrapper[4958]: I1008 06:37:15.607200 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1995579b-a766-4328-93f2-ad982ec7d4fe-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 08 06:37:16 crc kubenswrapper[4958]: I1008 06:37:16.073620 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ddt6r" podUID="19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" containerName="registry-server" probeResult="failure" output=< Oct 08 06:37:16 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 06:37:16 crc kubenswrapper[4958]: > Oct 08 06:37:16 crc kubenswrapper[4958]: I1008 06:37:16.163524 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1995579b-a766-4328-93f2-ad982ec7d4fe","Type":"ContainerDied","Data":"f869202ca97a21941996442f386b73a7571d2901c870735d496740c82bac2500"} Oct 08 06:37:16 crc kubenswrapper[4958]: I1008 06:37:16.163612 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f869202ca97a21941996442f386b73a7571d2901c870735d496740c82bac2500" Oct 08 06:37:16 crc kubenswrapper[4958]: I1008 06:37:16.164606 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 08 06:37:16 crc kubenswrapper[4958]: I1008 06:37:16.375120 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tkq5d" podUID="e35d1ee6-16a1-4017-991c-17ebda75d0a0" containerName="registry-server" probeResult="failure" output=< Oct 08 06:37:16 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 06:37:16 crc kubenswrapper[4958]: > Oct 08 06:37:16 crc kubenswrapper[4958]: I1008 06:37:16.438846 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vx29t" podUID="7f1064e4-9481-4a97-a05d-f11d13be3e77" containerName="registry-server" probeResult="failure" output=< Oct 08 06:37:16 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 06:37:16 crc kubenswrapper[4958]: > Oct 08 06:37:16 crc kubenswrapper[4958]: I1008 06:37:16.523479 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n2k7x" podUID="246ee01a-d75a-4d2d-8eaf-b63787dffff7" containerName="registry-server" probeResult="failure" output=< Oct 08 06:37:16 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 06:37:16 crc kubenswrapper[4958]: > Oct 08 06:37:16 crc kubenswrapper[4958]: I1008 06:37:16.872899 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:37:16 crc kubenswrapper[4958]: I1008 06:37:16.873405 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:37:16 crc kubenswrapper[4958]: I1008 06:37:16.935318 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:37:17 crc kubenswrapper[4958]: I1008 06:37:17.243059 4958 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:37:17 crc kubenswrapper[4958]: I1008 06:37:17.243543 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:37:17 crc kubenswrapper[4958]: I1008 06:37:17.308800 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:37:18 crc kubenswrapper[4958]: I1008 06:37:18.247772 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:37:18 crc kubenswrapper[4958]: I1008 06:37:18.253653 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:37:18 crc kubenswrapper[4958]: I1008 06:37:18.253767 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:37:18 crc kubenswrapper[4958]: I1008 06:37:18.253797 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:37:18 crc kubenswrapper[4958]: I1008 06:37:18.659795 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:37:18 crc kubenswrapper[4958]: I1008 06:37:18.660736 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:37:19 crc kubenswrapper[4958]: I1008 06:37:19.327919 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4vjhq" podUID="c4403715-42d1-481f-b403-5f04b1772cd5" containerName="registry-server" probeResult="failure" output=< Oct 08 06:37:19 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 06:37:19 crc 
kubenswrapper[4958]: > Oct 08 06:37:19 crc kubenswrapper[4958]: I1008 06:37:19.529230 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lsh6r"] Oct 08 06:37:19 crc kubenswrapper[4958]: I1008 06:37:19.727011 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j7c47" podUID="1dbd2bb0-6520-4322-b92e-22dae7a66022" containerName="registry-server" probeResult="failure" output=< Oct 08 06:37:19 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 06:37:19 crc kubenswrapper[4958]: > Oct 08 06:37:20 crc kubenswrapper[4958]: I1008 06:37:20.114646 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmks4" Oct 08 06:37:21 crc kubenswrapper[4958]: I1008 06:37:21.208367 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lsh6r" podUID="cc2f2f8d-5af1-4fde-b658-851131f8cd40" containerName="registry-server" containerID="cri-o://386d616abcddd22ef323c9ae842e7731b4ca7b381f7409c2be809626f5308247" gracePeriod=2 Oct 08 06:37:21 crc kubenswrapper[4958]: I1008 06:37:21.773054 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:37:21 crc kubenswrapper[4958]: I1008 06:37:21.912785 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc2f2f8d-5af1-4fde-b658-851131f8cd40-catalog-content\") pod \"cc2f2f8d-5af1-4fde-b658-851131f8cd40\" (UID: \"cc2f2f8d-5af1-4fde-b658-851131f8cd40\") " Oct 08 06:37:21 crc kubenswrapper[4958]: I1008 06:37:21.912865 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc2f2f8d-5af1-4fde-b658-851131f8cd40-utilities\") pod \"cc2f2f8d-5af1-4fde-b658-851131f8cd40\" (UID: \"cc2f2f8d-5af1-4fde-b658-851131f8cd40\") " Oct 08 06:37:21 crc kubenswrapper[4958]: I1008 06:37:21.912898 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xj8t\" (UniqueName: \"kubernetes.io/projected/cc2f2f8d-5af1-4fde-b658-851131f8cd40-kube-api-access-7xj8t\") pod \"cc2f2f8d-5af1-4fde-b658-851131f8cd40\" (UID: \"cc2f2f8d-5af1-4fde-b658-851131f8cd40\") " Oct 08 06:37:21 crc kubenswrapper[4958]: I1008 06:37:21.914570 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc2f2f8d-5af1-4fde-b658-851131f8cd40-utilities" (OuterVolumeSpecName: "utilities") pod "cc2f2f8d-5af1-4fde-b658-851131f8cd40" (UID: "cc2f2f8d-5af1-4fde-b658-851131f8cd40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:37:21 crc kubenswrapper[4958]: I1008 06:37:21.921923 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc2f2f8d-5af1-4fde-b658-851131f8cd40-kube-api-access-7xj8t" (OuterVolumeSpecName: "kube-api-access-7xj8t") pod "cc2f2f8d-5af1-4fde-b658-851131f8cd40" (UID: "cc2f2f8d-5af1-4fde-b658-851131f8cd40"). InnerVolumeSpecName "kube-api-access-7xj8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:37:21 crc kubenswrapper[4958]: I1008 06:37:21.931348 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc2f2f8d-5af1-4fde-b658-851131f8cd40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc2f2f8d-5af1-4fde-b658-851131f8cd40" (UID: "cc2f2f8d-5af1-4fde-b658-851131f8cd40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.013990 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc2f2f8d-5af1-4fde-b658-851131f8cd40-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.014035 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc2f2f8d-5af1-4fde-b658-851131f8cd40-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.014049 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xj8t\" (UniqueName: \"kubernetes.io/projected/cc2f2f8d-5af1-4fde-b658-851131f8cd40-kube-api-access-7xj8t\") on node \"crc\" DevicePath \"\"" Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.214968 4958 generic.go:334] "Generic (PLEG): container finished" podID="cc2f2f8d-5af1-4fde-b658-851131f8cd40" containerID="386d616abcddd22ef323c9ae842e7731b4ca7b381f7409c2be809626f5308247" exitCode=0 Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.215023 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lsh6r" Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.215015 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsh6r" event={"ID":"cc2f2f8d-5af1-4fde-b658-851131f8cd40","Type":"ContainerDied","Data":"386d616abcddd22ef323c9ae842e7731b4ca7b381f7409c2be809626f5308247"} Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.215554 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsh6r" event={"ID":"cc2f2f8d-5af1-4fde-b658-851131f8cd40","Type":"ContainerDied","Data":"2c02b5db684971466fd2fdca02338a9ece6787181efeec55bc72d2883fb297bc"} Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.215584 4958 scope.go:117] "RemoveContainer" containerID="386d616abcddd22ef323c9ae842e7731b4ca7b381f7409c2be809626f5308247" Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.234536 4958 scope.go:117] "RemoveContainer" containerID="595dab6a43b3684859d85897120d68fa3697e18c165b2882a97639fb056beb5d" Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.255137 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lsh6r"] Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.256358 4958 scope.go:117] "RemoveContainer" containerID="e975cd6df3c41242da554e368ad17894016b5302b5d093cdc0074177f8e8bbc3" Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.258742 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lsh6r"] Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.277513 4958 scope.go:117] "RemoveContainer" containerID="386d616abcddd22ef323c9ae842e7731b4ca7b381f7409c2be809626f5308247" Oct 08 06:37:22 crc kubenswrapper[4958]: E1008 06:37:22.277921 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"386d616abcddd22ef323c9ae842e7731b4ca7b381f7409c2be809626f5308247\": container with ID starting with 386d616abcddd22ef323c9ae842e7731b4ca7b381f7409c2be809626f5308247 not found: ID does not exist" containerID="386d616abcddd22ef323c9ae842e7731b4ca7b381f7409c2be809626f5308247" Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.277985 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"386d616abcddd22ef323c9ae842e7731b4ca7b381f7409c2be809626f5308247"} err="failed to get container status \"386d616abcddd22ef323c9ae842e7731b4ca7b381f7409c2be809626f5308247\": rpc error: code = NotFound desc = could not find container \"386d616abcddd22ef323c9ae842e7731b4ca7b381f7409c2be809626f5308247\": container with ID starting with 386d616abcddd22ef323c9ae842e7731b4ca7b381f7409c2be809626f5308247 not found: ID does not exist" Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.278032 4958 scope.go:117] "RemoveContainer" containerID="595dab6a43b3684859d85897120d68fa3697e18c165b2882a97639fb056beb5d" Oct 08 06:37:22 crc kubenswrapper[4958]: E1008 06:37:22.278405 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"595dab6a43b3684859d85897120d68fa3697e18c165b2882a97639fb056beb5d\": container with ID starting with 595dab6a43b3684859d85897120d68fa3697e18c165b2882a97639fb056beb5d not found: ID does not exist" containerID="595dab6a43b3684859d85897120d68fa3697e18c165b2882a97639fb056beb5d" Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.278458 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595dab6a43b3684859d85897120d68fa3697e18c165b2882a97639fb056beb5d"} err="failed to get container status \"595dab6a43b3684859d85897120d68fa3697e18c165b2882a97639fb056beb5d\": rpc error: code = NotFound desc = could not find container \"595dab6a43b3684859d85897120d68fa3697e18c165b2882a97639fb056beb5d\": container with ID 
starting with 595dab6a43b3684859d85897120d68fa3697e18c165b2882a97639fb056beb5d not found: ID does not exist" Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.278489 4958 scope.go:117] "RemoveContainer" containerID="e975cd6df3c41242da554e368ad17894016b5302b5d093cdc0074177f8e8bbc3" Oct 08 06:37:22 crc kubenswrapper[4958]: E1008 06:37:22.279035 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e975cd6df3c41242da554e368ad17894016b5302b5d093cdc0074177f8e8bbc3\": container with ID starting with e975cd6df3c41242da554e368ad17894016b5302b5d093cdc0074177f8e8bbc3 not found: ID does not exist" containerID="e975cd6df3c41242da554e368ad17894016b5302b5d093cdc0074177f8e8bbc3" Oct 08 06:37:22 crc kubenswrapper[4958]: I1008 06:37:22.279089 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e975cd6df3c41242da554e368ad17894016b5302b5d093cdc0074177f8e8bbc3"} err="failed to get container status \"e975cd6df3c41242da554e368ad17894016b5302b5d093cdc0074177f8e8bbc3\": rpc error: code = NotFound desc = could not find container \"e975cd6df3c41242da554e368ad17894016b5302b5d093cdc0074177f8e8bbc3\": container with ID starting with e975cd6df3c41242da554e368ad17894016b5302b5d093cdc0074177f8e8bbc3 not found: ID does not exist" Oct 08 06:37:23 crc kubenswrapper[4958]: I1008 06:37:23.587520 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc2f2f8d-5af1-4fde-b658-851131f8cd40" path="/var/lib/kubelet/pods/cc2f2f8d-5af1-4fde-b658-851131f8cd40/volumes" Oct 08 06:37:24 crc kubenswrapper[4958]: I1008 06:37:24.942736 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:37:25 crc kubenswrapper[4958]: I1008 06:37:25.017834 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:37:25 crc 
kubenswrapper[4958]: I1008 06:37:25.392819 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:37:25 crc kubenswrapper[4958]: I1008 06:37:25.425138 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:37:25 crc kubenswrapper[4958]: I1008 06:37:25.463670 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:37:25 crc kubenswrapper[4958]: I1008 06:37:25.494109 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:37:25 crc kubenswrapper[4958]: I1008 06:37:25.551198 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:37:25 crc kubenswrapper[4958]: I1008 06:37:25.627378 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:37:25 crc kubenswrapper[4958]: I1008 06:37:25.681251 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 08 06:37:25 crc kubenswrapper[4958]: I1008 06:37:25.924869 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tkq5d"] Oct 08 06:37:27 crc kubenswrapper[4958]: I1008 06:37:27.252383 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tkq5d" podUID="e35d1ee6-16a1-4017-991c-17ebda75d0a0" containerName="registry-server" containerID="cri-o://03bf6e7116d4279690832b27da1d77a3600c00a8b607cf36831d0381027ae1db" gracePeriod=2 Oct 08 06:37:27 crc kubenswrapper[4958]: I1008 06:37:27.730135 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-n2k7x"] Oct 08 06:37:27 crc kubenswrapper[4958]: I1008 06:37:27.730456 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n2k7x" podUID="246ee01a-d75a-4d2d-8eaf-b63787dffff7" containerName="registry-server" containerID="cri-o://36916b5d5b11498c26307f139ecb8c6dc6fef1c5e135c5b39eedf2407b9b97a9" gracePeriod=2 Oct 08 06:37:28 crc kubenswrapper[4958]: I1008 06:37:28.273542 4958 generic.go:334] "Generic (PLEG): container finished" podID="246ee01a-d75a-4d2d-8eaf-b63787dffff7" containerID="36916b5d5b11498c26307f139ecb8c6dc6fef1c5e135c5b39eedf2407b9b97a9" exitCode=0 Oct 08 06:37:28 crc kubenswrapper[4958]: I1008 06:37:28.273767 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2k7x" event={"ID":"246ee01a-d75a-4d2d-8eaf-b63787dffff7","Type":"ContainerDied","Data":"36916b5d5b11498c26307f139ecb8c6dc6fef1c5e135c5b39eedf2407b9b97a9"} Oct 08 06:37:28 crc kubenswrapper[4958]: I1008 06:37:28.281207 4958 generic.go:334] "Generic (PLEG): container finished" podID="e35d1ee6-16a1-4017-991c-17ebda75d0a0" containerID="03bf6e7116d4279690832b27da1d77a3600c00a8b607cf36831d0381027ae1db" exitCode=0 Oct 08 06:37:28 crc kubenswrapper[4958]: I1008 06:37:28.281272 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkq5d" event={"ID":"e35d1ee6-16a1-4017-991c-17ebda75d0a0","Type":"ContainerDied","Data":"03bf6e7116d4279690832b27da1d77a3600c00a8b607cf36831d0381027ae1db"} Oct 08 06:37:28 crc kubenswrapper[4958]: I1008 06:37:28.313125 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:37:28 crc kubenswrapper[4958]: I1008 06:37:28.366604 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:37:28 crc kubenswrapper[4958]: 
I1008 06:37:28.704290 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:37:28 crc kubenswrapper[4958]: I1008 06:37:28.757714 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:37:28 crc kubenswrapper[4958]: I1008 06:37:28.912988 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:37:28 crc kubenswrapper[4958]: I1008 06:37:28.917471 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.016536 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35d1ee6-16a1-4017-991c-17ebda75d0a0-utilities\") pod \"e35d1ee6-16a1-4017-991c-17ebda75d0a0\" (UID: \"e35d1ee6-16a1-4017-991c-17ebda75d0a0\") " Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.016630 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/246ee01a-d75a-4d2d-8eaf-b63787dffff7-utilities\") pod \"246ee01a-d75a-4d2d-8eaf-b63787dffff7\" (UID: \"246ee01a-d75a-4d2d-8eaf-b63787dffff7\") " Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.016653 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35d1ee6-16a1-4017-991c-17ebda75d0a0-catalog-content\") pod \"e35d1ee6-16a1-4017-991c-17ebda75d0a0\" (UID: \"e35d1ee6-16a1-4017-991c-17ebda75d0a0\") " Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.016702 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwlz\" (UniqueName: 
\"kubernetes.io/projected/e35d1ee6-16a1-4017-991c-17ebda75d0a0-kube-api-access-jkwlz\") pod \"e35d1ee6-16a1-4017-991c-17ebda75d0a0\" (UID: \"e35d1ee6-16a1-4017-991c-17ebda75d0a0\") " Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.016725 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/246ee01a-d75a-4d2d-8eaf-b63787dffff7-catalog-content\") pod \"246ee01a-d75a-4d2d-8eaf-b63787dffff7\" (UID: \"246ee01a-d75a-4d2d-8eaf-b63787dffff7\") " Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.016759 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cffvz\" (UniqueName: \"kubernetes.io/projected/246ee01a-d75a-4d2d-8eaf-b63787dffff7-kube-api-access-cffvz\") pod \"246ee01a-d75a-4d2d-8eaf-b63787dffff7\" (UID: \"246ee01a-d75a-4d2d-8eaf-b63787dffff7\") " Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.018470 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/246ee01a-d75a-4d2d-8eaf-b63787dffff7-utilities" (OuterVolumeSpecName: "utilities") pod "246ee01a-d75a-4d2d-8eaf-b63787dffff7" (UID: "246ee01a-d75a-4d2d-8eaf-b63787dffff7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.018561 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e35d1ee6-16a1-4017-991c-17ebda75d0a0-utilities" (OuterVolumeSpecName: "utilities") pod "e35d1ee6-16a1-4017-991c-17ebda75d0a0" (UID: "e35d1ee6-16a1-4017-991c-17ebda75d0a0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.026209 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e35d1ee6-16a1-4017-991c-17ebda75d0a0-kube-api-access-jkwlz" (OuterVolumeSpecName: "kube-api-access-jkwlz") pod "e35d1ee6-16a1-4017-991c-17ebda75d0a0" (UID: "e35d1ee6-16a1-4017-991c-17ebda75d0a0"). InnerVolumeSpecName "kube-api-access-jkwlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.030079 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/246ee01a-d75a-4d2d-8eaf-b63787dffff7-kube-api-access-cffvz" (OuterVolumeSpecName: "kube-api-access-cffvz") pod "246ee01a-d75a-4d2d-8eaf-b63787dffff7" (UID: "246ee01a-d75a-4d2d-8eaf-b63787dffff7"). InnerVolumeSpecName "kube-api-access-cffvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.063449 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e35d1ee6-16a1-4017-991c-17ebda75d0a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e35d1ee6-16a1-4017-991c-17ebda75d0a0" (UID: "e35d1ee6-16a1-4017-991c-17ebda75d0a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.071395 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/246ee01a-d75a-4d2d-8eaf-b63787dffff7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "246ee01a-d75a-4d2d-8eaf-b63787dffff7" (UID: "246ee01a-d75a-4d2d-8eaf-b63787dffff7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.122382 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/246ee01a-d75a-4d2d-8eaf-b63787dffff7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.122416 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cffvz\" (UniqueName: \"kubernetes.io/projected/246ee01a-d75a-4d2d-8eaf-b63787dffff7-kube-api-access-cffvz\") on node \"crc\" DevicePath \"\"" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.122428 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e35d1ee6-16a1-4017-991c-17ebda75d0a0-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.122437 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/246ee01a-d75a-4d2d-8eaf-b63787dffff7-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.122446 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e35d1ee6-16a1-4017-991c-17ebda75d0a0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.122456 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwlz\" (UniqueName: \"kubernetes.io/projected/e35d1ee6-16a1-4017-991c-17ebda75d0a0-kube-api-access-jkwlz\") on node \"crc\" DevicePath \"\"" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.287842 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2k7x" event={"ID":"246ee01a-d75a-4d2d-8eaf-b63787dffff7","Type":"ContainerDied","Data":"a22a87fd7d89702ee4cd981ab9415af14a5b5404ff5b981ca1d3b6d34d0ffc77"} 
Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.287897 4958 scope.go:117] "RemoveContainer" containerID="36916b5d5b11498c26307f139ecb8c6dc6fef1c5e135c5b39eedf2407b9b97a9" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.288051 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n2k7x" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.292172 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tkq5d" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.292669 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tkq5d" event={"ID":"e35d1ee6-16a1-4017-991c-17ebda75d0a0","Type":"ContainerDied","Data":"ee09a496636751db19c64d684d1d4388365a84a3b507032a14246966b4baa5e4"} Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.312519 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n2k7x"] Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.315203 4958 scope.go:117] "RemoveContainer" containerID="38526ffa0b82c72ae23247c86025a1ec1d743568c1523a937325aa3acc655a01" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.328243 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n2k7x"] Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.361040 4958 scope.go:117] "RemoveContainer" containerID="334b03495f3c4d36b6a27044922314117c3f3598dbd41b2da0e2d78265229f93" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.375359 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tkq5d"] Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.377508 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tkq5d"] Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 
06:37:29.382780 4958 scope.go:117] "RemoveContainer" containerID="03bf6e7116d4279690832b27da1d77a3600c00a8b607cf36831d0381027ae1db" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.395641 4958 scope.go:117] "RemoveContainer" containerID="2687dec5a9761f6f550f7470b7db66c8f2d5cc5c64b3160c9a4d9e57982be822" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.408166 4958 scope.go:117] "RemoveContainer" containerID="1a0a490057b9a2adf0c9d797326233eae36c4fe1764135be0277322987d43b43" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.583916 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="246ee01a-d75a-4d2d-8eaf-b63787dffff7" path="/var/lib/kubelet/pods/246ee01a-d75a-4d2d-8eaf-b63787dffff7/volumes" Oct 08 06:37:29 crc kubenswrapper[4958]: I1008 06:37:29.584488 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e35d1ee6-16a1-4017-991c-17ebda75d0a0" path="/var/lib/kubelet/pods/e35d1ee6-16a1-4017-991c-17ebda75d0a0/volumes" Oct 08 06:37:32 crc kubenswrapper[4958]: I1008 06:37:32.126260 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j7c47"] Oct 08 06:37:32 crc kubenswrapper[4958]: I1008 06:37:32.128594 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j7c47" podUID="1dbd2bb0-6520-4322-b92e-22dae7a66022" containerName="registry-server" containerID="cri-o://d0e1f2b511b74718bbf0fc8a75a6f7ebe475d7f82a2a992e84483714a71384b1" gracePeriod=2 Oct 08 06:37:32 crc kubenswrapper[4958]: I1008 06:37:32.320599 4958 generic.go:334] "Generic (PLEG): container finished" podID="1dbd2bb0-6520-4322-b92e-22dae7a66022" containerID="d0e1f2b511b74718bbf0fc8a75a6f7ebe475d7f82a2a992e84483714a71384b1" exitCode=0 Oct 08 06:37:32 crc kubenswrapper[4958]: I1008 06:37:32.320682 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j7c47" 
event={"ID":"1dbd2bb0-6520-4322-b92e-22dae7a66022","Type":"ContainerDied","Data":"d0e1f2b511b74718bbf0fc8a75a6f7ebe475d7f82a2a992e84483714a71384b1"} Oct 08 06:37:32 crc kubenswrapper[4958]: I1008 06:37:32.567694 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:37:32 crc kubenswrapper[4958]: I1008 06:37:32.668641 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dbd2bb0-6520-4322-b92e-22dae7a66022-catalog-content\") pod \"1dbd2bb0-6520-4322-b92e-22dae7a66022\" (UID: \"1dbd2bb0-6520-4322-b92e-22dae7a66022\") " Oct 08 06:37:32 crc kubenswrapper[4958]: I1008 06:37:32.668771 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvhzx\" (UniqueName: \"kubernetes.io/projected/1dbd2bb0-6520-4322-b92e-22dae7a66022-kube-api-access-lvhzx\") pod \"1dbd2bb0-6520-4322-b92e-22dae7a66022\" (UID: \"1dbd2bb0-6520-4322-b92e-22dae7a66022\") " Oct 08 06:37:32 crc kubenswrapper[4958]: I1008 06:37:32.669624 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dbd2bb0-6520-4322-b92e-22dae7a66022-utilities\") pod \"1dbd2bb0-6520-4322-b92e-22dae7a66022\" (UID: \"1dbd2bb0-6520-4322-b92e-22dae7a66022\") " Oct 08 06:37:32 crc kubenswrapper[4958]: I1008 06:37:32.670491 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dbd2bb0-6520-4322-b92e-22dae7a66022-utilities" (OuterVolumeSpecName: "utilities") pod "1dbd2bb0-6520-4322-b92e-22dae7a66022" (UID: "1dbd2bb0-6520-4322-b92e-22dae7a66022"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:37:32 crc kubenswrapper[4958]: I1008 06:37:32.677544 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dbd2bb0-6520-4322-b92e-22dae7a66022-kube-api-access-lvhzx" (OuterVolumeSpecName: "kube-api-access-lvhzx") pod "1dbd2bb0-6520-4322-b92e-22dae7a66022" (UID: "1dbd2bb0-6520-4322-b92e-22dae7a66022"). InnerVolumeSpecName "kube-api-access-lvhzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:37:32 crc kubenswrapper[4958]: I1008 06:37:32.770921 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dbd2bb0-6520-4322-b92e-22dae7a66022-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:37:32 crc kubenswrapper[4958]: I1008 06:37:32.770989 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvhzx\" (UniqueName: \"kubernetes.io/projected/1dbd2bb0-6520-4322-b92e-22dae7a66022-kube-api-access-lvhzx\") on node \"crc\" DevicePath \"\"" Oct 08 06:37:32 crc kubenswrapper[4958]: I1008 06:37:32.775746 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dbd2bb0-6520-4322-b92e-22dae7a66022-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1dbd2bb0-6520-4322-b92e-22dae7a66022" (UID: "1dbd2bb0-6520-4322-b92e-22dae7a66022"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:37:32 crc kubenswrapper[4958]: I1008 06:37:32.872436 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dbd2bb0-6520-4322-b92e-22dae7a66022-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:37:33 crc kubenswrapper[4958]: I1008 06:37:33.328531 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j7c47" Oct 08 06:37:33 crc kubenswrapper[4958]: I1008 06:37:33.328424 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j7c47" event={"ID":"1dbd2bb0-6520-4322-b92e-22dae7a66022","Type":"ContainerDied","Data":"8dd4e4c3d5890b6af68119877809313e69deb56bfdb29908969c50541634a1bf"} Oct 08 06:37:33 crc kubenswrapper[4958]: I1008 06:37:33.335655 4958 scope.go:117] "RemoveContainer" containerID="d0e1f2b511b74718bbf0fc8a75a6f7ebe475d7f82a2a992e84483714a71384b1" Oct 08 06:37:33 crc kubenswrapper[4958]: I1008 06:37:33.369037 4958 scope.go:117] "RemoveContainer" containerID="a151dc86ab74d4569aa9f704a062b9d75820861744c6da31cdaa21fd5a7e7f52" Oct 08 06:37:33 crc kubenswrapper[4958]: I1008 06:37:33.383371 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j7c47"] Oct 08 06:37:33 crc kubenswrapper[4958]: I1008 06:37:33.386756 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j7c47"] Oct 08 06:37:33 crc kubenswrapper[4958]: I1008 06:37:33.394866 4958 scope.go:117] "RemoveContainer" containerID="bc6c05b209232b280635049dc2815b73d95806f065187d72df67ba446b3dd8a8" Oct 08 06:37:33 crc kubenswrapper[4958]: I1008 06:37:33.588207 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dbd2bb0-6520-4322-b92e-22dae7a66022" path="/var/lib/kubelet/pods/1dbd2bb0-6520-4322-b92e-22dae7a66022/volumes" Oct 08 06:37:36 crc kubenswrapper[4958]: I1008 06:37:36.845508 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:37:36 crc kubenswrapper[4958]: I1008 06:37:36.845779 4958 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:38:06 crc kubenswrapper[4958]: I1008 06:38:06.845454 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:38:06 crc kubenswrapper[4958]: I1008 06:38:06.846242 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:38:06 crc kubenswrapper[4958]: I1008 06:38:06.846319 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:38:06 crc kubenswrapper[4958]: I1008 06:38:06.847182 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 06:38:06 crc kubenswrapper[4958]: I1008 06:38:06.847291 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" 
containerID="cri-o://54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0" gracePeriod=600 Oct 08 06:38:07 crc kubenswrapper[4958]: I1008 06:38:07.544140 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0" exitCode=0 Oct 08 06:38:07 crc kubenswrapper[4958]: I1008 06:38:07.544647 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0"} Oct 08 06:38:07 crc kubenswrapper[4958]: I1008 06:38:07.544726 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"d9a8d1fa51eedb96724c2de1c07585b6ffbd9798530ee0eadb4bd6aeefdec0f8"} Oct 08 06:38:18 crc kubenswrapper[4958]: I1008 06:38:18.014416 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mhg9b"] Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.052258 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" podUID="ed1e22c8-bc14-4e16-b349-442d188ac881" containerName="oauth-openshift" containerID="cri-o://44b74263f72f5bd2059d37392f5c4f9932c0885bdd45ad7d9ede6ee970c22218" gracePeriod=15 Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.577479 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.618940 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-58444664d6-pp2nb"] Oct 08 06:38:43 crc kubenswrapper[4958]: E1008 06:38:43.619258 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dbd2bb0-6520-4322-b92e-22dae7a66022" containerName="extract-utilities" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619280 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dbd2bb0-6520-4322-b92e-22dae7a66022" containerName="extract-utilities" Oct 08 06:38:43 crc kubenswrapper[4958]: E1008 06:38:43.619298 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35d1ee6-16a1-4017-991c-17ebda75d0a0" containerName="extract-utilities" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619312 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35d1ee6-16a1-4017-991c-17ebda75d0a0" containerName="extract-utilities" Oct 08 06:38:43 crc kubenswrapper[4958]: E1008 06:38:43.619331 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35d1ee6-16a1-4017-991c-17ebda75d0a0" containerName="extract-content" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619343 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35d1ee6-16a1-4017-991c-17ebda75d0a0" containerName="extract-content" Oct 08 06:38:43 crc kubenswrapper[4958]: E1008 06:38:43.619359 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="246ee01a-d75a-4d2d-8eaf-b63787dffff7" containerName="extract-content" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619370 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="246ee01a-d75a-4d2d-8eaf-b63787dffff7" containerName="extract-content" Oct 08 06:38:43 crc kubenswrapper[4958]: E1008 06:38:43.619393 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e35d1ee6-16a1-4017-991c-17ebda75d0a0" containerName="registry-server" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619403 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35d1ee6-16a1-4017-991c-17ebda75d0a0" containerName="registry-server" Oct 08 06:38:43 crc kubenswrapper[4958]: E1008 06:38:43.619417 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc2f2f8d-5af1-4fde-b658-851131f8cd40" containerName="registry-server" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619427 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc2f2f8d-5af1-4fde-b658-851131f8cd40" containerName="registry-server" Oct 08 06:38:43 crc kubenswrapper[4958]: E1008 06:38:43.619446 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc2f2f8d-5af1-4fde-b658-851131f8cd40" containerName="extract-utilities" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619456 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc2f2f8d-5af1-4fde-b658-851131f8cd40" containerName="extract-utilities" Oct 08 06:38:43 crc kubenswrapper[4958]: E1008 06:38:43.619469 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="246ee01a-d75a-4d2d-8eaf-b63787dffff7" containerName="registry-server" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619482 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="246ee01a-d75a-4d2d-8eaf-b63787dffff7" containerName="registry-server" Oct 08 06:38:43 crc kubenswrapper[4958]: E1008 06:38:43.619496 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed1e22c8-bc14-4e16-b349-442d188ac881" containerName="oauth-openshift" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619507 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed1e22c8-bc14-4e16-b349-442d188ac881" containerName="oauth-openshift" Oct 08 06:38:43 crc kubenswrapper[4958]: E1008 06:38:43.619519 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cc2f2f8d-5af1-4fde-b658-851131f8cd40" containerName="extract-content" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619528 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc2f2f8d-5af1-4fde-b658-851131f8cd40" containerName="extract-content" Oct 08 06:38:43 crc kubenswrapper[4958]: E1008 06:38:43.619543 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dbd2bb0-6520-4322-b92e-22dae7a66022" containerName="extract-content" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619553 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dbd2bb0-6520-4322-b92e-22dae7a66022" containerName="extract-content" Oct 08 06:38:43 crc kubenswrapper[4958]: E1008 06:38:43.619568 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dbd2bb0-6520-4322-b92e-22dae7a66022" containerName="registry-server" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619578 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dbd2bb0-6520-4322-b92e-22dae7a66022" containerName="registry-server" Oct 08 06:38:43 crc kubenswrapper[4958]: E1008 06:38:43.619596 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1995579b-a766-4328-93f2-ad982ec7d4fe" containerName="pruner" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619606 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1995579b-a766-4328-93f2-ad982ec7d4fe" containerName="pruner" Oct 08 06:38:43 crc kubenswrapper[4958]: E1008 06:38:43.619620 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="246ee01a-d75a-4d2d-8eaf-b63787dffff7" containerName="extract-utilities" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619630 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="246ee01a-d75a-4d2d-8eaf-b63787dffff7" containerName="extract-utilities" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619783 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1995579b-a766-4328-93f2-ad982ec7d4fe" containerName="pruner" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619803 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="246ee01a-d75a-4d2d-8eaf-b63787dffff7" containerName="registry-server" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619823 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc2f2f8d-5af1-4fde-b658-851131f8cd40" containerName="registry-server" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619840 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed1e22c8-bc14-4e16-b349-442d188ac881" containerName="oauth-openshift" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619856 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e35d1ee6-16a1-4017-991c-17ebda75d0a0" containerName="registry-server" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.619874 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dbd2bb0-6520-4322-b92e-22dae7a66022" containerName="registry-server" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.620527 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.641391 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58444664d6-pp2nb"] Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.729550 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-service-ca\") pod \"ed1e22c8-bc14-4e16-b349-442d188ac881\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.729614 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-login\") pod \"ed1e22c8-bc14-4e16-b349-442d188ac881\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.729664 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-serving-cert\") pod \"ed1e22c8-bc14-4e16-b349-442d188ac881\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.729713 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz9nb\" (UniqueName: \"kubernetes.io/projected/ed1e22c8-bc14-4e16-b349-442d188ac881-kube-api-access-tz9nb\") pod \"ed1e22c8-bc14-4e16-b349-442d188ac881\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.729752 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-idp-0-file-data\") pod \"ed1e22c8-bc14-4e16-b349-442d188ac881\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.729782 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-router-certs\") pod \"ed1e22c8-bc14-4e16-b349-442d188ac881\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.729813 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-error\") pod \"ed1e22c8-bc14-4e16-b349-442d188ac881\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.729843 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-ocp-branding-template\") pod \"ed1e22c8-bc14-4e16-b349-442d188ac881\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.729890 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-session\") pod \"ed1e22c8-bc14-4e16-b349-442d188ac881\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.729942 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-trusted-ca-bundle\") pod \"ed1e22c8-bc14-4e16-b349-442d188ac881\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.729999 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-cliconfig\") pod \"ed1e22c8-bc14-4e16-b349-442d188ac881\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730030 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-provider-selection\") pod \"ed1e22c8-bc14-4e16-b349-442d188ac881\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730059 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed1e22c8-bc14-4e16-b349-442d188ac881-audit-dir\") pod \"ed1e22c8-bc14-4e16-b349-442d188ac881\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730129 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-audit-policies\") pod \"ed1e22c8-bc14-4e16-b349-442d188ac881\" (UID: \"ed1e22c8-bc14-4e16-b349-442d188ac881\") " Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730332 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-session\") pod 
\"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730382 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-user-template-login\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730447 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cd285101-080b-469a-9d25-f97842b7b0de-audit-dir\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730476 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730503 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cd285101-080b-469a-9d25-f97842b7b0de-audit-policies\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730519 4958 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ed1e22c8-bc14-4e16-b349-442d188ac881" (UID: "ed1e22c8-bc14-4e16-b349-442d188ac881"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730545 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730599 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730651 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730691 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-84b2q\" (UniqueName: \"kubernetes.io/projected/cd285101-080b-469a-9d25-f97842b7b0de-kube-api-access-84b2q\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730729 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-service-ca\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730756 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-router-certs\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730788 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-user-template-error\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730831 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730848 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ed1e22c8-bc14-4e16-b349-442d188ac881" (UID: "ed1e22c8-bc14-4e16-b349-442d188ac881"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730861 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730923 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.730942 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.732258 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed1e22c8-bc14-4e16-b349-442d188ac881-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ed1e22c8-bc14-4e16-b349-442d188ac881" 
(UID: "ed1e22c8-bc14-4e16-b349-442d188ac881"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.733128 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ed1e22c8-bc14-4e16-b349-442d188ac881" (UID: "ed1e22c8-bc14-4e16-b349-442d188ac881"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.734360 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ed1e22c8-bc14-4e16-b349-442d188ac881" (UID: "ed1e22c8-bc14-4e16-b349-442d188ac881"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.744785 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ed1e22c8-bc14-4e16-b349-442d188ac881" (UID: "ed1e22c8-bc14-4e16-b349-442d188ac881"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.745268 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ed1e22c8-bc14-4e16-b349-442d188ac881" (UID: "ed1e22c8-bc14-4e16-b349-442d188ac881"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.745641 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed1e22c8-bc14-4e16-b349-442d188ac881-kube-api-access-tz9nb" (OuterVolumeSpecName: "kube-api-access-tz9nb") pod "ed1e22c8-bc14-4e16-b349-442d188ac881" (UID: "ed1e22c8-bc14-4e16-b349-442d188ac881"). InnerVolumeSpecName "kube-api-access-tz9nb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.749730 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ed1e22c8-bc14-4e16-b349-442d188ac881" (UID: "ed1e22c8-bc14-4e16-b349-442d188ac881"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.750067 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ed1e22c8-bc14-4e16-b349-442d188ac881" (UID: "ed1e22c8-bc14-4e16-b349-442d188ac881"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.750299 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ed1e22c8-bc14-4e16-b349-442d188ac881" (UID: "ed1e22c8-bc14-4e16-b349-442d188ac881"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.750500 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ed1e22c8-bc14-4e16-b349-442d188ac881" (UID: "ed1e22c8-bc14-4e16-b349-442d188ac881"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.751263 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ed1e22c8-bc14-4e16-b349-442d188ac881" (UID: "ed1e22c8-bc14-4e16-b349-442d188ac881"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.751276 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ed1e22c8-bc14-4e16-b349-442d188ac881" (UID: "ed1e22c8-bc14-4e16-b349-442d188ac881"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.760001 4958 generic.go:334] "Generic (PLEG): container finished" podID="ed1e22c8-bc14-4e16-b349-442d188ac881" containerID="44b74263f72f5bd2059d37392f5c4f9932c0885bdd45ad7d9ede6ee970c22218" exitCode=0 Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.760047 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.760050 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" event={"ID":"ed1e22c8-bc14-4e16-b349-442d188ac881","Type":"ContainerDied","Data":"44b74263f72f5bd2059d37392f5c4f9932c0885bdd45ad7d9ede6ee970c22218"} Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.760077 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mhg9b" event={"ID":"ed1e22c8-bc14-4e16-b349-442d188ac881","Type":"ContainerDied","Data":"68c0d2b08f61762b0c9f135288c6a4d3119cfe963b64ecf7e24a231ea1f50e83"} Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.760094 4958 scope.go:117] "RemoveContainer" containerID="44b74263f72f5bd2059d37392f5c4f9932c0885bdd45ad7d9ede6ee970c22218" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.823802 4958 scope.go:117] "RemoveContainer" containerID="44b74263f72f5bd2059d37392f5c4f9932c0885bdd45ad7d9ede6ee970c22218" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.824596 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mhg9b"] Oct 08 06:38:43 crc kubenswrapper[4958]: E1008 06:38:43.825457 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44b74263f72f5bd2059d37392f5c4f9932c0885bdd45ad7d9ede6ee970c22218\": container with ID starting with 44b74263f72f5bd2059d37392f5c4f9932c0885bdd45ad7d9ede6ee970c22218 not found: ID does not exist" containerID="44b74263f72f5bd2059d37392f5c4f9932c0885bdd45ad7d9ede6ee970c22218" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.825492 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44b74263f72f5bd2059d37392f5c4f9932c0885bdd45ad7d9ede6ee970c22218"} err="failed to 
get container status \"44b74263f72f5bd2059d37392f5c4f9932c0885bdd45ad7d9ede6ee970c22218\": rpc error: code = NotFound desc = could not find container \"44b74263f72f5bd2059d37392f5c4f9932c0885bdd45ad7d9ede6ee970c22218\": container with ID starting with 44b74263f72f5bd2059d37392f5c4f9932c0885bdd45ad7d9ede6ee970c22218 not found: ID does not exist" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.831713 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.831763 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.831787 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84b2q\" (UniqueName: \"kubernetes.io/projected/cd285101-080b-469a-9d25-f97842b7b0de-kube-api-access-84b2q\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.831821 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.831838 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-router-certs\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.831854 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-user-template-error\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.831879 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.831899 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 
06:38:43.831918 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-session\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.831939 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-user-template-login\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.831979 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cd285101-080b-469a-9d25-f97842b7b0de-audit-dir\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.832001 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.832015 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cd285101-080b-469a-9d25-f97842b7b0de-audit-policies\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: 
\"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.832041 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.832078 4958 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.832088 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.832100 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.832111 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz9nb\" (UniqueName: \"kubernetes.io/projected/ed1e22c8-bc14-4e16-b349-442d188ac881-kube-api-access-tz9nb\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.832119 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" 
Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.832128 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.832284 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.832294 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.832302 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.832312 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.832321 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ed1e22c8-bc14-4e16-b349-442d188ac881-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.832330 4958 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/ed1e22c8-bc14-4e16-b349-442d188ac881-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.834552 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-service-ca\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.835299 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mhg9b"] Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.837349 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.837678 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.837999 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " 
pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.838082 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cd285101-080b-469a-9d25-f97842b7b0de-audit-dir\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.838252 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cd285101-080b-469a-9d25-f97842b7b0de-audit-policies\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.838393 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-user-template-login\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.838844 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.839010 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.841195 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.841485 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-user-template-error\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.842584 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-router-certs\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.842743 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cd285101-080b-469a-9d25-f97842b7b0de-v4-0-config-system-session\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " 
pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.853124 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84b2q\" (UniqueName: \"kubernetes.io/projected/cd285101-080b-469a-9d25-f97842b7b0de-kube-api-access-84b2q\") pod \"oauth-openshift-58444664d6-pp2nb\" (UID: \"cd285101-080b-469a-9d25-f97842b7b0de\") " pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:43 crc kubenswrapper[4958]: I1008 06:38:43.948397 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:44 crc kubenswrapper[4958]: I1008 06:38:44.237474 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58444664d6-pp2nb"] Oct 08 06:38:44 crc kubenswrapper[4958]: I1008 06:38:44.770875 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" event={"ID":"cd285101-080b-469a-9d25-f97842b7b0de","Type":"ContainerStarted","Data":"e4e34f7b562a8edbee143e3ae724fd3057bebb8c01200c378f5a3234800180b6"} Oct 08 06:38:44 crc kubenswrapper[4958]: I1008 06:38:44.770988 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" event={"ID":"cd285101-080b-469a-9d25-f97842b7b0de","Type":"ContainerStarted","Data":"6c9531664b077b4f33992b72ac4d577011777d585b209ddf846c38292e319ff6"} Oct 08 06:38:44 crc kubenswrapper[4958]: I1008 06:38:44.771335 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:44 crc kubenswrapper[4958]: I1008 06:38:44.798135 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" podStartSLOduration=26.798110886 
podStartE2EDuration="26.798110886s" podCreationTimestamp="2025-10-08 06:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:38:44.794607872 +0000 UTC m=+267.924300473" watchObservedRunningTime="2025-10-08 06:38:44.798110886 +0000 UTC m=+267.927803517" Oct 08 06:38:45 crc kubenswrapper[4958]: I1008 06:38:45.037390 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-58444664d6-pp2nb" Oct 08 06:38:45 crc kubenswrapper[4958]: I1008 06:38:45.586415 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed1e22c8-bc14-4e16-b349-442d188ac881" path="/var/lib/kubelet/pods/ed1e22c8-bc14-4e16-b349-442d188ac881/volumes" Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.561543 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ddt6r"] Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.563191 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ddt6r" podUID="19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" containerName="registry-server" containerID="cri-o://a58a2705acbdd5a830714db4cac790aff0406c7aa13d81f1e552a66e43fbf55d" gracePeriod=30 Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.572879 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vx29t"] Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.573146 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vx29t" podUID="7f1064e4-9481-4a97-a05d-f11d13be3e77" containerName="registry-server" containerID="cri-o://fc459fb2ea0a16c674c880a8a23ce370c12c74816b9ef9c3346807ea249034cd" gracePeriod=30 Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.604311 4958 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wj4ql"] Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.604586 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpkln"] Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.605263 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4vjhq"] Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.605390 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zbtmm"] Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.604786 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" podUID="fce98ad8-5ce1-4dd7-81a0-68606ba22262" containerName="marketplace-operator" containerID="cri-o://fa65a591efd7e2bd108249901c495c32d33561148b2ff9777d464f375b415c4b" gracePeriod=30 Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.609798 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4vjhq" podUID="c4403715-42d1-481f-b403-5f04b1772cd5" containerName="registry-server" containerID="cri-o://0b9fa21be7c67daa729e86249a38ed55cd75ddff7c98d5a617dac92231560d49" gracePeriod=30 Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.610378 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bpkln" podUID="95db505d-76df-4691-ba83-04a1ae4381cb" containerName="registry-server" containerID="cri-o://5711c3e44a0fe732ac43c60d132d7c6d9fe7e9d3fcaa9ed807e53bea1eb1fa4b" gracePeriod=30 Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.625626 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.648043 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zbtmm"] Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.705115 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8eb7ce86-513c-4373-b20a-1eb9eb0dd65d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zbtmm\" (UID: \"8eb7ce86-513c-4373-b20a-1eb9eb0dd65d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.705222 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8eb7ce86-513c-4373-b20a-1eb9eb0dd65d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zbtmm\" (UID: \"8eb7ce86-513c-4373-b20a-1eb9eb0dd65d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.705255 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvh9n\" (UniqueName: \"kubernetes.io/projected/8eb7ce86-513c-4373-b20a-1eb9eb0dd65d-kube-api-access-zvh9n\") pod \"marketplace-operator-79b997595-zbtmm\" (UID: \"8eb7ce86-513c-4373-b20a-1eb9eb0dd65d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.806100 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8eb7ce86-513c-4373-b20a-1eb9eb0dd65d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zbtmm\" (UID: 
\"8eb7ce86-513c-4373-b20a-1eb9eb0dd65d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.806172 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvh9n\" (UniqueName: \"kubernetes.io/projected/8eb7ce86-513c-4373-b20a-1eb9eb0dd65d-kube-api-access-zvh9n\") pod \"marketplace-operator-79b997595-zbtmm\" (UID: \"8eb7ce86-513c-4373-b20a-1eb9eb0dd65d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.806245 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8eb7ce86-513c-4373-b20a-1eb9eb0dd65d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zbtmm\" (UID: \"8eb7ce86-513c-4373-b20a-1eb9eb0dd65d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.807914 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8eb7ce86-513c-4373-b20a-1eb9eb0dd65d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zbtmm\" (UID: \"8eb7ce86-513c-4373-b20a-1eb9eb0dd65d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.818170 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8eb7ce86-513c-4373-b20a-1eb9eb0dd65d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zbtmm\" (UID: \"8eb7ce86-513c-4373-b20a-1eb9eb0dd65d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.821767 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zvh9n\" (UniqueName: \"kubernetes.io/projected/8eb7ce86-513c-4373-b20a-1eb9eb0dd65d-kube-api-access-zvh9n\") pod \"marketplace-operator-79b997595-zbtmm\" (UID: \"8eb7ce86-513c-4373-b20a-1eb9eb0dd65d\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.849409 4958 generic.go:334] "Generic (PLEG): container finished" podID="c4403715-42d1-481f-b403-5f04b1772cd5" containerID="0b9fa21be7c67daa729e86249a38ed55cd75ddff7c98d5a617dac92231560d49" exitCode=0 Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.849471 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vjhq" event={"ID":"c4403715-42d1-481f-b403-5f04b1772cd5","Type":"ContainerDied","Data":"0b9fa21be7c67daa729e86249a38ed55cd75ddff7c98d5a617dac92231560d49"} Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.852280 4958 generic.go:334] "Generic (PLEG): container finished" podID="fce98ad8-5ce1-4dd7-81a0-68606ba22262" containerID="fa65a591efd7e2bd108249901c495c32d33561148b2ff9777d464f375b415c4b" exitCode=0 Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.852353 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" event={"ID":"fce98ad8-5ce1-4dd7-81a0-68606ba22262","Type":"ContainerDied","Data":"fa65a591efd7e2bd108249901c495c32d33561148b2ff9777d464f375b415c4b"} Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.853821 4958 generic.go:334] "Generic (PLEG): container finished" podID="19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" containerID="a58a2705acbdd5a830714db4cac790aff0406c7aa13d81f1e552a66e43fbf55d" exitCode=0 Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.853879 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddt6r" 
event={"ID":"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205","Type":"ContainerDied","Data":"a58a2705acbdd5a830714db4cac790aff0406c7aa13d81f1e552a66e43fbf55d"} Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.855944 4958 generic.go:334] "Generic (PLEG): container finished" podID="7f1064e4-9481-4a97-a05d-f11d13be3e77" containerID="fc459fb2ea0a16c674c880a8a23ce370c12c74816b9ef9c3346807ea249034cd" exitCode=0 Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.855990 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx29t" event={"ID":"7f1064e4-9481-4a97-a05d-f11d13be3e77","Type":"ContainerDied","Data":"fc459fb2ea0a16c674c880a8a23ce370c12c74816b9ef9c3346807ea249034cd"} Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.857906 4958 generic.go:334] "Generic (PLEG): container finished" podID="95db505d-76df-4691-ba83-04a1ae4381cb" containerID="5711c3e44a0fe732ac43c60d132d7c6d9fe7e9d3fcaa9ed807e53bea1eb1fa4b" exitCode=0 Oct 08 06:38:55 crc kubenswrapper[4958]: I1008 06:38:55.857932 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpkln" event={"ID":"95db505d-76df-4691-ba83-04a1ae4381cb","Type":"ContainerDied","Data":"5711c3e44a0fe732ac43c60d132d7c6d9fe7e9d3fcaa9ed807e53bea1eb1fa4b"} Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.033061 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.045425 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.113916 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds6sw\" (UniqueName: \"kubernetes.io/projected/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-kube-api-access-ds6sw\") pod \"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205\" (UID: \"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205\") " Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.114406 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-utilities\") pod \"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205\" (UID: \"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205\") " Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.114515 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-catalog-content\") pod \"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205\" (UID: \"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205\") " Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.115802 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-utilities" (OuterVolumeSpecName: "utilities") pod "19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" (UID: "19ad4cdd-0fb7-4aab-9986-6bbcfddc1205"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.115920 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.120920 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-kube-api-access-ds6sw" (OuterVolumeSpecName: "kube-api-access-ds6sw") pod "19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" (UID: "19ad4cdd-0fb7-4aab-9986-6bbcfddc1205"). InnerVolumeSpecName "kube-api-access-ds6sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.126514 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.154043 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.159788 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.165470 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" (UID: "19ad4cdd-0fb7-4aab-9986-6bbcfddc1205"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.215859 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fce98ad8-5ce1-4dd7-81a0-68606ba22262-marketplace-operator-metrics\") pod \"fce98ad8-5ce1-4dd7-81a0-68606ba22262\" (UID: \"fce98ad8-5ce1-4dd7-81a0-68606ba22262\") " Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.215900 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4403715-42d1-481f-b403-5f04b1772cd5-utilities\") pod \"c4403715-42d1-481f-b403-5f04b1772cd5\" (UID: \"c4403715-42d1-481f-b403-5f04b1772cd5\") " Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.215928 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmtxd\" (UniqueName: \"kubernetes.io/projected/95db505d-76df-4691-ba83-04a1ae4381cb-kube-api-access-hmtxd\") pod \"95db505d-76df-4691-ba83-04a1ae4381cb\" (UID: \"95db505d-76df-4691-ba83-04a1ae4381cb\") " Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.215946 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5qfv\" (UniqueName: \"kubernetes.io/projected/fce98ad8-5ce1-4dd7-81a0-68606ba22262-kube-api-access-q5qfv\") pod \"fce98ad8-5ce1-4dd7-81a0-68606ba22262\" (UID: \"fce98ad8-5ce1-4dd7-81a0-68606ba22262\") " Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.215978 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr698\" (UniqueName: \"kubernetes.io/projected/c4403715-42d1-481f-b403-5f04b1772cd5-kube-api-access-vr698\") pod \"c4403715-42d1-481f-b403-5f04b1772cd5\" (UID: \"c4403715-42d1-481f-b403-5f04b1772cd5\") " Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.216004 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4403715-42d1-481f-b403-5f04b1772cd5-catalog-content\") pod \"c4403715-42d1-481f-b403-5f04b1772cd5\" (UID: \"c4403715-42d1-481f-b403-5f04b1772cd5\") " Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.216022 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1064e4-9481-4a97-a05d-f11d13be3e77-catalog-content\") pod \"7f1064e4-9481-4a97-a05d-f11d13be3e77\" (UID: \"7f1064e4-9481-4a97-a05d-f11d13be3e77\") " Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.216042 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fce98ad8-5ce1-4dd7-81a0-68606ba22262-marketplace-trusted-ca\") pod \"fce98ad8-5ce1-4dd7-81a0-68606ba22262\" (UID: \"fce98ad8-5ce1-4dd7-81a0-68606ba22262\") " Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.216067 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95db505d-76df-4691-ba83-04a1ae4381cb-utilities\") pod \"95db505d-76df-4691-ba83-04a1ae4381cb\" (UID: \"95db505d-76df-4691-ba83-04a1ae4381cb\") " Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.216084 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1064e4-9481-4a97-a05d-f11d13be3e77-utilities\") pod \"7f1064e4-9481-4a97-a05d-f11d13be3e77\" (UID: \"7f1064e4-9481-4a97-a05d-f11d13be3e77\") " Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.216099 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95db505d-76df-4691-ba83-04a1ae4381cb-catalog-content\") pod \"95db505d-76df-4691-ba83-04a1ae4381cb\" (UID: 
\"95db505d-76df-4691-ba83-04a1ae4381cb\") " Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.216721 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrtdm\" (UniqueName: \"kubernetes.io/projected/7f1064e4-9481-4a97-a05d-f11d13be3e77-kube-api-access-xrtdm\") pod \"7f1064e4-9481-4a97-a05d-f11d13be3e77\" (UID: \"7f1064e4-9481-4a97-a05d-f11d13be3e77\") " Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.216885 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds6sw\" (UniqueName: \"kubernetes.io/projected/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-kube-api-access-ds6sw\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.216897 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.216907 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.217990 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4403715-42d1-481f-b403-5f04b1772cd5-utilities" (OuterVolumeSpecName: "utilities") pod "c4403715-42d1-481f-b403-5f04b1772cd5" (UID: "c4403715-42d1-481f-b403-5f04b1772cd5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.220010 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95db505d-76df-4691-ba83-04a1ae4381cb-utilities" (OuterVolumeSpecName: "utilities") pod "95db505d-76df-4691-ba83-04a1ae4381cb" (UID: "95db505d-76df-4691-ba83-04a1ae4381cb"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.220027 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fce98ad8-5ce1-4dd7-81a0-68606ba22262-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "fce98ad8-5ce1-4dd7-81a0-68606ba22262" (UID: "fce98ad8-5ce1-4dd7-81a0-68606ba22262"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.220208 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f1064e4-9481-4a97-a05d-f11d13be3e77-utilities" (OuterVolumeSpecName: "utilities") pod "7f1064e4-9481-4a97-a05d-f11d13be3e77" (UID: "7f1064e4-9481-4a97-a05d-f11d13be3e77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.220792 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fce98ad8-5ce1-4dd7-81a0-68606ba22262-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "fce98ad8-5ce1-4dd7-81a0-68606ba22262" (UID: "fce98ad8-5ce1-4dd7-81a0-68606ba22262"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.220933 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4403715-42d1-481f-b403-5f04b1772cd5-kube-api-access-vr698" (OuterVolumeSpecName: "kube-api-access-vr698") pod "c4403715-42d1-481f-b403-5f04b1772cd5" (UID: "c4403715-42d1-481f-b403-5f04b1772cd5"). InnerVolumeSpecName "kube-api-access-vr698". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.225442 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95db505d-76df-4691-ba83-04a1ae4381cb-kube-api-access-hmtxd" (OuterVolumeSpecName: "kube-api-access-hmtxd") pod "95db505d-76df-4691-ba83-04a1ae4381cb" (UID: "95db505d-76df-4691-ba83-04a1ae4381cb"). InnerVolumeSpecName "kube-api-access-hmtxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.225627 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f1064e4-9481-4a97-a05d-f11d13be3e77-kube-api-access-xrtdm" (OuterVolumeSpecName: "kube-api-access-xrtdm") pod "7f1064e4-9481-4a97-a05d-f11d13be3e77" (UID: "7f1064e4-9481-4a97-a05d-f11d13be3e77"). InnerVolumeSpecName "kube-api-access-xrtdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.231866 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fce98ad8-5ce1-4dd7-81a0-68606ba22262-kube-api-access-q5qfv" (OuterVolumeSpecName: "kube-api-access-q5qfv") pod "fce98ad8-5ce1-4dd7-81a0-68606ba22262" (UID: "fce98ad8-5ce1-4dd7-81a0-68606ba22262"). InnerVolumeSpecName "kube-api-access-q5qfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.231918 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95db505d-76df-4691-ba83-04a1ae4381cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95db505d-76df-4691-ba83-04a1ae4381cb" (UID: "95db505d-76df-4691-ba83-04a1ae4381cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.277669 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f1064e4-9481-4a97-a05d-f11d13be3e77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f1064e4-9481-4a97-a05d-f11d13be3e77" (UID: "7f1064e4-9481-4a97-a05d-f11d13be3e77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.305113 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4403715-42d1-481f-b403-5f04b1772cd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4403715-42d1-481f-b403-5f04b1772cd5" (UID: "c4403715-42d1-481f-b403-5f04b1772cd5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.317788 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4403715-42d1-481f-b403-5f04b1772cd5-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.317824 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1064e4-9481-4a97-a05d-f11d13be3e77-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.317839 4958 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fce98ad8-5ce1-4dd7-81a0-68606ba22262-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.317854 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95db505d-76df-4691-ba83-04a1ae4381cb-utilities\") on 
node \"crc\" DevicePath \"\"" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.317865 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1064e4-9481-4a97-a05d-f11d13be3e77-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.317876 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95db505d-76df-4691-ba83-04a1ae4381cb-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.317886 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrtdm\" (UniqueName: \"kubernetes.io/projected/7f1064e4-9481-4a97-a05d-f11d13be3e77-kube-api-access-xrtdm\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.317897 4958 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fce98ad8-5ce1-4dd7-81a0-68606ba22262-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.317910 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4403715-42d1-481f-b403-5f04b1772cd5-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.317921 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmtxd\" (UniqueName: \"kubernetes.io/projected/95db505d-76df-4691-ba83-04a1ae4381cb-kube-api-access-hmtxd\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.317932 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5qfv\" (UniqueName: \"kubernetes.io/projected/fce98ad8-5ce1-4dd7-81a0-68606ba22262-kube-api-access-q5qfv\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:56 crc kubenswrapper[4958]: 
I1008 06:38:56.317947 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr698\" (UniqueName: \"kubernetes.io/projected/c4403715-42d1-481f-b403-5f04b1772cd5-kube-api-access-vr698\") on node \"crc\" DevicePath \"\"" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.533053 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zbtmm"] Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.867134 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vjhq" event={"ID":"c4403715-42d1-481f-b403-5f04b1772cd5","Type":"ContainerDied","Data":"758d6bdf2a655bd15ea3f20905d792f16ea0f6cd4966f0b7048730d2056652db"} Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.867532 4958 scope.go:117] "RemoveContainer" containerID="0b9fa21be7c67daa729e86249a38ed55cd75ddff7c98d5a617dac92231560d49" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.867167 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4vjhq" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.874793 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddt6r" event={"ID":"19ad4cdd-0fb7-4aab-9986-6bbcfddc1205","Type":"ContainerDied","Data":"49cb2652778e741143e3c77946666ca4fdf1b9e0774ee4a2148d2e3596062682"} Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.874835 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ddt6r" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.877896 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx29t" event={"ID":"7f1064e4-9481-4a97-a05d-f11d13be3e77","Type":"ContainerDied","Data":"54416aee36d089f2a282402f80a4a58d2b5bce6b6cc1c640b159980f350eb6a1"} Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.877989 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vx29t" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.886162 4958 scope.go:117] "RemoveContainer" containerID="cc57746aad3cc841e827184b41e26bfb0420a0c193fcf0b7b1f58dbcd9c77968" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.886740 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.889748 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wj4ql" event={"ID":"fce98ad8-5ce1-4dd7-81a0-68606ba22262","Type":"ContainerDied","Data":"ba792a288bc888fb247f81c3be7f5a17a7484a550d23feb622088da941f56f5e"} Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.893782 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" event={"ID":"8eb7ce86-513c-4373-b20a-1eb9eb0dd65d","Type":"ContainerStarted","Data":"83192e3f8084dc6fa8c1fd5482a913a086f7d4e7cd0fa9c39349e207ad41a234"} Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.893834 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" event={"ID":"8eb7ce86-513c-4373-b20a-1eb9eb0dd65d","Type":"ContainerStarted","Data":"4c7cefb6e8e302f0d3a69b27a0772c5ab09197af3f7abdf6696904696634130d"} Oct 08 
06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.894695 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.896555 4958 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zbtmm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" start-of-body= Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.896848 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" podUID="8eb7ce86-513c-4373-b20a-1eb9eb0dd65d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.900294 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpkln" event={"ID":"95db505d-76df-4691-ba83-04a1ae4381cb","Type":"ContainerDied","Data":"565bb40e83a7546a28d29798fac5b2c59e1c5026756d807b06f0e8fd039ad125"} Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.900404 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpkln" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.919767 4958 scope.go:117] "RemoveContainer" containerID="137115a99a86e8668e523759e4306a7723c3bd0116ad8f42aba16515c10159d8" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.922215 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" podStartSLOduration=1.922195689 podStartE2EDuration="1.922195689s" podCreationTimestamp="2025-10-08 06:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:38:56.915634932 +0000 UTC m=+280.045327573" watchObservedRunningTime="2025-10-08 06:38:56.922195689 +0000 UTC m=+280.051888290" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.930443 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4vjhq"] Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.936017 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4vjhq"] Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.948073 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vx29t"] Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.949673 4958 scope.go:117] "RemoveContainer" containerID="a58a2705acbdd5a830714db4cac790aff0406c7aa13d81f1e552a66e43fbf55d" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.956201 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vx29t"] Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.960209 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ddt6r"] Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.967357 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-ddt6r"] Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.971515 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpkln"] Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.974541 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpkln"] Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.981048 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wj4ql"] Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.981232 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wj4ql"] Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.981802 4958 scope.go:117] "RemoveContainer" containerID="e4891c9b50ce76ee4590c2b5b8593b4d9832cc6627decfa8478a6d4182ada315" Oct 08 06:38:56 crc kubenswrapper[4958]: I1008 06:38:56.995746 4958 scope.go:117] "RemoveContainer" containerID="9e501bca235983bf779846afb7a58458404da499d4490a9fe23631e000724545" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.012826 4958 scope.go:117] "RemoveContainer" containerID="fc459fb2ea0a16c674c880a8a23ce370c12c74816b9ef9c3346807ea249034cd" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.034211 4958 scope.go:117] "RemoveContainer" containerID="61242440ad9fc300f362c49011a48760bd0e7162d18df9816f70f92cb3606376" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.046439 4958 scope.go:117] "RemoveContainer" containerID="60b7e2c1bb026cb1eccca639600a89d2fd163d53c1433950f3be337fb009255f" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.058172 4958 scope.go:117] "RemoveContainer" containerID="fa65a591efd7e2bd108249901c495c32d33561148b2ff9777d464f375b415c4b" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.073190 4958 scope.go:117] "RemoveContainer" 
containerID="5711c3e44a0fe732ac43c60d132d7c6d9fe7e9d3fcaa9ed807e53bea1eb1fa4b" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.087849 4958 scope.go:117] "RemoveContainer" containerID="ea7547cb3874432f164deb6618fc2d1c042e8ba2f08596c62b02fcfec0747c9b" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.113501 4958 scope.go:117] "RemoveContainer" containerID="f490e7e094962f1df9ba3e58c450b8a96c4e724bee2a4ce9e2369b5d2c049a4a" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.589279 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" path="/var/lib/kubelet/pods/19ad4cdd-0fb7-4aab-9986-6bbcfddc1205/volumes" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.591270 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f1064e4-9481-4a97-a05d-f11d13be3e77" path="/var/lib/kubelet/pods/7f1064e4-9481-4a97-a05d-f11d13be3e77/volumes" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.592714 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95db505d-76df-4691-ba83-04a1ae4381cb" path="/var/lib/kubelet/pods/95db505d-76df-4691-ba83-04a1ae4381cb/volumes" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.595174 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4403715-42d1-481f-b403-5f04b1772cd5" path="/var/lib/kubelet/pods/c4403715-42d1-481f-b403-5f04b1772cd5/volumes" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.596607 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fce98ad8-5ce1-4dd7-81a0-68606ba22262" path="/var/lib/kubelet/pods/fce98ad8-5ce1-4dd7-81a0-68606ba22262/volumes" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.789470 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xz5ng"] Oct 08 06:38:57 crc kubenswrapper[4958]: E1008 06:38:57.789925 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="95db505d-76df-4691-ba83-04a1ae4381cb" containerName="extract-content" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.789983 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="95db505d-76df-4691-ba83-04a1ae4381cb" containerName="extract-content" Oct 08 06:38:57 crc kubenswrapper[4958]: E1008 06:38:57.790048 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce98ad8-5ce1-4dd7-81a0-68606ba22262" containerName="marketplace-operator" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.790063 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce98ad8-5ce1-4dd7-81a0-68606ba22262" containerName="marketplace-operator" Oct 08 06:38:57 crc kubenswrapper[4958]: E1008 06:38:57.790086 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f1064e4-9481-4a97-a05d-f11d13be3e77" containerName="extract-utilities" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.790139 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f1064e4-9481-4a97-a05d-f11d13be3e77" containerName="extract-utilities" Oct 08 06:38:57 crc kubenswrapper[4958]: E1008 06:38:57.790157 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95db505d-76df-4691-ba83-04a1ae4381cb" containerName="registry-server" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.790169 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="95db505d-76df-4691-ba83-04a1ae4381cb" containerName="registry-server" Oct 08 06:38:57 crc kubenswrapper[4958]: E1008 06:38:57.790229 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4403715-42d1-481f-b403-5f04b1772cd5" containerName="registry-server" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.790243 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4403715-42d1-481f-b403-5f04b1772cd5" containerName="registry-server" Oct 08 06:38:57 crc kubenswrapper[4958]: E1008 06:38:57.790262 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" containerName="registry-server" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.790326 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" containerName="registry-server" Oct 08 06:38:57 crc kubenswrapper[4958]: E1008 06:38:57.790348 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4403715-42d1-481f-b403-5f04b1772cd5" containerName="extract-utilities" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.790362 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4403715-42d1-481f-b403-5f04b1772cd5" containerName="extract-utilities" Oct 08 06:38:57 crc kubenswrapper[4958]: E1008 06:38:57.790416 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" containerName="extract-content" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.790429 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" containerName="extract-content" Oct 08 06:38:57 crc kubenswrapper[4958]: E1008 06:38:57.790446 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f1064e4-9481-4a97-a05d-f11d13be3e77" containerName="registry-server" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.790462 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f1064e4-9481-4a97-a05d-f11d13be3e77" containerName="registry-server" Oct 08 06:38:57 crc kubenswrapper[4958]: E1008 06:38:57.790520 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" containerName="extract-utilities" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.790532 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" containerName="extract-utilities" Oct 08 06:38:57 crc kubenswrapper[4958]: E1008 06:38:57.790547 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="95db505d-76df-4691-ba83-04a1ae4381cb" containerName="extract-utilities" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.790774 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="95db505d-76df-4691-ba83-04a1ae4381cb" containerName="extract-utilities" Oct 08 06:38:57 crc kubenswrapper[4958]: E1008 06:38:57.790793 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f1064e4-9481-4a97-a05d-f11d13be3e77" containerName="extract-content" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.790805 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f1064e4-9481-4a97-a05d-f11d13be3e77" containerName="extract-content" Oct 08 06:38:57 crc kubenswrapper[4958]: E1008 06:38:57.790822 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4403715-42d1-481f-b403-5f04b1772cd5" containerName="extract-content" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.790871 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4403715-42d1-481f-b403-5f04b1772cd5" containerName="extract-content" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.791157 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce98ad8-5ce1-4dd7-81a0-68606ba22262" containerName="marketplace-operator" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.791178 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ad4cdd-0fb7-4aab-9986-6bbcfddc1205" containerName="registry-server" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.791192 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4403715-42d1-481f-b403-5f04b1772cd5" containerName="registry-server" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.791258 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f1064e4-9481-4a97-a05d-f11d13be3e77" containerName="registry-server" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.791275 4958 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="95db505d-76df-4691-ba83-04a1ae4381cb" containerName="registry-server" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.794401 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xz5ng" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.797399 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.803476 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xz5ng"] Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.841201 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5615b0-6b66-4d00-89b5-93bd6aa32858-catalog-content\") pod \"redhat-marketplace-xz5ng\" (UID: \"7d5615b0-6b66-4d00-89b5-93bd6aa32858\") " pod="openshift-marketplace/redhat-marketplace-xz5ng" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.841688 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4wtf\" (UniqueName: \"kubernetes.io/projected/7d5615b0-6b66-4d00-89b5-93bd6aa32858-kube-api-access-k4wtf\") pod \"redhat-marketplace-xz5ng\" (UID: \"7d5615b0-6b66-4d00-89b5-93bd6aa32858\") " pod="openshift-marketplace/redhat-marketplace-xz5ng" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.841914 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5615b0-6b66-4d00-89b5-93bd6aa32858-utilities\") pod \"redhat-marketplace-xz5ng\" (UID: \"7d5615b0-6b66-4d00-89b5-93bd6aa32858\") " pod="openshift-marketplace/redhat-marketplace-xz5ng" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.914027 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zbtmm" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.943395 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4wtf\" (UniqueName: \"kubernetes.io/projected/7d5615b0-6b66-4d00-89b5-93bd6aa32858-kube-api-access-k4wtf\") pod \"redhat-marketplace-xz5ng\" (UID: \"7d5615b0-6b66-4d00-89b5-93bd6aa32858\") " pod="openshift-marketplace/redhat-marketplace-xz5ng" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.943508 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5615b0-6b66-4d00-89b5-93bd6aa32858-utilities\") pod \"redhat-marketplace-xz5ng\" (UID: \"7d5615b0-6b66-4d00-89b5-93bd6aa32858\") " pod="openshift-marketplace/redhat-marketplace-xz5ng" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.943550 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5615b0-6b66-4d00-89b5-93bd6aa32858-catalog-content\") pod \"redhat-marketplace-xz5ng\" (UID: \"7d5615b0-6b66-4d00-89b5-93bd6aa32858\") " pod="openshift-marketplace/redhat-marketplace-xz5ng" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.943998 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5615b0-6b66-4d00-89b5-93bd6aa32858-utilities\") pod \"redhat-marketplace-xz5ng\" (UID: \"7d5615b0-6b66-4d00-89b5-93bd6aa32858\") " pod="openshift-marketplace/redhat-marketplace-xz5ng" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.944375 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5615b0-6b66-4d00-89b5-93bd6aa32858-catalog-content\") pod \"redhat-marketplace-xz5ng\" (UID: \"7d5615b0-6b66-4d00-89b5-93bd6aa32858\") " 
pod="openshift-marketplace/redhat-marketplace-xz5ng" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.982417 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-shghl"] Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.984455 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-shghl" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.988929 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-shghl"] Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.989496 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 08 06:38:57 crc kubenswrapper[4958]: I1008 06:38:57.989538 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4wtf\" (UniqueName: \"kubernetes.io/projected/7d5615b0-6b66-4d00-89b5-93bd6aa32858-kube-api-access-k4wtf\") pod \"redhat-marketplace-xz5ng\" (UID: \"7d5615b0-6b66-4d00-89b5-93bd6aa32858\") " pod="openshift-marketplace/redhat-marketplace-xz5ng" Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.044489 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87768529-d604-44d6-b659-24d6464a2076-utilities\") pod \"certified-operators-shghl\" (UID: \"87768529-d604-44d6-b659-24d6464a2076\") " pod="openshift-marketplace/certified-operators-shghl" Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.044552 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87768529-d604-44d6-b659-24d6464a2076-catalog-content\") pod \"certified-operators-shghl\" (UID: \"87768529-d604-44d6-b659-24d6464a2076\") " pod="openshift-marketplace/certified-operators-shghl" 
Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.044641 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85kt8\" (UniqueName: \"kubernetes.io/projected/87768529-d604-44d6-b659-24d6464a2076-kube-api-access-85kt8\") pod \"certified-operators-shghl\" (UID: \"87768529-d604-44d6-b659-24d6464a2076\") " pod="openshift-marketplace/certified-operators-shghl" Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.111316 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xz5ng" Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.145528 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87768529-d604-44d6-b659-24d6464a2076-catalog-content\") pod \"certified-operators-shghl\" (UID: \"87768529-d604-44d6-b659-24d6464a2076\") " pod="openshift-marketplace/certified-operators-shghl" Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.145668 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85kt8\" (UniqueName: \"kubernetes.io/projected/87768529-d604-44d6-b659-24d6464a2076-kube-api-access-85kt8\") pod \"certified-operators-shghl\" (UID: \"87768529-d604-44d6-b659-24d6464a2076\") " pod="openshift-marketplace/certified-operators-shghl" Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.145779 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87768529-d604-44d6-b659-24d6464a2076-utilities\") pod \"certified-operators-shghl\" (UID: \"87768529-d604-44d6-b659-24d6464a2076\") " pod="openshift-marketplace/certified-operators-shghl" Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.146100 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/87768529-d604-44d6-b659-24d6464a2076-catalog-content\") pod \"certified-operators-shghl\" (UID: \"87768529-d604-44d6-b659-24d6464a2076\") " pod="openshift-marketplace/certified-operators-shghl" Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.146452 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87768529-d604-44d6-b659-24d6464a2076-utilities\") pod \"certified-operators-shghl\" (UID: \"87768529-d604-44d6-b659-24d6464a2076\") " pod="openshift-marketplace/certified-operators-shghl" Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.166380 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85kt8\" (UniqueName: \"kubernetes.io/projected/87768529-d604-44d6-b659-24d6464a2076-kube-api-access-85kt8\") pod \"certified-operators-shghl\" (UID: \"87768529-d604-44d6-b659-24d6464a2076\") " pod="openshift-marketplace/certified-operators-shghl" Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.316590 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-shghl" Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.324818 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xz5ng"] Oct 08 06:38:58 crc kubenswrapper[4958]: W1008 06:38:58.331270 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d5615b0_6b66_4d00_89b5_93bd6aa32858.slice/crio-49c4b4294a083d219c11aca62a26d1c1a429f8bc6037f1bcf78e16e71c4c7191 WatchSource:0}: Error finding container 49c4b4294a083d219c11aca62a26d1c1a429f8bc6037f1bcf78e16e71c4c7191: Status 404 returned error can't find the container with id 49c4b4294a083d219c11aca62a26d1c1a429f8bc6037f1bcf78e16e71c4c7191 Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.719077 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-shghl"] Oct 08 06:38:58 crc kubenswrapper[4958]: W1008 06:38:58.726243 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87768529_d604_44d6_b659_24d6464a2076.slice/crio-33d64492d980a803dc1be26689219b443c09e8d5dfda3d9013c337647e635481 WatchSource:0}: Error finding container 33d64492d980a803dc1be26689219b443c09e8d5dfda3d9013c337647e635481: Status 404 returned error can't find the container with id 33d64492d980a803dc1be26689219b443c09e8d5dfda3d9013c337647e635481 Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.919617 4958 generic.go:334] "Generic (PLEG): container finished" podID="7d5615b0-6b66-4d00-89b5-93bd6aa32858" containerID="8da279f727b702839a5a368637e4cde0d42fd7a59b31c2cdea44a55b8b1c864c" exitCode=0 Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.919697 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xz5ng" 
event={"ID":"7d5615b0-6b66-4d00-89b5-93bd6aa32858","Type":"ContainerDied","Data":"8da279f727b702839a5a368637e4cde0d42fd7a59b31c2cdea44a55b8b1c864c"} Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.919747 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xz5ng" event={"ID":"7d5615b0-6b66-4d00-89b5-93bd6aa32858","Type":"ContainerStarted","Data":"49c4b4294a083d219c11aca62a26d1c1a429f8bc6037f1bcf78e16e71c4c7191"} Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.925283 4958 generic.go:334] "Generic (PLEG): container finished" podID="87768529-d604-44d6-b659-24d6464a2076" containerID="63b23bb5bc14e7184d053260545fec042a8c3effa209d4e800c8f7c1a56092ca" exitCode=0 Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.926190 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shghl" event={"ID":"87768529-d604-44d6-b659-24d6464a2076","Type":"ContainerDied","Data":"63b23bb5bc14e7184d053260545fec042a8c3effa209d4e800c8f7c1a56092ca"} Oct 08 06:38:58 crc kubenswrapper[4958]: I1008 06:38:58.926219 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shghl" event={"ID":"87768529-d604-44d6-b659-24d6464a2076","Type":"ContainerStarted","Data":"33d64492d980a803dc1be26689219b443c09e8d5dfda3d9013c337647e635481"} Oct 08 06:38:59 crc kubenswrapper[4958]: I1008 06:38:59.930503 4958 generic.go:334] "Generic (PLEG): container finished" podID="7d5615b0-6b66-4d00-89b5-93bd6aa32858" containerID="4e7f38dfbdf74949914cfd0036f2b6a616125aea17054709b5d81506039e4d13" exitCode=0 Oct 08 06:38:59 crc kubenswrapper[4958]: I1008 06:38:59.930593 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xz5ng" event={"ID":"7d5615b0-6b66-4d00-89b5-93bd6aa32858","Type":"ContainerDied","Data":"4e7f38dfbdf74949914cfd0036f2b6a616125aea17054709b5d81506039e4d13"} Oct 08 06:38:59 crc kubenswrapper[4958]: I1008 
06:38:59.932212 4958 generic.go:334] "Generic (PLEG): container finished" podID="87768529-d604-44d6-b659-24d6464a2076" containerID="4d5ad88b23cdad91b979874745420265d6b5ef69a410e76b718e2ba188026c0e" exitCode=0 Oct 08 06:38:59 crc kubenswrapper[4958]: I1008 06:38:59.932241 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shghl" event={"ID":"87768529-d604-44d6-b659-24d6464a2076","Type":"ContainerDied","Data":"4d5ad88b23cdad91b979874745420265d6b5ef69a410e76b718e2ba188026c0e"} Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.180782 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j6jqg"] Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.181909 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j6jqg" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.184533 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.201652 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j6jqg"] Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.284773 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c0f741-b149-4fc0-b2de-405c5d2bc0db-catalog-content\") pod \"redhat-operators-j6jqg\" (UID: \"59c0f741-b149-4fc0-b2de-405c5d2bc0db\") " pod="openshift-marketplace/redhat-operators-j6jqg" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.284815 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr4bk\" (UniqueName: \"kubernetes.io/projected/59c0f741-b149-4fc0-b2de-405c5d2bc0db-kube-api-access-vr4bk\") pod \"redhat-operators-j6jqg\" (UID: 
\"59c0f741-b149-4fc0-b2de-405c5d2bc0db\") " pod="openshift-marketplace/redhat-operators-j6jqg" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.284874 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c0f741-b149-4fc0-b2de-405c5d2bc0db-utilities\") pod \"redhat-operators-j6jqg\" (UID: \"59c0f741-b149-4fc0-b2de-405c5d2bc0db\") " pod="openshift-marketplace/redhat-operators-j6jqg" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.387862 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c0f741-b149-4fc0-b2de-405c5d2bc0db-catalog-content\") pod \"redhat-operators-j6jqg\" (UID: \"59c0f741-b149-4fc0-b2de-405c5d2bc0db\") " pod="openshift-marketplace/redhat-operators-j6jqg" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.388247 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr4bk\" (UniqueName: \"kubernetes.io/projected/59c0f741-b149-4fc0-b2de-405c5d2bc0db-kube-api-access-vr4bk\") pod \"redhat-operators-j6jqg\" (UID: \"59c0f741-b149-4fc0-b2de-405c5d2bc0db\") " pod="openshift-marketplace/redhat-operators-j6jqg" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.388306 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c0f741-b149-4fc0-b2de-405c5d2bc0db-utilities\") pod \"redhat-operators-j6jqg\" (UID: \"59c0f741-b149-4fc0-b2de-405c5d2bc0db\") " pod="openshift-marketplace/redhat-operators-j6jqg" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.388439 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c0f741-b149-4fc0-b2de-405c5d2bc0db-catalog-content\") pod \"redhat-operators-j6jqg\" (UID: \"59c0f741-b149-4fc0-b2de-405c5d2bc0db\") " 
pod="openshift-marketplace/redhat-operators-j6jqg" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.388717 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c0f741-b149-4fc0-b2de-405c5d2bc0db-utilities\") pod \"redhat-operators-j6jqg\" (UID: \"59c0f741-b149-4fc0-b2de-405c5d2bc0db\") " pod="openshift-marketplace/redhat-operators-j6jqg" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.394740 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mhsrr"] Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.396905 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhsrr"] Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.397133 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhsrr" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.402019 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.425144 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr4bk\" (UniqueName: \"kubernetes.io/projected/59c0f741-b149-4fc0-b2de-405c5d2bc0db-kube-api-access-vr4bk\") pod \"redhat-operators-j6jqg\" (UID: \"59c0f741-b149-4fc0-b2de-405c5d2bc0db\") " pod="openshift-marketplace/redhat-operators-j6jqg" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.489380 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4125d6f2-8177-45b6-9b72-ed00864ff1ea-utilities\") pod \"community-operators-mhsrr\" (UID: \"4125d6f2-8177-45b6-9b72-ed00864ff1ea\") " pod="openshift-marketplace/community-operators-mhsrr" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 
06:39:00.489450 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4125d6f2-8177-45b6-9b72-ed00864ff1ea-catalog-content\") pod \"community-operators-mhsrr\" (UID: \"4125d6f2-8177-45b6-9b72-ed00864ff1ea\") " pod="openshift-marketplace/community-operators-mhsrr" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.489478 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xp2t\" (UniqueName: \"kubernetes.io/projected/4125d6f2-8177-45b6-9b72-ed00864ff1ea-kube-api-access-9xp2t\") pod \"community-operators-mhsrr\" (UID: \"4125d6f2-8177-45b6-9b72-ed00864ff1ea\") " pod="openshift-marketplace/community-operators-mhsrr" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.498395 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j6jqg" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.590981 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4125d6f2-8177-45b6-9b72-ed00864ff1ea-utilities\") pod \"community-operators-mhsrr\" (UID: \"4125d6f2-8177-45b6-9b72-ed00864ff1ea\") " pod="openshift-marketplace/community-operators-mhsrr" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.591230 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4125d6f2-8177-45b6-9b72-ed00864ff1ea-catalog-content\") pod \"community-operators-mhsrr\" (UID: \"4125d6f2-8177-45b6-9b72-ed00864ff1ea\") " pod="openshift-marketplace/community-operators-mhsrr" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.591250 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xp2t\" (UniqueName: 
\"kubernetes.io/projected/4125d6f2-8177-45b6-9b72-ed00864ff1ea-kube-api-access-9xp2t\") pod \"community-operators-mhsrr\" (UID: \"4125d6f2-8177-45b6-9b72-ed00864ff1ea\") " pod="openshift-marketplace/community-operators-mhsrr" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.591705 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4125d6f2-8177-45b6-9b72-ed00864ff1ea-utilities\") pod \"community-operators-mhsrr\" (UID: \"4125d6f2-8177-45b6-9b72-ed00864ff1ea\") " pod="openshift-marketplace/community-operators-mhsrr" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.591818 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4125d6f2-8177-45b6-9b72-ed00864ff1ea-catalog-content\") pod \"community-operators-mhsrr\" (UID: \"4125d6f2-8177-45b6-9b72-ed00864ff1ea\") " pod="openshift-marketplace/community-operators-mhsrr" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.619803 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xp2t\" (UniqueName: \"kubernetes.io/projected/4125d6f2-8177-45b6-9b72-ed00864ff1ea-kube-api-access-9xp2t\") pod \"community-operators-mhsrr\" (UID: \"4125d6f2-8177-45b6-9b72-ed00864ff1ea\") " pod="openshift-marketplace/community-operators-mhsrr" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.770874 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mhsrr" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.917074 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j6jqg"] Oct 08 06:39:00 crc kubenswrapper[4958]: W1008 06:39:00.927152 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59c0f741_b149_4fc0_b2de_405c5d2bc0db.slice/crio-b5c124f33eb2a46cf0a49076ec80ffdc594c2bb03894b95b77573367be979799 WatchSource:0}: Error finding container b5c124f33eb2a46cf0a49076ec80ffdc594c2bb03894b95b77573367be979799: Status 404 returned error can't find the container with id b5c124f33eb2a46cf0a49076ec80ffdc594c2bb03894b95b77573367be979799 Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.938144 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6jqg" event={"ID":"59c0f741-b149-4fc0-b2de-405c5d2bc0db","Type":"ContainerStarted","Data":"b5c124f33eb2a46cf0a49076ec80ffdc594c2bb03894b95b77573367be979799"} Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.942494 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xz5ng" event={"ID":"7d5615b0-6b66-4d00-89b5-93bd6aa32858","Type":"ContainerStarted","Data":"673767c1b372f343b410511772b08742c9736b0e74f9eae3d7b2393060b7e6c7"} Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.945273 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shghl" event={"ID":"87768529-d604-44d6-b659-24d6464a2076","Type":"ContainerStarted","Data":"02f2f9489a649e1f06825fd5533b4e13ebc168a3a10b1fb79c64ddda79320549"} Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.966885 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xz5ng" podStartSLOduration=2.5410886379999997 
podStartE2EDuration="3.966862596s" podCreationTimestamp="2025-10-08 06:38:57 +0000 UTC" firstStartedPulling="2025-10-08 06:38:58.926283116 +0000 UTC m=+282.055975727" lastFinishedPulling="2025-10-08 06:39:00.352057084 +0000 UTC m=+283.481749685" observedRunningTime="2025-10-08 06:39:00.965445292 +0000 UTC m=+284.095137893" watchObservedRunningTime="2025-10-08 06:39:00.966862596 +0000 UTC m=+284.096555197" Oct 08 06:39:00 crc kubenswrapper[4958]: I1008 06:39:00.983220 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-shghl" podStartSLOduration=2.5836969869999997 podStartE2EDuration="3.983188007s" podCreationTimestamp="2025-10-08 06:38:57 +0000 UTC" firstStartedPulling="2025-10-08 06:38:58.926642354 +0000 UTC m=+282.056334955" lastFinishedPulling="2025-10-08 06:39:00.326133374 +0000 UTC m=+283.455825975" observedRunningTime="2025-10-08 06:39:00.980628696 +0000 UTC m=+284.110321297" watchObservedRunningTime="2025-10-08 06:39:00.983188007 +0000 UTC m=+284.112880608" Oct 08 06:39:01 crc kubenswrapper[4958]: I1008 06:39:01.197320 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhsrr"] Oct 08 06:39:01 crc kubenswrapper[4958]: I1008 06:39:01.952386 4958 generic.go:334] "Generic (PLEG): container finished" podID="59c0f741-b149-4fc0-b2de-405c5d2bc0db" containerID="045ff9a444bbb1b6ab0a838baaeff54e5b6fe1e0549d075e539bb03c8a669d04" exitCode=0 Oct 08 06:39:01 crc kubenswrapper[4958]: I1008 06:39:01.952469 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6jqg" event={"ID":"59c0f741-b149-4fc0-b2de-405c5d2bc0db","Type":"ContainerDied","Data":"045ff9a444bbb1b6ab0a838baaeff54e5b6fe1e0549d075e539bb03c8a669d04"} Oct 08 06:39:01 crc kubenswrapper[4958]: I1008 06:39:01.954906 4958 generic.go:334] "Generic (PLEG): container finished" podID="4125d6f2-8177-45b6-9b72-ed00864ff1ea" 
containerID="94cda1109a6130e680e2b9292bdbb56e2b8fe6098ba1e0150c9c66be80c2a903" exitCode=0 Oct 08 06:39:01 crc kubenswrapper[4958]: I1008 06:39:01.955636 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhsrr" event={"ID":"4125d6f2-8177-45b6-9b72-ed00864ff1ea","Type":"ContainerDied","Data":"94cda1109a6130e680e2b9292bdbb56e2b8fe6098ba1e0150c9c66be80c2a903"} Oct 08 06:39:01 crc kubenswrapper[4958]: I1008 06:39:01.955665 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhsrr" event={"ID":"4125d6f2-8177-45b6-9b72-ed00864ff1ea","Type":"ContainerStarted","Data":"eda969cf7bc020d38b11d8cb509da278773bd0e7387d75039f04bcd28761745b"} Oct 08 06:39:02 crc kubenswrapper[4958]: I1008 06:39:02.962435 4958 generic.go:334] "Generic (PLEG): container finished" podID="4125d6f2-8177-45b6-9b72-ed00864ff1ea" containerID="da7759799d618bdb47735a5adc264bfe2152ca195990206c03f1827846a0b642" exitCode=0 Oct 08 06:39:02 crc kubenswrapper[4958]: I1008 06:39:02.962547 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhsrr" event={"ID":"4125d6f2-8177-45b6-9b72-ed00864ff1ea","Type":"ContainerDied","Data":"da7759799d618bdb47735a5adc264bfe2152ca195990206c03f1827846a0b642"} Oct 08 06:39:02 crc kubenswrapper[4958]: I1008 06:39:02.966407 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6jqg" event={"ID":"59c0f741-b149-4fc0-b2de-405c5d2bc0db","Type":"ContainerStarted","Data":"25645b943ceff0b12d310f86b0e6930efe7d41f80b12a4bf152e063691a561b9"} Oct 08 06:39:03 crc kubenswrapper[4958]: I1008 06:39:03.973298 4958 generic.go:334] "Generic (PLEG): container finished" podID="59c0f741-b149-4fc0-b2de-405c5d2bc0db" containerID="25645b943ceff0b12d310f86b0e6930efe7d41f80b12a4bf152e063691a561b9" exitCode=0 Oct 08 06:39:03 crc kubenswrapper[4958]: I1008 06:39:03.973380 4958 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-j6jqg" event={"ID":"59c0f741-b149-4fc0-b2de-405c5d2bc0db","Type":"ContainerDied","Data":"25645b943ceff0b12d310f86b0e6930efe7d41f80b12a4bf152e063691a561b9"} Oct 08 06:39:04 crc kubenswrapper[4958]: I1008 06:39:04.982904 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6jqg" event={"ID":"59c0f741-b149-4fc0-b2de-405c5d2bc0db","Type":"ContainerStarted","Data":"5ec508d6d2177502a64b18b6709b836cf097ae41a70de867a36bdc4de75cfd5e"} Oct 08 06:39:04 crc kubenswrapper[4958]: I1008 06:39:04.985815 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhsrr" event={"ID":"4125d6f2-8177-45b6-9b72-ed00864ff1ea","Type":"ContainerStarted","Data":"033eb90ef9309e868c4aecde110e2fcbdc5b0ae2f6d788262598b66c7c723ff7"} Oct 08 06:39:05 crc kubenswrapper[4958]: I1008 06:39:05.002130 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j6jqg" podStartSLOduration=2.509561143 podStartE2EDuration="5.002099378s" podCreationTimestamp="2025-10-08 06:39:00 +0000 UTC" firstStartedPulling="2025-10-08 06:39:01.953910866 +0000 UTC m=+285.083603467" lastFinishedPulling="2025-10-08 06:39:04.446449071 +0000 UTC m=+287.576141702" observedRunningTime="2025-10-08 06:39:04.99801821 +0000 UTC m=+288.127710811" watchObservedRunningTime="2025-10-08 06:39:05.002099378 +0000 UTC m=+288.131792019" Oct 08 06:39:05 crc kubenswrapper[4958]: I1008 06:39:05.019223 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mhsrr" podStartSLOduration=3.420962142 podStartE2EDuration="5.019201707s" podCreationTimestamp="2025-10-08 06:39:00 +0000 UTC" firstStartedPulling="2025-10-08 06:39:01.958090516 +0000 UTC m=+285.087783117" lastFinishedPulling="2025-10-08 06:39:03.556330081 +0000 UTC m=+286.686022682" observedRunningTime="2025-10-08 06:39:05.016258277 
+0000 UTC m=+288.145950878" watchObservedRunningTime="2025-10-08 06:39:05.019201707 +0000 UTC m=+288.148894308" Oct 08 06:39:08 crc kubenswrapper[4958]: I1008 06:39:08.112429 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xz5ng" Oct 08 06:39:08 crc kubenswrapper[4958]: I1008 06:39:08.112855 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xz5ng" Oct 08 06:39:08 crc kubenswrapper[4958]: I1008 06:39:08.171791 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xz5ng" Oct 08 06:39:08 crc kubenswrapper[4958]: I1008 06:39:08.316731 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-shghl" Oct 08 06:39:08 crc kubenswrapper[4958]: I1008 06:39:08.316809 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-shghl" Oct 08 06:39:08 crc kubenswrapper[4958]: I1008 06:39:08.365195 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-shghl" Oct 08 06:39:09 crc kubenswrapper[4958]: I1008 06:39:09.043658 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-shghl" Oct 08 06:39:09 crc kubenswrapper[4958]: I1008 06:39:09.053983 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xz5ng" Oct 08 06:39:10 crc kubenswrapper[4958]: I1008 06:39:10.498515 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j6jqg" Oct 08 06:39:10 crc kubenswrapper[4958]: I1008 06:39:10.498877 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-j6jqg" Oct 08 06:39:10 crc kubenswrapper[4958]: I1008 06:39:10.562938 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j6jqg" Oct 08 06:39:10 crc kubenswrapper[4958]: I1008 06:39:10.771324 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mhsrr" Oct 08 06:39:10 crc kubenswrapper[4958]: I1008 06:39:10.771615 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mhsrr" Oct 08 06:39:10 crc kubenswrapper[4958]: I1008 06:39:10.814036 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mhsrr" Oct 08 06:39:11 crc kubenswrapper[4958]: I1008 06:39:11.086683 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j6jqg" Oct 08 06:39:11 crc kubenswrapper[4958]: I1008 06:39:11.087465 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mhsrr" Oct 08 06:40:36 crc kubenswrapper[4958]: I1008 06:40:36.844658 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:40:36 crc kubenswrapper[4958]: I1008 06:40:36.846197 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.189715 
4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-h2x8t"] Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.191638 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.209559 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-h2x8t"] Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.337668 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-registry-certificates\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.337729 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vlmj\" (UniqueName: \"kubernetes.io/projected/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-kube-api-access-5vlmj\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.337764 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-registry-tls\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.337793 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.337839 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.337899 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-bound-sa-token\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.337967 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-trusted-ca\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.338010 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.369251 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.439466 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-registry-certificates\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.439515 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-registry-tls\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.439537 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.439559 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vlmj\" (UniqueName: 
\"kubernetes.io/projected/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-kube-api-access-5vlmj\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.439594 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.439640 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-bound-sa-token\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.440094 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-trusted-ca\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.440640 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.441985 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-trusted-ca\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.442439 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-registry-certificates\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.448628 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.448886 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-registry-tls\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.462415 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vlmj\" (UniqueName: \"kubernetes.io/projected/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-kube-api-access-5vlmj\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.462984 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce3e7c9e-81d2-434f-abf8-28c8856bf9bd-bound-sa-token\") pod \"image-registry-66df7c8f76-h2x8t\" (UID: \"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.507746 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.737722 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-h2x8t"] Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.757057 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" event={"ID":"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd","Type":"ContainerStarted","Data":"65f22327960ab7a5081b2ce5cbecdfcf52d6c57ded8c1deb55c81edc01bf97cf"} Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.844983 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:41:06 crc kubenswrapper[4958]: I1008 06:41:06.845421 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:41:07 crc kubenswrapper[4958]: I1008 06:41:07.764935 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" 
event={"ID":"ce3e7c9e-81d2-434f-abf8-28c8856bf9bd","Type":"ContainerStarted","Data":"7c83d00086eb7361d9e3a7037c3302fa1556737a77f36b9c51506d2c2846479d"} Oct 08 06:41:07 crc kubenswrapper[4958]: I1008 06:41:07.765065 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:07 crc kubenswrapper[4958]: I1008 06:41:07.799397 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" podStartSLOduration=1.799372389 podStartE2EDuration="1.799372389s" podCreationTimestamp="2025-10-08 06:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:41:07.796308917 +0000 UTC m=+410.926001518" watchObservedRunningTime="2025-10-08 06:41:07.799372389 +0000 UTC m=+410.929065020" Oct 08 06:41:26 crc kubenswrapper[4958]: I1008 06:41:26.518787 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-h2x8t" Oct 08 06:41:26 crc kubenswrapper[4958]: I1008 06:41:26.594156 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7chhl"] Oct 08 06:41:36 crc kubenswrapper[4958]: I1008 06:41:36.845344 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:41:36 crc kubenswrapper[4958]: I1008 06:41:36.845994 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:41:36 crc kubenswrapper[4958]: I1008 06:41:36.846058 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:41:36 crc kubenswrapper[4958]: I1008 06:41:36.846790 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d9a8d1fa51eedb96724c2de1c07585b6ffbd9798530ee0eadb4bd6aeefdec0f8"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 06:41:36 crc kubenswrapper[4958]: I1008 06:41:36.846919 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://d9a8d1fa51eedb96724c2de1c07585b6ffbd9798530ee0eadb4bd6aeefdec0f8" gracePeriod=600 Oct 08 06:41:37 crc kubenswrapper[4958]: I1008 06:41:37.961994 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"d9a8d1fa51eedb96724c2de1c07585b6ffbd9798530ee0eadb4bd6aeefdec0f8"} Oct 08 06:41:37 crc kubenswrapper[4958]: I1008 06:41:37.962773 4958 scope.go:117] "RemoveContainer" containerID="54eadf42801afa94a6f820497392dc3d5156903cd3051a428350c72d2c8490f0" Oct 08 06:41:37 crc kubenswrapper[4958]: I1008 06:41:37.963416 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="d9a8d1fa51eedb96724c2de1c07585b6ffbd9798530ee0eadb4bd6aeefdec0f8" exitCode=0 Oct 08 06:41:37 crc kubenswrapper[4958]: I1008 06:41:37.963471 4958 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"c3b06042f5aa4fe24bf7e2c05c94bb353cf787b352b0bf1e1e243c939f67b05c"} Oct 08 06:41:51 crc kubenswrapper[4958]: I1008 06:41:51.654906 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" podUID="d8a63239-acb7-4fc9-8f62-ffa55f261901" containerName="registry" containerID="cri-o://49e85f150d81709f8671250d6585125f4bbb713c63c7f2d2312cd1f50af81a55" gracePeriod=30 Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.056608 4958 generic.go:334] "Generic (PLEG): container finished" podID="d8a63239-acb7-4fc9-8f62-ffa55f261901" containerID="49e85f150d81709f8671250d6585125f4bbb713c63c7f2d2312cd1f50af81a55" exitCode=0 Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.056743 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" event={"ID":"d8a63239-acb7-4fc9-8f62-ffa55f261901","Type":"ContainerDied","Data":"49e85f150d81709f8671250d6585125f4bbb713c63c7f2d2312cd1f50af81a55"} Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.057017 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" event={"ID":"d8a63239-acb7-4fc9-8f62-ffa55f261901","Type":"ContainerDied","Data":"561a232fd2e94cc2a8c994f5bef0af39ebd708e49d09b43f54b37f418e168c87"} Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.057046 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="561a232fd2e94cc2a8c994f5bef0af39ebd708e49d09b43f54b37f418e168c87" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.095719 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.230886 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-registry-tls\") pod \"d8a63239-acb7-4fc9-8f62-ffa55f261901\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.230934 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8a63239-acb7-4fc9-8f62-ffa55f261901-trusted-ca\") pod \"d8a63239-acb7-4fc9-8f62-ffa55f261901\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.231000 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d8a63239-acb7-4fc9-8f62-ffa55f261901-registry-certificates\") pod \"d8a63239-acb7-4fc9-8f62-ffa55f261901\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.231041 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-527rf\" (UniqueName: \"kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-kube-api-access-527rf\") pod \"d8a63239-acb7-4fc9-8f62-ffa55f261901\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.231065 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d8a63239-acb7-4fc9-8f62-ffa55f261901-ca-trust-extracted\") pod \"d8a63239-acb7-4fc9-8f62-ffa55f261901\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.231211 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d8a63239-acb7-4fc9-8f62-ffa55f261901\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.231274 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-bound-sa-token\") pod \"d8a63239-acb7-4fc9-8f62-ffa55f261901\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.231321 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d8a63239-acb7-4fc9-8f62-ffa55f261901-installation-pull-secrets\") pod \"d8a63239-acb7-4fc9-8f62-ffa55f261901\" (UID: \"d8a63239-acb7-4fc9-8f62-ffa55f261901\") " Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.231938 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8a63239-acb7-4fc9-8f62-ffa55f261901-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d8a63239-acb7-4fc9-8f62-ffa55f261901" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.232772 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8a63239-acb7-4fc9-8f62-ffa55f261901-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d8a63239-acb7-4fc9-8f62-ffa55f261901" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.237814 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d8a63239-acb7-4fc9-8f62-ffa55f261901" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.238273 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-kube-api-access-527rf" (OuterVolumeSpecName: "kube-api-access-527rf") pod "d8a63239-acb7-4fc9-8f62-ffa55f261901" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901"). InnerVolumeSpecName "kube-api-access-527rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.238490 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8a63239-acb7-4fc9-8f62-ffa55f261901-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d8a63239-acb7-4fc9-8f62-ffa55f261901" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.240486 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d8a63239-acb7-4fc9-8f62-ffa55f261901" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.247291 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d8a63239-acb7-4fc9-8f62-ffa55f261901" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.252723 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8a63239-acb7-4fc9-8f62-ffa55f261901-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d8a63239-acb7-4fc9-8f62-ffa55f261901" (UID: "d8a63239-acb7-4fc9-8f62-ffa55f261901"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.332928 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-527rf\" (UniqueName: \"kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-kube-api-access-527rf\") on node \"crc\" DevicePath \"\"" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.333044 4958 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d8a63239-acb7-4fc9-8f62-ffa55f261901-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.333064 4958 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.333082 4958 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/d8a63239-acb7-4fc9-8f62-ffa55f261901-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.333100 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8a63239-acb7-4fc9-8f62-ffa55f261901-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.333117 4958 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d8a63239-acb7-4fc9-8f62-ffa55f261901-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 08 06:41:52 crc kubenswrapper[4958]: I1008 06:41:52.333134 4958 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d8a63239-acb7-4fc9-8f62-ffa55f261901-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 08 06:41:53 crc kubenswrapper[4958]: I1008 06:41:53.068707 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7chhl" Oct 08 06:41:53 crc kubenswrapper[4958]: I1008 06:41:53.112067 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7chhl"] Oct 08 06:41:53 crc kubenswrapper[4958]: I1008 06:41:53.115285 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7chhl"] Oct 08 06:41:53 crc kubenswrapper[4958]: I1008 06:41:53.589586 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8a63239-acb7-4fc9-8f62-ffa55f261901" path="/var/lib/kubelet/pods/d8a63239-acb7-4fc9-8f62-ffa55f261901/volumes" Oct 08 06:43:17 crc kubenswrapper[4958]: I1008 06:43:17.743493 4958 scope.go:117] "RemoveContainer" containerID="49e85f150d81709f8671250d6585125f4bbb713c63c7f2d2312cd1f50af81a55" Oct 08 06:44:06 crc kubenswrapper[4958]: I1008 06:44:06.845178 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:44:06 crc kubenswrapper[4958]: I1008 06:44:06.845930 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:44:36 crc kubenswrapper[4958]: I1008 06:44:36.845071 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:44:36 
crc kubenswrapper[4958]: I1008 06:44:36.845747 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.149868 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9"] Oct 08 06:45:00 crc kubenswrapper[4958]: E1008 06:45:00.150778 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a63239-acb7-4fc9-8f62-ffa55f261901" containerName="registry" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.150800 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a63239-acb7-4fc9-8f62-ffa55f261901" containerName="registry" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.151020 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8a63239-acb7-4fc9-8f62-ffa55f261901" containerName="registry" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.151599 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.154404 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.154874 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.168156 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9"] Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.279161 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46c1e2e3-199c-4b0c-be9a-f47da27865fd-secret-volume\") pod \"collect-profiles-29331765-jnlr9\" (UID: \"46c1e2e3-199c-4b0c-be9a-f47da27865fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.279269 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46c1e2e3-199c-4b0c-be9a-f47da27865fd-config-volume\") pod \"collect-profiles-29331765-jnlr9\" (UID: \"46c1e2e3-199c-4b0c-be9a-f47da27865fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.279320 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llbm5\" (UniqueName: \"kubernetes.io/projected/46c1e2e3-199c-4b0c-be9a-f47da27865fd-kube-api-access-llbm5\") pod \"collect-profiles-29331765-jnlr9\" (UID: \"46c1e2e3-199c-4b0c-be9a-f47da27865fd\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.380910 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46c1e2e3-199c-4b0c-be9a-f47da27865fd-config-volume\") pod \"collect-profiles-29331765-jnlr9\" (UID: \"46c1e2e3-199c-4b0c-be9a-f47da27865fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.381018 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llbm5\" (UniqueName: \"kubernetes.io/projected/46c1e2e3-199c-4b0c-be9a-f47da27865fd-kube-api-access-llbm5\") pod \"collect-profiles-29331765-jnlr9\" (UID: \"46c1e2e3-199c-4b0c-be9a-f47da27865fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.381137 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46c1e2e3-199c-4b0c-be9a-f47da27865fd-secret-volume\") pod \"collect-profiles-29331765-jnlr9\" (UID: \"46c1e2e3-199c-4b0c-be9a-f47da27865fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.382496 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46c1e2e3-199c-4b0c-be9a-f47da27865fd-config-volume\") pod \"collect-profiles-29331765-jnlr9\" (UID: \"46c1e2e3-199c-4b0c-be9a-f47da27865fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.390884 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/46c1e2e3-199c-4b0c-be9a-f47da27865fd-secret-volume\") pod \"collect-profiles-29331765-jnlr9\" (UID: \"46c1e2e3-199c-4b0c-be9a-f47da27865fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.420220 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llbm5\" (UniqueName: \"kubernetes.io/projected/46c1e2e3-199c-4b0c-be9a-f47da27865fd-kube-api-access-llbm5\") pod \"collect-profiles-29331765-jnlr9\" (UID: \"46c1e2e3-199c-4b0c-be9a-f47da27865fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.493569 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" Oct 08 06:45:00 crc kubenswrapper[4958]: I1008 06:45:00.751794 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9"] Oct 08 06:45:01 crc kubenswrapper[4958]: I1008 06:45:01.270779 4958 generic.go:334] "Generic (PLEG): container finished" podID="46c1e2e3-199c-4b0c-be9a-f47da27865fd" containerID="63354289d7bbb216080433565b50f75d0c1634ab8787014cbc0467f69afad607" exitCode=0 Oct 08 06:45:01 crc kubenswrapper[4958]: I1008 06:45:01.270837 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" event={"ID":"46c1e2e3-199c-4b0c-be9a-f47da27865fd","Type":"ContainerDied","Data":"63354289d7bbb216080433565b50f75d0c1634ab8787014cbc0467f69afad607"} Oct 08 06:45:01 crc kubenswrapper[4958]: I1008 06:45:01.270876 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" 
event={"ID":"46c1e2e3-199c-4b0c-be9a-f47da27865fd","Type":"ContainerStarted","Data":"a978b44d51cb1c8537bda5a5ae7daa7249d438447a52afdc2a7a1f34eb240e3c"} Oct 08 06:45:02 crc kubenswrapper[4958]: I1008 06:45:02.573725 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" Oct 08 06:45:02 crc kubenswrapper[4958]: I1008 06:45:02.708928 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llbm5\" (UniqueName: \"kubernetes.io/projected/46c1e2e3-199c-4b0c-be9a-f47da27865fd-kube-api-access-llbm5\") pod \"46c1e2e3-199c-4b0c-be9a-f47da27865fd\" (UID: \"46c1e2e3-199c-4b0c-be9a-f47da27865fd\") " Oct 08 06:45:02 crc kubenswrapper[4958]: I1008 06:45:02.709147 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46c1e2e3-199c-4b0c-be9a-f47da27865fd-config-volume\") pod \"46c1e2e3-199c-4b0c-be9a-f47da27865fd\" (UID: \"46c1e2e3-199c-4b0c-be9a-f47da27865fd\") " Oct 08 06:45:02 crc kubenswrapper[4958]: I1008 06:45:02.709224 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46c1e2e3-199c-4b0c-be9a-f47da27865fd-secret-volume\") pod \"46c1e2e3-199c-4b0c-be9a-f47da27865fd\" (UID: \"46c1e2e3-199c-4b0c-be9a-f47da27865fd\") " Oct 08 06:45:02 crc kubenswrapper[4958]: I1008 06:45:02.710386 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46c1e2e3-199c-4b0c-be9a-f47da27865fd-config-volume" (OuterVolumeSpecName: "config-volume") pod "46c1e2e3-199c-4b0c-be9a-f47da27865fd" (UID: "46c1e2e3-199c-4b0c-be9a-f47da27865fd"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:45:02 crc kubenswrapper[4958]: I1008 06:45:02.719106 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46c1e2e3-199c-4b0c-be9a-f47da27865fd-kube-api-access-llbm5" (OuterVolumeSpecName: "kube-api-access-llbm5") pod "46c1e2e3-199c-4b0c-be9a-f47da27865fd" (UID: "46c1e2e3-199c-4b0c-be9a-f47da27865fd"). InnerVolumeSpecName "kube-api-access-llbm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:45:02 crc kubenswrapper[4958]: I1008 06:45:02.719123 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46c1e2e3-199c-4b0c-be9a-f47da27865fd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "46c1e2e3-199c-4b0c-be9a-f47da27865fd" (UID: "46c1e2e3-199c-4b0c-be9a-f47da27865fd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:45:02 crc kubenswrapper[4958]: I1008 06:45:02.811094 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llbm5\" (UniqueName: \"kubernetes.io/projected/46c1e2e3-199c-4b0c-be9a-f47da27865fd-kube-api-access-llbm5\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:02 crc kubenswrapper[4958]: I1008 06:45:02.811150 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46c1e2e3-199c-4b0c-be9a-f47da27865fd-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:02 crc kubenswrapper[4958]: I1008 06:45:02.811171 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46c1e2e3-199c-4b0c-be9a-f47da27865fd-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:03 crc kubenswrapper[4958]: I1008 06:45:03.284938 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" 
event={"ID":"46c1e2e3-199c-4b0c-be9a-f47da27865fd","Type":"ContainerDied","Data":"a978b44d51cb1c8537bda5a5ae7daa7249d438447a52afdc2a7a1f34eb240e3c"} Oct 08 06:45:03 crc kubenswrapper[4958]: I1008 06:45:03.285000 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a978b44d51cb1c8537bda5a5ae7daa7249d438447a52afdc2a7a1f34eb240e3c" Oct 08 06:45:03 crc kubenswrapper[4958]: I1008 06:45:03.285034 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9" Oct 08 06:45:06 crc kubenswrapper[4958]: I1008 06:45:06.845436 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:45:06 crc kubenswrapper[4958]: I1008 06:45:06.845770 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:45:06 crc kubenswrapper[4958]: I1008 06:45:06.845826 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:45:06 crc kubenswrapper[4958]: I1008 06:45:06.846572 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3b06042f5aa4fe24bf7e2c05c94bb353cf787b352b0bf1e1e243c939f67b05c"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 06:45:06 crc 
kubenswrapper[4958]: I1008 06:45:06.846698 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://c3b06042f5aa4fe24bf7e2c05c94bb353cf787b352b0bf1e1e243c939f67b05c" gracePeriod=600 Oct 08 06:45:07 crc kubenswrapper[4958]: I1008 06:45:07.314897 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="c3b06042f5aa4fe24bf7e2c05c94bb353cf787b352b0bf1e1e243c939f67b05c" exitCode=0 Oct 08 06:45:07 crc kubenswrapper[4958]: I1008 06:45:07.314994 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"c3b06042f5aa4fe24bf7e2c05c94bb353cf787b352b0bf1e1e243c939f67b05c"} Oct 08 06:45:07 crc kubenswrapper[4958]: I1008 06:45:07.315320 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"c231c00392b517e6cca660a4aec9d278a8fcf4be120c9a0359f28467806c28b5"} Oct 08 06:45:07 crc kubenswrapper[4958]: I1008 06:45:07.315350 4958 scope.go:117] "RemoveContainer" containerID="d9a8d1fa51eedb96724c2de1c07585b6ffbd9798530ee0eadb4bd6aeefdec0f8" Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.391843 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-89qtf"] Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.408445 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovn-controller" 
containerID="cri-o://938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007" gracePeriod=30 Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.409361 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="sbdb" containerID="cri-o://b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba" gracePeriod=30 Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.409435 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a" gracePeriod=30 Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.409494 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="kube-rbac-proxy-node" containerID="cri-o://2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947" gracePeriod=30 Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.409614 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="nbdb" containerID="cri-o://a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64" gracePeriod=30 Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.409621 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovn-acl-logging" containerID="cri-o://2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3" gracePeriod=30 Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.409725 
4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="northd" containerID="cri-o://1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714" gracePeriod=30 Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.471162 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovnkube-controller" containerID="cri-o://573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf" gracePeriod=30 Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.569998 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hfzs9_0718b244-4835-4551-9013-6b3741845bb4/kube-multus/2.log" Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.570426 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hfzs9_0718b244-4835-4551-9013-6b3741845bb4/kube-multus/1.log" Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.570455 4958 generic.go:334] "Generic (PLEG): container finished" podID="0718b244-4835-4551-9013-6b3741845bb4" containerID="8fab763b267bd2df242ddfebc49b15e8f24cc97f493f46bc5c8d9414631ddff5" exitCode=2 Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.570498 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hfzs9" event={"ID":"0718b244-4835-4551-9013-6b3741845bb4","Type":"ContainerDied","Data":"8fab763b267bd2df242ddfebc49b15e8f24cc97f493f46bc5c8d9414631ddff5"} Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.570529 4958 scope.go:117] "RemoveContainer" containerID="d471e3c183709b97571c7a6c1dc430de5a6900f018e7c231cdde2ea6699a9580" Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.570938 4958 scope.go:117] "RemoveContainer" 
containerID="8fab763b267bd2df242ddfebc49b15e8f24cc97f493f46bc5c8d9414631ddff5" Oct 08 06:45:39 crc kubenswrapper[4958]: E1008 06:45:39.571244 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-hfzs9_openshift-multus(0718b244-4835-4551-9013-6b3741845bb4)\"" pod="openshift-multus/multus-hfzs9" podUID="0718b244-4835-4551-9013-6b3741845bb4" Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.577734 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/3.log" Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.587338 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovn-acl-logging/0.log" Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.589097 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovn-controller/0.log" Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.598750 4958 generic.go:334] "Generic (PLEG): container finished" podID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerID="d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a" exitCode=0 Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.598777 4958 generic.go:334] "Generic (PLEG): container finished" podID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerID="2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947" exitCode=0 Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.598853 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" 
event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerDied","Data":"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a"} Oct 08 06:45:39 crc kubenswrapper[4958]: I1008 06:45:39.598872 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerDied","Data":"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947"} Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.185208 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/3.log" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.187797 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovn-acl-logging/0.log" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.188493 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovn-controller/0.log" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.189101 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.253836 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-68469"] Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.254280 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovn-acl-logging" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.254317 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovn-acl-logging" Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.254340 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="nbdb" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.254355 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="nbdb" Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.254376 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="kube-rbac-proxy-ovn-metrics" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.254394 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="kube-rbac-proxy-ovn-metrics" Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.254412 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="kubecfg-setup" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.254427 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="kubecfg-setup" Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.254452 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" 
containerName="northd" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.254467 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="northd" Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.254487 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="sbdb" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.254504 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="sbdb" Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.254529 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovnkube-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.254544 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovnkube-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.254563 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovnkube-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.254579 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovnkube-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.254601 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovnkube-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.254617 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovnkube-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.254633 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c1e2e3-199c-4b0c-be9a-f47da27865fd" containerName="collect-profiles" Oct 
08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.254648 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c1e2e3-199c-4b0c-be9a-f47da27865fd" containerName="collect-profiles" Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.254673 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="kube-rbac-proxy-node" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.254689 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="kube-rbac-proxy-node" Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.254711 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovn-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.254726 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovn-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.254749 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovnkube-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.254764 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovnkube-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.254980 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovnkube-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.254997 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovnkube-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.255015 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="nbdb" 
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.255031 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="kube-rbac-proxy-ovn-metrics" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.255049 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="sbdb" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.255064 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="46c1e2e3-199c-4b0c-be9a-f47da27865fd" containerName="collect-profiles" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.255076 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="kube-rbac-proxy-node" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.255093 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovn-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.255107 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovn-acl-logging" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.255125 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="northd" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.255146 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovnkube-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.255460 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovnkube-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.255474 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" 
containerName="ovnkube-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.255627 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovnkube-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.255655 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerName="ovnkube-controller" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.258878 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.314071 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-var-lib-openvswitch\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.314149 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.314431 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovnkube-config\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.314857 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.315067 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-cni-netd\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.315149 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.315368 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-systemd-units\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.315434 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-env-overrides\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.315499 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-etc-openvswitch\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.315561 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-systemd\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.315634 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-kubelet\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.315695 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-slash\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.315754 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-log-socket\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.315816 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.315881 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-cni-bin\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.315943 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovnkube-script-lib\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.316026 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29trw\" (UniqueName: \"kubernetes.io/projected/272f74a5-c381-4909-b8a9-da60cbd17ddf-kube-api-access-29trw\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: 
\"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.316099 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-ovn\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.316845 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-run-netns\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.316116 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.317072 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-node-log" (OuterVolumeSpecName: "node-log") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.316146 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-slash" (OuterVolumeSpecName: "host-slash") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.316153 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.316221 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-log-socket" (OuterVolumeSpecName: "log-socket") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.316270 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.316293 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.316347 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.316571 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.316725 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.316765 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.317036 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.316940 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-node-log\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.318312 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovn-node-metrics-cert\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.318377 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-openvswitch\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.318447 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-run-ovn-kubernetes\") pod \"272f74a5-c381-4909-b8a9-da60cbd17ddf\" (UID: \"272f74a5-c381-4909-b8a9-da60cbd17ddf\") " Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.318584 
4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/058d0025-36f0-4d59-948f-f1fcdcc20ec1-ovnkube-config\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.318654 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-kubelet\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.318746 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-log-socket\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.318820 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-cni-netd\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.319196 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-run-netns\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 
06:45:40.319344 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/058d0025-36f0-4d59-948f-f1fcdcc20ec1-env-overrides\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.319420 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb8fd\" (UniqueName: \"kubernetes.io/projected/058d0025-36f0-4d59-948f-f1fcdcc20ec1-kube-api-access-zb8fd\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.319491 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-run-systemd\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.319553 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-run-openvswitch\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.319623 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.318413 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.319023 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.319736 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-node-log\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.319884 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-cni-bin\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.319961 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-var-lib-openvswitch\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.320029 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-systemd-units\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.320111 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/058d0025-36f0-4d59-948f-f1fcdcc20ec1-ovnkube-script-lib\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.323399 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/272f74a5-c381-4909-b8a9-da60cbd17ddf-kube-api-access-29trw" (OuterVolumeSpecName: "kube-api-access-29trw") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "kube-api-access-29trw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326152 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-slash\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.325335 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326196 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-run-ovn\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326323 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/058d0025-36f0-4d59-948f-f1fcdcc20ec1-ovn-node-metrics-cert\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326405 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-run-ovn-kubernetes\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326455 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-etc-openvswitch\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326564 4958 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326580 4958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326592 4958 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326606 4958 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326618 4958 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc 
kubenswrapper[4958]: I1008 06:45:40.326629 4958 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326643 4958 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326655 4958 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-slash\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326667 4958 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-log-socket\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326680 4958 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326693 4958 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326705 4958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326716 4958 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-29trw\" (UniqueName: \"kubernetes.io/projected/272f74a5-c381-4909-b8a9-da60cbd17ddf-kube-api-access-29trw\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326728 4958 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326738 4958 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-run-netns\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326749 4958 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-node-log\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326759 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/272f74a5-c381-4909-b8a9-da60cbd17ddf-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326771 4958 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.326782 4958 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.340325 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "272f74a5-c381-4909-b8a9-da60cbd17ddf" (UID: "272f74a5-c381-4909-b8a9-da60cbd17ddf"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.427745 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/058d0025-36f0-4d59-948f-f1fcdcc20ec1-ovnkube-config\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.427823 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-kubelet\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.427855 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-log-socket\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.427891 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-kubelet\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.427910 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-cni-netd\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.427996 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-run-netns\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428050 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-cni-netd\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428127 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-run-netns\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428055 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/058d0025-36f0-4d59-948f-f1fcdcc20ec1-env-overrides\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428179 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb8fd\" (UniqueName: \"kubernetes.io/projected/058d0025-36f0-4d59-948f-f1fcdcc20ec1-kube-api-access-zb8fd\") pod 
\"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428207 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-run-systemd\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428225 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-run-openvswitch\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428251 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428274 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-node-log\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428305 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-cni-bin\") pod \"ovnkube-node-68469\" (UID: 
\"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428327 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-systemd-units\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428346 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-var-lib-openvswitch\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428377 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/058d0025-36f0-4d59-948f-f1fcdcc20ec1-ovnkube-script-lib\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428407 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-slash\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428425 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-run-ovn\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428447 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/058d0025-36f0-4d59-948f-f1fcdcc20ec1-ovn-node-metrics-cert\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428471 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-run-ovn-kubernetes\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428494 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-etc-openvswitch\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428540 4958 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/272f74a5-c381-4909-b8a9-da60cbd17ddf-run-systemd\") on node \"crc\" DevicePath \"\"" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428565 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/058d0025-36f0-4d59-948f-f1fcdcc20ec1-ovnkube-config\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428590 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-var-lib-openvswitch\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428570 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-etc-openvswitch\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428610 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-systemd-units\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428633 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-slash\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428634 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/058d0025-36f0-4d59-948f-f1fcdcc20ec1-env-overrides\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428662 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-run-ovn\") pod 
\"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428682 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-run-systemd\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428697 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-run-openvswitch\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428723 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-run-ovn-kubernetes\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428764 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-node-log\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428770 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-68469\" (UID: 
\"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428807 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-host-cni-bin\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.428861 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/058d0025-36f0-4d59-948f-f1fcdcc20ec1-log-socket\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.429085 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/058d0025-36f0-4d59-948f-f1fcdcc20ec1-ovnkube-script-lib\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.434650 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/058d0025-36f0-4d59-948f-f1fcdcc20ec1-ovn-node-metrics-cert\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.448328 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb8fd\" (UniqueName: \"kubernetes.io/projected/058d0025-36f0-4d59-948f-f1fcdcc20ec1-kube-api-access-zb8fd\") pod \"ovnkube-node-68469\" (UID: \"058d0025-36f0-4d59-948f-f1fcdcc20ec1\") " pod="openshift-ovn-kubernetes/ovnkube-node-68469"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.592197 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-68469"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.607329 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovnkube-controller/3.log"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.611210 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovn-acl-logging/0.log"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.611801 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89qtf_272f74a5-c381-4909-b8a9-da60cbd17ddf/ovn-controller/0.log"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612628 4958 generic.go:334] "Generic (PLEG): container finished" podID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerID="573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf" exitCode=0
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612672 4958 generic.go:334] "Generic (PLEG): container finished" podID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerID="b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba" exitCode=0
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612686 4958 generic.go:334] "Generic (PLEG): container finished" podID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerID="a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64" exitCode=0
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612699 4958 generic.go:334] "Generic (PLEG): container finished" podID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerID="1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714" exitCode=0
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612710 4958 generic.go:334] "Generic (PLEG): container finished" podID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerID="2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3" exitCode=143
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612721 4958 generic.go:334] "Generic (PLEG): container finished" podID="272f74a5-c381-4909-b8a9-da60cbd17ddf" containerID="938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007" exitCode=143
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612783 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerDied","Data":"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612822 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerDied","Data":"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612842 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerDied","Data":"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612862 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerDied","Data":"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612883 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerDied","Data":"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612901 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerDied","Data":"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612920 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612937 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612971 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612982 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.612992 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613002 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613012 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613021 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613031 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613046 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf" event={"ID":"272f74a5-c381-4909-b8a9-da60cbd17ddf","Type":"ContainerDied","Data":"4f0203b1da4fcd68233969c68fd9f1006afce0f8b9153a0df18ef6dc6956d7b0"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613065 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613076 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613087 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613097 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613108 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613118 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613127 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613137 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613146 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613156 4958 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3"}
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613180 4958 scope.go:117] "RemoveContainer" containerID="573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.613342 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-89qtf"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.617995 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hfzs9_0718b244-4835-4551-9013-6b3741845bb4/kube-multus/2.log"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.683404 4958 scope.go:117] "RemoveContainer" containerID="efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.705349 4958 scope.go:117] "RemoveContainer" containerID="b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.706751 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-89qtf"]
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.713170 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-89qtf"]
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.738667 4958 scope.go:117] "RemoveContainer" containerID="a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.754272 4958 scope.go:117] "RemoveContainer" containerID="1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.788731 4958 scope.go:117] "RemoveContainer" containerID="d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.827073 4958 scope.go:117] "RemoveContainer" containerID="2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.847174 4958 scope.go:117] "RemoveContainer" containerID="2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.864986 4958 scope.go:117] "RemoveContainer" containerID="938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.889909 4958 scope.go:117] "RemoveContainer" containerID="9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.908685 4958 scope.go:117] "RemoveContainer" containerID="573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf"
Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.909291 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf\": container with ID starting with 573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf not found: ID does not exist" containerID="573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.909458 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf"} err="failed to get container status \"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf\": rpc error: code = NotFound desc = could not find container \"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf\": container with ID starting with 573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.909486 4958 scope.go:117] "RemoveContainer" containerID="efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73"
Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.910398 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\": container with ID starting with efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73 not found: ID does not exist" containerID="efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.910469 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73"} err="failed to get container status \"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\": rpc error: code = NotFound desc = could not find container \"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\": container with ID starting with efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.910500 4958 scope.go:117] "RemoveContainer" containerID="b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba"
Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.910883 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\": container with ID starting with b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba not found: ID does not exist" containerID="b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.910976 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba"} err="failed to get container status \"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\": rpc error: code = NotFound desc = could not find container \"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\": container with ID starting with b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.911020 4958 scope.go:117] "RemoveContainer" containerID="a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64"
Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.911457 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\": container with ID starting with a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64 not found: ID does not exist" containerID="a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.911501 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64"} err="failed to get container status \"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\": rpc error: code = NotFound desc = could not find container \"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\": container with ID starting with a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.911528 4958 scope.go:117] "RemoveContainer" containerID="1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714"
Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.912032 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\": container with ID starting with 1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714 not found: ID does not exist" containerID="1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.912091 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714"} err="failed to get container status \"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\": rpc error: code = NotFound desc = could not find container \"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\": container with ID starting with 1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.912112 4958 scope.go:117] "RemoveContainer" containerID="d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a"
Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.912462 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\": container with ID starting with d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a not found: ID does not exist" containerID="d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.912538 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a"} err="failed to get container status \"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\": rpc error: code = NotFound desc = could not find container \"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\": container with ID starting with d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.912565 4958 scope.go:117] "RemoveContainer" containerID="2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947"
Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.912923 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\": container with ID starting with 2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947 not found: ID does not exist" containerID="2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.913005 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947"} err="failed to get container status \"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\": rpc error: code = NotFound desc = could not find container \"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\": container with ID starting with 2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.913029 4958 scope.go:117] "RemoveContainer" containerID="2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3"
Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.913515 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\": container with ID starting with 2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3 not found: ID does not exist" containerID="2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.913561 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3"} err="failed to get container status \"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\": rpc error: code = NotFound desc = could not find container \"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\": container with ID starting with 2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.913589 4958 scope.go:117] "RemoveContainer" containerID="938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007"
Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.914028 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\": container with ID starting with 938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007 not found: ID does not exist" containerID="938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.914069 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007"} err="failed to get container status \"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\": rpc error: code = NotFound desc = could not find container \"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\": container with ID starting with 938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.914095 4958 scope.go:117] "RemoveContainer" containerID="9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3"
Oct 08 06:45:40 crc kubenswrapper[4958]: E1008 06:45:40.914498 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\": container with ID starting with 9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3 not found: ID does not exist" containerID="9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.914563 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3"} err="failed to get container status \"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\": rpc error: code = NotFound desc = could not find container \"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\": container with ID starting with 9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.914615 4958 scope.go:117] "RemoveContainer" containerID="573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.915165 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf"} err="failed to get container status \"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf\": rpc error: code = NotFound desc = could not find container \"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf\": container with ID starting with 573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.915226 4958 scope.go:117] "RemoveContainer" containerID="efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.915725 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73"} err="failed to get container status \"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\": rpc error: code = NotFound desc = could not find container \"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\": container with ID starting with efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.915752 4958 scope.go:117] "RemoveContainer" containerID="b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.916192 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba"} err="failed to get container status \"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\": rpc error: code = NotFound desc = could not find container \"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\": container with ID starting with b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.916237 4958 scope.go:117] "RemoveContainer" containerID="a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.916623 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64"} err="failed to get container status \"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\": rpc error: code = NotFound desc = could not find container \"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\": container with ID starting with a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.916663 4958 scope.go:117] "RemoveContainer" containerID="1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.917111 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714"} err="failed to get container status \"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\": rpc error: code = NotFound desc = could not find container \"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\": container with ID starting with 1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.917163 4958 scope.go:117] "RemoveContainer" containerID="d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.917937 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a"} err="failed to get container status \"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\": rpc error: code = NotFound desc = could not find container \"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\": container with ID starting with d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.918013 4958 scope.go:117] "RemoveContainer" containerID="2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.918431 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947"} err="failed to get container status \"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\": rpc error: code = NotFound desc = could not find container \"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\": container with ID starting with 2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.918461 4958 scope.go:117] "RemoveContainer" containerID="2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.918889 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3"} err="failed to get container status \"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\": rpc error: code = NotFound desc = could not find container \"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\": container with ID starting with 2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.918935 4958 scope.go:117] "RemoveContainer" containerID="938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.919338 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007"} err="failed to get container status \"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\": rpc error: code = NotFound desc = could not find container \"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\": container with ID starting with 938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.919383 4958 scope.go:117] "RemoveContainer" containerID="9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.919741 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3"} err="failed to get container status \"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\": rpc error: code = NotFound desc = could not find container \"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\": container with ID starting with 9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.919795 4958 scope.go:117] "RemoveContainer" containerID="573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.920265 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf"} err="failed to get container status \"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf\": rpc error: code = NotFound desc = could not find container \"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf\": container with ID starting with 573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.920322 4958 scope.go:117] "RemoveContainer" containerID="efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.921697 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73"} err="failed to get container status \"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\": rpc error: code = NotFound desc = could not find container \"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\": container with ID starting with efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.921740 4958 scope.go:117] "RemoveContainer" containerID="b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.922183 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba"} err="failed to get container status \"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\": rpc error: code = NotFound desc = could not find container \"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\": container with ID starting with b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.922236 4958 scope.go:117] "RemoveContainer" containerID="a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.922756 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64"} err="failed to get container status \"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\": rpc error: code = NotFound desc = could not find container \"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\": container with ID starting with a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.922801 4958 scope.go:117] "RemoveContainer" containerID="1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.923310 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714"} err="failed to get container status \"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\": rpc error: code = NotFound desc = could not find container \"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\": container with ID starting with 1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.923349 4958 scope.go:117] "RemoveContainer" containerID="d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.924024 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a"} err="failed to get container status \"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\": rpc error: code = NotFound desc = could not find container \"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\": container with ID starting with d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.924060 4958 scope.go:117] "RemoveContainer" containerID="2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.924580 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947"} err="failed to get container status \"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\": rpc error: code = NotFound desc = could not find container \"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\": container with ID starting with 2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.924615 4958 scope.go:117] "RemoveContainer" containerID="2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.925231 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3"} err="failed to get container status \"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\": rpc error: code = NotFound desc = could not find container \"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\": container with ID starting with 2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.925269 4958 scope.go:117] "RemoveContainer" containerID="938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.925687 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007"} err="failed to get container status \"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\": rpc error: code = NotFound desc = could not find container \"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\": container with ID starting with 938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007 not found: ID does not exist"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.925735 4958 scope.go:117] "RemoveContainer" containerID="9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3"
Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.926310 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3"} err="failed to get container status \"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\": rpc error: code = NotFound desc = could
not find container \"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\": container with ID starting with 9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3 not found: ID does not exist" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.926340 4958 scope.go:117] "RemoveContainer" containerID="573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.926726 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf"} err="failed to get container status \"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf\": rpc error: code = NotFound desc = could not find container \"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf\": container with ID starting with 573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf not found: ID does not exist" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.926768 4958 scope.go:117] "RemoveContainer" containerID="efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.927252 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73"} err="failed to get container status \"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\": rpc error: code = NotFound desc = could not find container \"efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73\": container with ID starting with efd26288978e9239b48ea24a3bc0d907af9b173635a963edbb2cb49f38647c73 not found: ID does not exist" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.927315 4958 scope.go:117] "RemoveContainer" containerID="b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 
06:45:40.927796 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba"} err="failed to get container status \"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\": rpc error: code = NotFound desc = could not find container \"b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba\": container with ID starting with b81c4cc3d01964bef61463fa6a7953dc2cebdc5d8a5b0805ee1005ee6d974aba not found: ID does not exist" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.927851 4958 scope.go:117] "RemoveContainer" containerID="a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.928254 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64"} err="failed to get container status \"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\": rpc error: code = NotFound desc = could not find container \"a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64\": container with ID starting with a4bc322c1f0e649859dae7819cc765bd8aa55274523f44a5c8b4ee7d5b0cde64 not found: ID does not exist" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.928287 4958 scope.go:117] "RemoveContainer" containerID="1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.928589 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714"} err="failed to get container status \"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\": rpc error: code = NotFound desc = could not find container \"1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714\": container with ID starting with 
1c289c3d5970977a704612cfdafd48b7332e4a9dd227b81289ccd63c36391714 not found: ID does not exist" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.928618 4958 scope.go:117] "RemoveContainer" containerID="d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.928869 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a"} err="failed to get container status \"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\": rpc error: code = NotFound desc = could not find container \"d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a\": container with ID starting with d9b351651fc162823d9b85b02d2ae63c959fe88c19af759cf2a11e417197295a not found: ID does not exist" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.928915 4958 scope.go:117] "RemoveContainer" containerID="2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.929166 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947"} err="failed to get container status \"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\": rpc error: code = NotFound desc = could not find container \"2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947\": container with ID starting with 2f5bfe0e2580ff493b0a684488197046e6d314b84f9f732124df172ad3b86947 not found: ID does not exist" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.929199 4958 scope.go:117] "RemoveContainer" containerID="2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.929418 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3"} err="failed to get container status \"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\": rpc error: code = NotFound desc = could not find container \"2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3\": container with ID starting with 2b6d1ee4d4c9ad59b72a22b481fc49c24645f4c645d2f245c6d08aa15b1afdf3 not found: ID does not exist" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.929441 4958 scope.go:117] "RemoveContainer" containerID="938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.930027 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007"} err="failed to get container status \"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\": rpc error: code = NotFound desc = could not find container \"938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007\": container with ID starting with 938f849e5d828383bc90abda94cef8b429dad5a8aeaa0762a2cf77db98aec007 not found: ID does not exist" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.930051 4958 scope.go:117] "RemoveContainer" containerID="9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.930368 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3"} err="failed to get container status \"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\": rpc error: code = NotFound desc = could not find container \"9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3\": container with ID starting with 9de5bd802a24129c52b0a76ffa739ae92a6770b1c155d0d2c9435d787dd059e3 not found: ID does not 
exist" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.930451 4958 scope.go:117] "RemoveContainer" containerID="573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf" Oct 08 06:45:40 crc kubenswrapper[4958]: I1008 06:45:40.930829 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf"} err="failed to get container status \"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf\": rpc error: code = NotFound desc = could not find container \"573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf\": container with ID starting with 573b3d52994e35a594071656430ba169635cc163d933002b3c922f4a8f8da0bf not found: ID does not exist" Oct 08 06:45:41 crc kubenswrapper[4958]: I1008 06:45:41.597629 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="272f74a5-c381-4909-b8a9-da60cbd17ddf" path="/var/lib/kubelet/pods/272f74a5-c381-4909-b8a9-da60cbd17ddf/volumes" Oct 08 06:45:41 crc kubenswrapper[4958]: I1008 06:45:41.631439 4958 generic.go:334] "Generic (PLEG): container finished" podID="058d0025-36f0-4d59-948f-f1fcdcc20ec1" containerID="2855fb03b2c67447d6a5360b0e8a4a5831af20dd42efa9b0420795402dba23b2" exitCode=0 Oct 08 06:45:41 crc kubenswrapper[4958]: I1008 06:45:41.631574 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68469" event={"ID":"058d0025-36f0-4d59-948f-f1fcdcc20ec1","Type":"ContainerDied","Data":"2855fb03b2c67447d6a5360b0e8a4a5831af20dd42efa9b0420795402dba23b2"} Oct 08 06:45:41 crc kubenswrapper[4958]: I1008 06:45:41.631669 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68469" event={"ID":"058d0025-36f0-4d59-948f-f1fcdcc20ec1","Type":"ContainerStarted","Data":"c3f5b24b270b50afbf1f1a4079d732377bfb557ca861c233be72d44e56dff559"} Oct 08 06:45:42 crc kubenswrapper[4958]: I1008 06:45:42.656335 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68469" event={"ID":"058d0025-36f0-4d59-948f-f1fcdcc20ec1","Type":"ContainerStarted","Data":"c6413d67743f0225bd3965a103b36fdc064085a6f1e842995ddfd1d22c2e2d7c"} Oct 08 06:45:42 crc kubenswrapper[4958]: I1008 06:45:42.656891 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68469" event={"ID":"058d0025-36f0-4d59-948f-f1fcdcc20ec1","Type":"ContainerStarted","Data":"aa13e11da55fb040fcb10eaaf6a41e8ed308cb6fba569341a9e5dfc71c9f9222"} Oct 08 06:45:42 crc kubenswrapper[4958]: I1008 06:45:42.656905 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68469" event={"ID":"058d0025-36f0-4d59-948f-f1fcdcc20ec1","Type":"ContainerStarted","Data":"029f4c6805580b3a6c06f2a83b93499bd8bd8b6133f27e4b99da29802b1f0e7f"} Oct 08 06:45:42 crc kubenswrapper[4958]: I1008 06:45:42.656914 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68469" event={"ID":"058d0025-36f0-4d59-948f-f1fcdcc20ec1","Type":"ContainerStarted","Data":"b254b24c7755c476b59ce3ea8026f36bf407b7a22d428226c3f308ea752bfdb6"} Oct 08 06:45:42 crc kubenswrapper[4958]: I1008 06:45:42.656922 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68469" event={"ID":"058d0025-36f0-4d59-948f-f1fcdcc20ec1","Type":"ContainerStarted","Data":"0685c6a5a4c83600f98bed6ee3aa48424ff480620a402b818de8cdbf14d0391c"} Oct 08 06:45:42 crc kubenswrapper[4958]: I1008 06:45:42.656930 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68469" event={"ID":"058d0025-36f0-4d59-948f-f1fcdcc20ec1","Type":"ContainerStarted","Data":"fa43ca6748411cad2d48d190e44b5315a1cb53f2bc502be6e32a5eb250c4c39b"} Oct 08 06:45:45 crc kubenswrapper[4958]: I1008 06:45:45.680663 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-68469" event={"ID":"058d0025-36f0-4d59-948f-f1fcdcc20ec1","Type":"ContainerStarted","Data":"f9898020f35cd38fcd5d3709e483a53c7a81db8c6323a8a162212dc8e61ec42b"} Oct 08 06:45:47 crc kubenswrapper[4958]: I1008 06:45:47.695360 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68469" event={"ID":"058d0025-36f0-4d59-948f-f1fcdcc20ec1","Type":"ContainerStarted","Data":"00d9734694a3c72193fc224b3035e6a962ecd3075c5c09bb95cd741737614dee"} Oct 08 06:45:47 crc kubenswrapper[4958]: I1008 06:45:47.695891 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:47 crc kubenswrapper[4958]: I1008 06:45:47.696246 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:47 crc kubenswrapper[4958]: I1008 06:45:47.696444 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:47 crc kubenswrapper[4958]: I1008 06:45:47.739553 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-68469" podStartSLOduration=7.739530643 podStartE2EDuration="7.739530643s" podCreationTimestamp="2025-10-08 06:45:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:45:47.733072989 +0000 UTC m=+690.862765620" watchObservedRunningTime="2025-10-08 06:45:47.739530643 +0000 UTC m=+690.869223254" Oct 08 06:45:47 crc kubenswrapper[4958]: I1008 06:45:47.761657 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:47 crc kubenswrapper[4958]: I1008 06:45:47.762871 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.691577 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-hctrv"] Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.693168 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.695745 4958 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-vlvfh" Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.695886 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.697020 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.698530 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.715368 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hctrv"] Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.846243 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c8zx\" (UniqueName: \"kubernetes.io/projected/bd93696c-c727-4b22-8426-cc909f7f64e5-kube-api-access-5c8zx\") pod \"crc-storage-crc-hctrv\" (UID: \"bd93696c-c727-4b22-8426-cc909f7f64e5\") " pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.846317 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/bd93696c-c727-4b22-8426-cc909f7f64e5-node-mnt\") pod \"crc-storage-crc-hctrv\" (UID: \"bd93696c-c727-4b22-8426-cc909f7f64e5\") " 
pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.846335 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/bd93696c-c727-4b22-8426-cc909f7f64e5-crc-storage\") pod \"crc-storage-crc-hctrv\" (UID: \"bd93696c-c727-4b22-8426-cc909f7f64e5\") " pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.947790 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c8zx\" (UniqueName: \"kubernetes.io/projected/bd93696c-c727-4b22-8426-cc909f7f64e5-kube-api-access-5c8zx\") pod \"crc-storage-crc-hctrv\" (UID: \"bd93696c-c727-4b22-8426-cc909f7f64e5\") " pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.947993 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/bd93696c-c727-4b22-8426-cc909f7f64e5-node-mnt\") pod \"crc-storage-crc-hctrv\" (UID: \"bd93696c-c727-4b22-8426-cc909f7f64e5\") " pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.948058 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/bd93696c-c727-4b22-8426-cc909f7f64e5-crc-storage\") pod \"crc-storage-crc-hctrv\" (UID: \"bd93696c-c727-4b22-8426-cc909f7f64e5\") " pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.948343 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/bd93696c-c727-4b22-8426-cc909f7f64e5-node-mnt\") pod \"crc-storage-crc-hctrv\" (UID: \"bd93696c-c727-4b22-8426-cc909f7f64e5\") " pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.949328 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/bd93696c-c727-4b22-8426-cc909f7f64e5-crc-storage\") pod \"crc-storage-crc-hctrv\" (UID: \"bd93696c-c727-4b22-8426-cc909f7f64e5\") " pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:48 crc kubenswrapper[4958]: I1008 06:45:48.982941 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c8zx\" (UniqueName: \"kubernetes.io/projected/bd93696c-c727-4b22-8426-cc909f7f64e5-kube-api-access-5c8zx\") pod \"crc-storage-crc-hctrv\" (UID: \"bd93696c-c727-4b22-8426-cc909f7f64e5\") " pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:49 crc kubenswrapper[4958]: I1008 06:45:49.014743 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:49 crc kubenswrapper[4958]: E1008 06:45:49.056365 4958 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hctrv_crc-storage_bd93696c-c727-4b22-8426-cc909f7f64e5_0(b4bda84ee679b2c5e89cdaabbf893a9756b88ace6a29d6f97815e68799e86e7c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 08 06:45:49 crc kubenswrapper[4958]: E1008 06:45:49.056475 4958 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hctrv_crc-storage_bd93696c-c727-4b22-8426-cc909f7f64e5_0(b4bda84ee679b2c5e89cdaabbf893a9756b88ace6a29d6f97815e68799e86e7c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:49 crc kubenswrapper[4958]: E1008 06:45:49.056529 4958 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hctrv_crc-storage_bd93696c-c727-4b22-8426-cc909f7f64e5_0(b4bda84ee679b2c5e89cdaabbf893a9756b88ace6a29d6f97815e68799e86e7c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:49 crc kubenswrapper[4958]: E1008 06:45:49.056613 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-hctrv_crc-storage(bd93696c-c727-4b22-8426-cc909f7f64e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-hctrv_crc-storage(bd93696c-c727-4b22-8426-cc909f7f64e5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hctrv_crc-storage_bd93696c-c727-4b22-8426-cc909f7f64e5_0(b4bda84ee679b2c5e89cdaabbf893a9756b88ace6a29d6f97815e68799e86e7c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-hctrv" podUID="bd93696c-c727-4b22-8426-cc909f7f64e5" Oct 08 06:45:49 crc kubenswrapper[4958]: I1008 06:45:49.709501 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:49 crc kubenswrapper[4958]: I1008 06:45:49.710361 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:49 crc kubenswrapper[4958]: E1008 06:45:49.761157 4958 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hctrv_crc-storage_bd93696c-c727-4b22-8426-cc909f7f64e5_0(ed01f364cd9dc4ee13f8efc0bca19544473d80766a0198a43554e6b82e07cb42): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 08 06:45:49 crc kubenswrapper[4958]: E1008 06:45:49.761234 4958 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hctrv_crc-storage_bd93696c-c727-4b22-8426-cc909f7f64e5_0(ed01f364cd9dc4ee13f8efc0bca19544473d80766a0198a43554e6b82e07cb42): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:49 crc kubenswrapper[4958]: E1008 06:45:49.761274 4958 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hctrv_crc-storage_bd93696c-c727-4b22-8426-cc909f7f64e5_0(ed01f364cd9dc4ee13f8efc0bca19544473d80766a0198a43554e6b82e07cb42): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:45:49 crc kubenswrapper[4958]: E1008 06:45:49.761340 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-hctrv_crc-storage(bd93696c-c727-4b22-8426-cc909f7f64e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-hctrv_crc-storage(bd93696c-c727-4b22-8426-cc909f7f64e5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hctrv_crc-storage_bd93696c-c727-4b22-8426-cc909f7f64e5_0(ed01f364cd9dc4ee13f8efc0bca19544473d80766a0198a43554e6b82e07cb42): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-hctrv" podUID="bd93696c-c727-4b22-8426-cc909f7f64e5" Oct 08 06:45:51 crc kubenswrapper[4958]: I1008 06:45:51.576788 4958 scope.go:117] "RemoveContainer" containerID="8fab763b267bd2df242ddfebc49b15e8f24cc97f493f46bc5c8d9414631ddff5" Oct 08 06:45:51 crc kubenswrapper[4958]: E1008 06:45:51.577450 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-hfzs9_openshift-multus(0718b244-4835-4551-9013-6b3741845bb4)\"" pod="openshift-multus/multus-hfzs9" podUID="0718b244-4835-4551-9013-6b3741845bb4" Oct 08 06:46:00 crc kubenswrapper[4958]: I1008 06:46:00.576295 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:46:00 crc kubenswrapper[4958]: I1008 06:46:00.577396 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:46:00 crc kubenswrapper[4958]: E1008 06:46:00.617766 4958 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hctrv_crc-storage_bd93696c-c727-4b22-8426-cc909f7f64e5_0(fcec67734e6176bcdef7eccc276122db0b0b6c5e02d2542848fe5abcfb76d0b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 08 06:46:00 crc kubenswrapper[4958]: E1008 06:46:00.617840 4958 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hctrv_crc-storage_bd93696c-c727-4b22-8426-cc909f7f64e5_0(fcec67734e6176bcdef7eccc276122db0b0b6c5e02d2542848fe5abcfb76d0b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:46:00 crc kubenswrapper[4958]: E1008 06:46:00.617864 4958 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hctrv_crc-storage_bd93696c-c727-4b22-8426-cc909f7f64e5_0(fcec67734e6176bcdef7eccc276122db0b0b6c5e02d2542848fe5abcfb76d0b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:46:00 crc kubenswrapper[4958]: E1008 06:46:00.617910 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-hctrv_crc-storage(bd93696c-c727-4b22-8426-cc909f7f64e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-hctrv_crc-storage(bd93696c-c727-4b22-8426-cc909f7f64e5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hctrv_crc-storage_bd93696c-c727-4b22-8426-cc909f7f64e5_0(fcec67734e6176bcdef7eccc276122db0b0b6c5e02d2542848fe5abcfb76d0b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-hctrv" podUID="bd93696c-c727-4b22-8426-cc909f7f64e5" Oct 08 06:46:06 crc kubenswrapper[4958]: I1008 06:46:06.577122 4958 scope.go:117] "RemoveContainer" containerID="8fab763b267bd2df242ddfebc49b15e8f24cc97f493f46bc5c8d9414631ddff5" Oct 08 06:46:06 crc kubenswrapper[4958]: I1008 06:46:06.821620 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hfzs9_0718b244-4835-4551-9013-6b3741845bb4/kube-multus/2.log" Oct 08 06:46:06 crc kubenswrapper[4958]: I1008 06:46:06.821691 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hfzs9" event={"ID":"0718b244-4835-4551-9013-6b3741845bb4","Type":"ContainerStarted","Data":"5221fcdc0621783a2a739c4aab871c369d46c024ae267fa3372c544207368f6f"} Oct 08 06:46:10 crc kubenswrapper[4958]: I1008 06:46:10.617520 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-68469" Oct 08 06:46:15 crc kubenswrapper[4958]: I1008 06:46:15.576066 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:46:15 crc kubenswrapper[4958]: I1008 06:46:15.577137 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:46:15 crc kubenswrapper[4958]: I1008 06:46:15.906047 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hctrv"] Oct 08 06:46:15 crc kubenswrapper[4958]: I1008 06:46:15.923101 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 06:46:16 crc kubenswrapper[4958]: I1008 06:46:16.882056 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hctrv" event={"ID":"bd93696c-c727-4b22-8426-cc909f7f64e5","Type":"ContainerStarted","Data":"246339fa3cc780ef2b515d4bd7ebbc90a5ad6a5ab0f38d754e71fa924910af0f"} Oct 08 06:46:17 crc kubenswrapper[4958]: I1008 06:46:17.891602 4958 generic.go:334] "Generic (PLEG): container finished" podID="bd93696c-c727-4b22-8426-cc909f7f64e5" containerID="c7667f49a6afbad24024a04e97f9b5e09300f61798ae1191ec05d430b7a84e5c" exitCode=0 Oct 08 06:46:17 crc kubenswrapper[4958]: I1008 06:46:17.891679 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hctrv" event={"ID":"bd93696c-c727-4b22-8426-cc909f7f64e5","Type":"ContainerDied","Data":"c7667f49a6afbad24024a04e97f9b5e09300f61798ae1191ec05d430b7a84e5c"} Oct 08 06:46:19 crc kubenswrapper[4958]: I1008 06:46:19.235886 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:46:19 crc kubenswrapper[4958]: I1008 06:46:19.336089 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/bd93696c-c727-4b22-8426-cc909f7f64e5-crc-storage\") pod \"bd93696c-c727-4b22-8426-cc909f7f64e5\" (UID: \"bd93696c-c727-4b22-8426-cc909f7f64e5\") " Oct 08 06:46:19 crc kubenswrapper[4958]: I1008 06:46:19.336247 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/bd93696c-c727-4b22-8426-cc909f7f64e5-node-mnt\") pod \"bd93696c-c727-4b22-8426-cc909f7f64e5\" (UID: \"bd93696c-c727-4b22-8426-cc909f7f64e5\") " Oct 08 06:46:19 crc kubenswrapper[4958]: I1008 06:46:19.336307 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c8zx\" (UniqueName: \"kubernetes.io/projected/bd93696c-c727-4b22-8426-cc909f7f64e5-kube-api-access-5c8zx\") pod \"bd93696c-c727-4b22-8426-cc909f7f64e5\" (UID: \"bd93696c-c727-4b22-8426-cc909f7f64e5\") " Oct 08 06:46:19 crc kubenswrapper[4958]: I1008 06:46:19.336400 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd93696c-c727-4b22-8426-cc909f7f64e5-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "bd93696c-c727-4b22-8426-cc909f7f64e5" (UID: "bd93696c-c727-4b22-8426-cc909f7f64e5"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:46:19 crc kubenswrapper[4958]: I1008 06:46:19.336645 4958 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/bd93696c-c727-4b22-8426-cc909f7f64e5-node-mnt\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:19 crc kubenswrapper[4958]: I1008 06:46:19.344273 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd93696c-c727-4b22-8426-cc909f7f64e5-kube-api-access-5c8zx" (OuterVolumeSpecName: "kube-api-access-5c8zx") pod "bd93696c-c727-4b22-8426-cc909f7f64e5" (UID: "bd93696c-c727-4b22-8426-cc909f7f64e5"). InnerVolumeSpecName "kube-api-access-5c8zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:46:19 crc kubenswrapper[4958]: I1008 06:46:19.358659 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd93696c-c727-4b22-8426-cc909f7f64e5-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "bd93696c-c727-4b22-8426-cc909f7f64e5" (UID: "bd93696c-c727-4b22-8426-cc909f7f64e5"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:46:19 crc kubenswrapper[4958]: I1008 06:46:19.437557 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c8zx\" (UniqueName: \"kubernetes.io/projected/bd93696c-c727-4b22-8426-cc909f7f64e5-kube-api-access-5c8zx\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:19 crc kubenswrapper[4958]: I1008 06:46:19.437619 4958 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/bd93696c-c727-4b22-8426-cc909f7f64e5-crc-storage\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:19 crc kubenswrapper[4958]: I1008 06:46:19.907560 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hctrv" event={"ID":"bd93696c-c727-4b22-8426-cc909f7f64e5","Type":"ContainerDied","Data":"246339fa3cc780ef2b515d4bd7ebbc90a5ad6a5ab0f38d754e71fa924910af0f"} Oct 08 06:46:19 crc kubenswrapper[4958]: I1008 06:46:19.908019 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="246339fa3cc780ef2b515d4bd7ebbc90a5ad6a5ab0f38d754e71fa924910af0f" Oct 08 06:46:19 crc kubenswrapper[4958]: I1008 06:46:19.907597 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hctrv" Oct 08 06:46:27 crc kubenswrapper[4958]: I1008 06:46:27.806459 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8"] Oct 08 06:46:27 crc kubenswrapper[4958]: E1008 06:46:27.807080 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd93696c-c727-4b22-8426-cc909f7f64e5" containerName="storage" Oct 08 06:46:27 crc kubenswrapper[4958]: I1008 06:46:27.807101 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd93696c-c727-4b22-8426-cc909f7f64e5" containerName="storage" Oct 08 06:46:27 crc kubenswrapper[4958]: I1008 06:46:27.807265 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd93696c-c727-4b22-8426-cc909f7f64e5" containerName="storage" Oct 08 06:46:27 crc kubenswrapper[4958]: I1008 06:46:27.808450 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" Oct 08 06:46:27 crc kubenswrapper[4958]: I1008 06:46:27.810296 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 08 06:46:27 crc kubenswrapper[4958]: I1008 06:46:27.817769 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8"] Oct 08 06:46:27 crc kubenswrapper[4958]: I1008 06:46:27.956208 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/71d1d831-4efc-496d-8876-419f604cd0c8-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8\" (UID: \"71d1d831-4efc-496d-8876-419f604cd0c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" Oct 08 06:46:27 crc kubenswrapper[4958]: I1008 
06:46:27.956358 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67vhx\" (UniqueName: \"kubernetes.io/projected/71d1d831-4efc-496d-8876-419f604cd0c8-kube-api-access-67vhx\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8\" (UID: \"71d1d831-4efc-496d-8876-419f604cd0c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" Oct 08 06:46:27 crc kubenswrapper[4958]: I1008 06:46:27.956504 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/71d1d831-4efc-496d-8876-419f604cd0c8-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8\" (UID: \"71d1d831-4efc-496d-8876-419f604cd0c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" Oct 08 06:46:28 crc kubenswrapper[4958]: I1008 06:46:28.057737 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67vhx\" (UniqueName: \"kubernetes.io/projected/71d1d831-4efc-496d-8876-419f604cd0c8-kube-api-access-67vhx\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8\" (UID: \"71d1d831-4efc-496d-8876-419f604cd0c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" Oct 08 06:46:28 crc kubenswrapper[4958]: I1008 06:46:28.057896 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/71d1d831-4efc-496d-8876-419f604cd0c8-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8\" (UID: \"71d1d831-4efc-496d-8876-419f604cd0c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" Oct 08 06:46:28 crc kubenswrapper[4958]: I1008 06:46:28.058053 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/71d1d831-4efc-496d-8876-419f604cd0c8-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8\" (UID: \"71d1d831-4efc-496d-8876-419f604cd0c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" Oct 08 06:46:28 crc kubenswrapper[4958]: I1008 06:46:28.059196 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/71d1d831-4efc-496d-8876-419f604cd0c8-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8\" (UID: \"71d1d831-4efc-496d-8876-419f604cd0c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" Oct 08 06:46:28 crc kubenswrapper[4958]: I1008 06:46:28.059699 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/71d1d831-4efc-496d-8876-419f604cd0c8-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8\" (UID: \"71d1d831-4efc-496d-8876-419f604cd0c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" Oct 08 06:46:28 crc kubenswrapper[4958]: I1008 06:46:28.090621 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67vhx\" (UniqueName: \"kubernetes.io/projected/71d1d831-4efc-496d-8876-419f604cd0c8-kube-api-access-67vhx\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8\" (UID: \"71d1d831-4efc-496d-8876-419f604cd0c8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" Oct 08 06:46:28 crc kubenswrapper[4958]: I1008 06:46:28.126685 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" Oct 08 06:46:28 crc kubenswrapper[4958]: I1008 06:46:28.433983 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8"] Oct 08 06:46:28 crc kubenswrapper[4958]: I1008 06:46:28.985743 4958 generic.go:334] "Generic (PLEG): container finished" podID="71d1d831-4efc-496d-8876-419f604cd0c8" containerID="57124e85004d8c4e905bf19c15334284ce8a33c8b508403ea50e9399ded5342d" exitCode=0 Oct 08 06:46:28 crc kubenswrapper[4958]: I1008 06:46:28.985817 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" event={"ID":"71d1d831-4efc-496d-8876-419f604cd0c8","Type":"ContainerDied","Data":"57124e85004d8c4e905bf19c15334284ce8a33c8b508403ea50e9399ded5342d"} Oct 08 06:46:28 crc kubenswrapper[4958]: I1008 06:46:28.986098 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" event={"ID":"71d1d831-4efc-496d-8876-419f604cd0c8","Type":"ContainerStarted","Data":"009bd7bb69fc3b66873b585cbee77602262fb66a81e037ee22d177ad8f543b42"} Oct 08 06:46:30 crc kubenswrapper[4958]: I1008 06:46:30.998805 4958 generic.go:334] "Generic (PLEG): container finished" podID="71d1d831-4efc-496d-8876-419f604cd0c8" containerID="721a9f28f743a7e4d1487747129fa58d880afd23df1ef5ca87b942a500674f59" exitCode=0 Oct 08 06:46:30 crc kubenswrapper[4958]: I1008 06:46:30.998996 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" event={"ID":"71d1d831-4efc-496d-8876-419f604cd0c8","Type":"ContainerDied","Data":"721a9f28f743a7e4d1487747129fa58d880afd23df1ef5ca87b942a500674f59"} Oct 08 06:46:32 crc kubenswrapper[4958]: I1008 06:46:32.005170 4958 
generic.go:334] "Generic (PLEG): container finished" podID="71d1d831-4efc-496d-8876-419f604cd0c8" containerID="3623c92f4bd096aa029e41d8d9f8e3920cb47e4a71e1e7a51fd9e0f97ebfb616" exitCode=0 Oct 08 06:46:32 crc kubenswrapper[4958]: I1008 06:46:32.005219 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" event={"ID":"71d1d831-4efc-496d-8876-419f604cd0c8","Type":"ContainerDied","Data":"3623c92f4bd096aa029e41d8d9f8e3920cb47e4a71e1e7a51fd9e0f97ebfb616"} Oct 08 06:46:33 crc kubenswrapper[4958]: I1008 06:46:33.398589 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" Oct 08 06:46:33 crc kubenswrapper[4958]: I1008 06:46:33.537016 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/71d1d831-4efc-496d-8876-419f604cd0c8-util\") pod \"71d1d831-4efc-496d-8876-419f604cd0c8\" (UID: \"71d1d831-4efc-496d-8876-419f604cd0c8\") " Oct 08 06:46:33 crc kubenswrapper[4958]: I1008 06:46:33.537227 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67vhx\" (UniqueName: \"kubernetes.io/projected/71d1d831-4efc-496d-8876-419f604cd0c8-kube-api-access-67vhx\") pod \"71d1d831-4efc-496d-8876-419f604cd0c8\" (UID: \"71d1d831-4efc-496d-8876-419f604cd0c8\") " Oct 08 06:46:33 crc kubenswrapper[4958]: I1008 06:46:33.537276 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/71d1d831-4efc-496d-8876-419f604cd0c8-bundle\") pod \"71d1d831-4efc-496d-8876-419f604cd0c8\" (UID: \"71d1d831-4efc-496d-8876-419f604cd0c8\") " Oct 08 06:46:33 crc kubenswrapper[4958]: I1008 06:46:33.538383 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/71d1d831-4efc-496d-8876-419f604cd0c8-bundle" (OuterVolumeSpecName: "bundle") pod "71d1d831-4efc-496d-8876-419f604cd0c8" (UID: "71d1d831-4efc-496d-8876-419f604cd0c8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:46:33 crc kubenswrapper[4958]: I1008 06:46:33.546333 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71d1d831-4efc-496d-8876-419f604cd0c8-kube-api-access-67vhx" (OuterVolumeSpecName: "kube-api-access-67vhx") pod "71d1d831-4efc-496d-8876-419f604cd0c8" (UID: "71d1d831-4efc-496d-8876-419f604cd0c8"). InnerVolumeSpecName "kube-api-access-67vhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:46:33 crc kubenswrapper[4958]: I1008 06:46:33.564692 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71d1d831-4efc-496d-8876-419f604cd0c8-util" (OuterVolumeSpecName: "util") pod "71d1d831-4efc-496d-8876-419f604cd0c8" (UID: "71d1d831-4efc-496d-8876-419f604cd0c8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:46:33 crc kubenswrapper[4958]: I1008 06:46:33.639452 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67vhx\" (UniqueName: \"kubernetes.io/projected/71d1d831-4efc-496d-8876-419f604cd0c8-kube-api-access-67vhx\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:33 crc kubenswrapper[4958]: I1008 06:46:33.639730 4958 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/71d1d831-4efc-496d-8876-419f604cd0c8-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:33 crc kubenswrapper[4958]: I1008 06:46:33.639750 4958 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/71d1d831-4efc-496d-8876-419f604cd0c8-util\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:34 crc kubenswrapper[4958]: I1008 06:46:34.020696 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" event={"ID":"71d1d831-4efc-496d-8876-419f604cd0c8","Type":"ContainerDied","Data":"009bd7bb69fc3b66873b585cbee77602262fb66a81e037ee22d177ad8f543b42"} Oct 08 06:46:34 crc kubenswrapper[4958]: I1008 06:46:34.020753 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="009bd7bb69fc3b66873b585cbee77602262fb66a81e037ee22d177ad8f543b42" Oct 08 06:46:34 crc kubenswrapper[4958]: I1008 06:46:34.020811 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8" Oct 08 06:46:37 crc kubenswrapper[4958]: I1008 06:46:37.264938 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-sdkfc"] Oct 08 06:46:37 crc kubenswrapper[4958]: E1008 06:46:37.265436 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71d1d831-4efc-496d-8876-419f604cd0c8" containerName="pull" Oct 08 06:46:37 crc kubenswrapper[4958]: I1008 06:46:37.265466 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="71d1d831-4efc-496d-8876-419f604cd0c8" containerName="pull" Oct 08 06:46:37 crc kubenswrapper[4958]: E1008 06:46:37.265482 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71d1d831-4efc-496d-8876-419f604cd0c8" containerName="util" Oct 08 06:46:37 crc kubenswrapper[4958]: I1008 06:46:37.265487 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="71d1d831-4efc-496d-8876-419f604cd0c8" containerName="util" Oct 08 06:46:37 crc kubenswrapper[4958]: E1008 06:46:37.265497 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71d1d831-4efc-496d-8876-419f604cd0c8" containerName="extract" Oct 08 06:46:37 crc kubenswrapper[4958]: I1008 06:46:37.265503 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="71d1d831-4efc-496d-8876-419f604cd0c8" containerName="extract" Oct 08 06:46:37 crc kubenswrapper[4958]: I1008 06:46:37.265634 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="71d1d831-4efc-496d-8876-419f604cd0c8" containerName="extract" Oct 08 06:46:37 crc kubenswrapper[4958]: I1008 06:46:37.266073 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-sdkfc" Oct 08 06:46:37 crc kubenswrapper[4958]: I1008 06:46:37.267791 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 08 06:46:37 crc kubenswrapper[4958]: I1008 06:46:37.267813 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-bf4kw" Oct 08 06:46:37 crc kubenswrapper[4958]: I1008 06:46:37.267910 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 08 06:46:37 crc kubenswrapper[4958]: I1008 06:46:37.286232 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-sdkfc"] Oct 08 06:46:37 crc kubenswrapper[4958]: I1008 06:46:37.389472 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-457qb\" (UniqueName: \"kubernetes.io/projected/60067db5-d91c-42cb-b25e-13cc170a1a14-kube-api-access-457qb\") pod \"nmstate-operator-858ddd8f98-sdkfc\" (UID: \"60067db5-d91c-42cb-b25e-13cc170a1a14\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-sdkfc" Oct 08 06:46:37 crc kubenswrapper[4958]: I1008 06:46:37.490688 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-457qb\" (UniqueName: \"kubernetes.io/projected/60067db5-d91c-42cb-b25e-13cc170a1a14-kube-api-access-457qb\") pod \"nmstate-operator-858ddd8f98-sdkfc\" (UID: \"60067db5-d91c-42cb-b25e-13cc170a1a14\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-sdkfc" Oct 08 06:46:37 crc kubenswrapper[4958]: I1008 06:46:37.520380 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-457qb\" (UniqueName: \"kubernetes.io/projected/60067db5-d91c-42cb-b25e-13cc170a1a14-kube-api-access-457qb\") pod \"nmstate-operator-858ddd8f98-sdkfc\" (UID: 
\"60067db5-d91c-42cb-b25e-13cc170a1a14\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-sdkfc" Oct 08 06:46:37 crc kubenswrapper[4958]: I1008 06:46:37.595906 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-sdkfc" Oct 08 06:46:37 crc kubenswrapper[4958]: I1008 06:46:37.883636 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-sdkfc"] Oct 08 06:46:38 crc kubenswrapper[4958]: I1008 06:46:38.048605 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-sdkfc" event={"ID":"60067db5-d91c-42cb-b25e-13cc170a1a14","Type":"ContainerStarted","Data":"0555dbf395d108fd5516b83abae24465d1931e6b3eff73fe833cb3e27abe621c"} Oct 08 06:46:41 crc kubenswrapper[4958]: I1008 06:46:41.069340 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-sdkfc" event={"ID":"60067db5-d91c-42cb-b25e-13cc170a1a14","Type":"ContainerStarted","Data":"21f151f6d699fce3d380d5ffa6e0e0988088ba6ea89fc156f625d6338d4abd9d"} Oct 08 06:46:41 crc kubenswrapper[4958]: I1008 06:46:41.089485 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-sdkfc" podStartSLOduration=1.807358195 podStartE2EDuration="4.089466292s" podCreationTimestamp="2025-10-08 06:46:37 +0000 UTC" firstStartedPulling="2025-10-08 06:46:37.892912983 +0000 UTC m=+741.022605584" lastFinishedPulling="2025-10-08 06:46:40.17502105 +0000 UTC m=+743.304713681" observedRunningTime="2025-10-08 06:46:41.088320012 +0000 UTC m=+744.218012653" watchObservedRunningTime="2025-10-08 06:46:41.089466292 +0000 UTC m=+744.219158903" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.085533 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xkpn4"] Oct 08 06:46:42 crc kubenswrapper[4958]: 
I1008 06:46:42.086207 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" podUID="b964df0d-9992-4be4-900f-68a7b74bceef" containerName="controller-manager" containerID="cri-o://6e068da86377f4b958fdb7fcf05a17fcded6690783d2523bd1d0ed341fee2ad4" gracePeriod=30 Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.193320 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t"] Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.193558 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" podUID="eae96488-906d-423e-ad79-b4448ef8ad58" containerName="route-controller-manager" containerID="cri-o://662146ad9288e7371a2f192d19cba45ac7d7f01feeb2cdbd28cc45f1ab6ae283" gracePeriod=30 Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.463659 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.523673 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.570218 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-client-ca\") pod \"b964df0d-9992-4be4-900f-68a7b74bceef\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.570267 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjwvw\" (UniqueName: \"kubernetes.io/projected/eae96488-906d-423e-ad79-b4448ef8ad58-kube-api-access-sjwvw\") pod \"eae96488-906d-423e-ad79-b4448ef8ad58\" (UID: \"eae96488-906d-423e-ad79-b4448ef8ad58\") " Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.570313 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-config\") pod \"b964df0d-9992-4be4-900f-68a7b74bceef\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.570337 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvv45\" (UniqueName: \"kubernetes.io/projected/b964df0d-9992-4be4-900f-68a7b74bceef-kube-api-access-hvv45\") pod \"b964df0d-9992-4be4-900f-68a7b74bceef\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.570377 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eae96488-906d-423e-ad79-b4448ef8ad58-serving-cert\") pod \"eae96488-906d-423e-ad79-b4448ef8ad58\" (UID: \"eae96488-906d-423e-ad79-b4448ef8ad58\") " Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.570398 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae96488-906d-423e-ad79-b4448ef8ad58-config\") pod \"eae96488-906d-423e-ad79-b4448ef8ad58\" (UID: \"eae96488-906d-423e-ad79-b4448ef8ad58\") " Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.570458 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b964df0d-9992-4be4-900f-68a7b74bceef-serving-cert\") pod \"b964df0d-9992-4be4-900f-68a7b74bceef\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.570496 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eae96488-906d-423e-ad79-b4448ef8ad58-client-ca\") pod \"eae96488-906d-423e-ad79-b4448ef8ad58\" (UID: \"eae96488-906d-423e-ad79-b4448ef8ad58\") " Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.570526 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-proxy-ca-bundles\") pod \"b964df0d-9992-4be4-900f-68a7b74bceef\" (UID: \"b964df0d-9992-4be4-900f-68a7b74bceef\") " Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.571287 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-config" (OuterVolumeSpecName: "config") pod "b964df0d-9992-4be4-900f-68a7b74bceef" (UID: "b964df0d-9992-4be4-900f-68a7b74bceef"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.571312 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b964df0d-9992-4be4-900f-68a7b74bceef" (UID: "b964df0d-9992-4be4-900f-68a7b74bceef"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.571732 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eae96488-906d-423e-ad79-b4448ef8ad58-client-ca" (OuterVolumeSpecName: "client-ca") pod "eae96488-906d-423e-ad79-b4448ef8ad58" (UID: "eae96488-906d-423e-ad79-b4448ef8ad58"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.572007 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-client-ca" (OuterVolumeSpecName: "client-ca") pod "b964df0d-9992-4be4-900f-68a7b74bceef" (UID: "b964df0d-9992-4be4-900f-68a7b74bceef"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.572244 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eae96488-906d-423e-ad79-b4448ef8ad58-config" (OuterVolumeSpecName: "config") pod "eae96488-906d-423e-ad79-b4448ef8ad58" (UID: "eae96488-906d-423e-ad79-b4448ef8ad58"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.577485 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b964df0d-9992-4be4-900f-68a7b74bceef-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b964df0d-9992-4be4-900f-68a7b74bceef" (UID: "b964df0d-9992-4be4-900f-68a7b74bceef"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.577934 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eae96488-906d-423e-ad79-b4448ef8ad58-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eae96488-906d-423e-ad79-b4448ef8ad58" (UID: "eae96488-906d-423e-ad79-b4448ef8ad58"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.578768 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eae96488-906d-423e-ad79-b4448ef8ad58-kube-api-access-sjwvw" (OuterVolumeSpecName: "kube-api-access-sjwvw") pod "eae96488-906d-423e-ad79-b4448ef8ad58" (UID: "eae96488-906d-423e-ad79-b4448ef8ad58"). InnerVolumeSpecName "kube-api-access-sjwvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.586199 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b964df0d-9992-4be4-900f-68a7b74bceef-kube-api-access-hvv45" (OuterVolumeSpecName: "kube-api-access-hvv45") pod "b964df0d-9992-4be4-900f-68a7b74bceef" (UID: "b964df0d-9992-4be4-900f-68a7b74bceef"). InnerVolumeSpecName "kube-api-access-hvv45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.672527 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eae96488-906d-423e-ad79-b4448ef8ad58-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.672584 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae96488-906d-423e-ad79-b4448ef8ad58-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.672605 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b964df0d-9992-4be4-900f-68a7b74bceef-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.672624 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eae96488-906d-423e-ad79-b4448ef8ad58-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.672641 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.672659 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.672678 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjwvw\" (UniqueName: \"kubernetes.io/projected/eae96488-906d-423e-ad79-b4448ef8ad58-kube-api-access-sjwvw\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.672697 4958 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b964df0d-9992-4be4-900f-68a7b74bceef-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:42 crc kubenswrapper[4958]: I1008 06:46:42.672714 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvv45\" (UniqueName: \"kubernetes.io/projected/b964df0d-9992-4be4-900f-68a7b74bceef-kube-api-access-hvv45\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.085913 4958 generic.go:334] "Generic (PLEG): container finished" podID="b964df0d-9992-4be4-900f-68a7b74bceef" containerID="6e068da86377f4b958fdb7fcf05a17fcded6690783d2523bd1d0ed341fee2ad4" exitCode=0 Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.086063 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.086090 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" event={"ID":"b964df0d-9992-4be4-900f-68a7b74bceef","Type":"ContainerDied","Data":"6e068da86377f4b958fdb7fcf05a17fcded6690783d2523bd1d0ed341fee2ad4"} Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.086181 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xkpn4" event={"ID":"b964df0d-9992-4be4-900f-68a7b74bceef","Type":"ContainerDied","Data":"7137291a9d436ba7bfcb3f49a1a066865a8fa4480ad077201b08a5ae9496ff9d"} Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.086238 4958 scope.go:117] "RemoveContainer" containerID="6e068da86377f4b958fdb7fcf05a17fcded6690783d2523bd1d0ed341fee2ad4" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.088666 4958 generic.go:334] "Generic (PLEG): container finished" podID="eae96488-906d-423e-ad79-b4448ef8ad58" 
containerID="662146ad9288e7371a2f192d19cba45ac7d7f01feeb2cdbd28cc45f1ab6ae283" exitCode=0 Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.088712 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" event={"ID":"eae96488-906d-423e-ad79-b4448ef8ad58","Type":"ContainerDied","Data":"662146ad9288e7371a2f192d19cba45ac7d7f01feeb2cdbd28cc45f1ab6ae283"} Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.088753 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" event={"ID":"eae96488-906d-423e-ad79-b4448ef8ad58","Type":"ContainerDied","Data":"6164f9e880ecc163e7b010d71c2c9610992830463584f88730ec333ba035bea1"} Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.088767 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.112445 4958 scope.go:117] "RemoveContainer" containerID="6e068da86377f4b958fdb7fcf05a17fcded6690783d2523bd1d0ed341fee2ad4" Oct 08 06:46:43 crc kubenswrapper[4958]: E1008 06:46:43.113058 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e068da86377f4b958fdb7fcf05a17fcded6690783d2523bd1d0ed341fee2ad4\": container with ID starting with 6e068da86377f4b958fdb7fcf05a17fcded6690783d2523bd1d0ed341fee2ad4 not found: ID does not exist" containerID="6e068da86377f4b958fdb7fcf05a17fcded6690783d2523bd1d0ed341fee2ad4" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.113126 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e068da86377f4b958fdb7fcf05a17fcded6690783d2523bd1d0ed341fee2ad4"} err="failed to get container status \"6e068da86377f4b958fdb7fcf05a17fcded6690783d2523bd1d0ed341fee2ad4\": 
rpc error: code = NotFound desc = could not find container \"6e068da86377f4b958fdb7fcf05a17fcded6690783d2523bd1d0ed341fee2ad4\": container with ID starting with 6e068da86377f4b958fdb7fcf05a17fcded6690783d2523bd1d0ed341fee2ad4 not found: ID does not exist" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.113167 4958 scope.go:117] "RemoveContainer" containerID="662146ad9288e7371a2f192d19cba45ac7d7f01feeb2cdbd28cc45f1ab6ae283" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.150724 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xkpn4"] Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.150819 4958 scope.go:117] "RemoveContainer" containerID="662146ad9288e7371a2f192d19cba45ac7d7f01feeb2cdbd28cc45f1ab6ae283" Oct 08 06:46:43 crc kubenswrapper[4958]: E1008 06:46:43.151607 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662146ad9288e7371a2f192d19cba45ac7d7f01feeb2cdbd28cc45f1ab6ae283\": container with ID starting with 662146ad9288e7371a2f192d19cba45ac7d7f01feeb2cdbd28cc45f1ab6ae283 not found: ID does not exist" containerID="662146ad9288e7371a2f192d19cba45ac7d7f01feeb2cdbd28cc45f1ab6ae283" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.151655 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662146ad9288e7371a2f192d19cba45ac7d7f01feeb2cdbd28cc45f1ab6ae283"} err="failed to get container status \"662146ad9288e7371a2f192d19cba45ac7d7f01feeb2cdbd28cc45f1ab6ae283\": rpc error: code = NotFound desc = could not find container \"662146ad9288e7371a2f192d19cba45ac7d7f01feeb2cdbd28cc45f1ab6ae283\": container with ID starting with 662146ad9288e7371a2f192d19cba45ac7d7f01feeb2cdbd28cc45f1ab6ae283 not found: ID does not exist" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.162409 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-xkpn4"] Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.166411 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t"] Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.170293 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7sb7t"] Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.307322 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-684795f45b-qrrq7"] Oct 08 06:46:43 crc kubenswrapper[4958]: E1008 06:46:43.307690 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae96488-906d-423e-ad79-b4448ef8ad58" containerName="route-controller-manager" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.307718 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae96488-906d-423e-ad79-b4448ef8ad58" containerName="route-controller-manager" Oct 08 06:46:43 crc kubenswrapper[4958]: E1008 06:46:43.307743 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b964df0d-9992-4be4-900f-68a7b74bceef" containerName="controller-manager" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.307756 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b964df0d-9992-4be4-900f-68a7b74bceef" containerName="controller-manager" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.307926 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b964df0d-9992-4be4-900f-68a7b74bceef" containerName="controller-manager" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.308034 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae96488-906d-423e-ad79-b4448ef8ad58" containerName="route-controller-manager" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.308784 4958 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.312330 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.312899 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.314706 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.314727 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.315267 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.317837 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m"] Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.319375 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.333788 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.340069 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.340615 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.340648 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.341798 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.342916 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.347483 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.365183 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m"] Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.370634 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-684795f45b-qrrq7"] Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.372846 4958 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.381056 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftw9f\" (UniqueName: \"kubernetes.io/projected/932c6cbf-77fe-41ba-bbea-b508205eb0ae-kube-api-access-ftw9f\") pod \"controller-manager-684795f45b-qrrq7\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.381107 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-proxy-ca-bundles\") pod \"controller-manager-684795f45b-qrrq7\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.381140 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-serving-cert\") pod \"route-controller-manager-787ff674f6-z799m\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.381171 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-config\") pod \"controller-manager-684795f45b-qrrq7\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.381232 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-client-ca\") pod \"controller-manager-684795f45b-qrrq7\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.381257 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-client-ca\") pod \"route-controller-manager-787ff674f6-z799m\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.381284 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gglbc\" (UniqueName: \"kubernetes.io/projected/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-kube-api-access-gglbc\") pod \"route-controller-manager-787ff674f6-z799m\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.381320 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/932c6cbf-77fe-41ba-bbea-b508205eb0ae-serving-cert\") pod \"controller-manager-684795f45b-qrrq7\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.381423 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-config\") pod \"route-controller-manager-787ff674f6-z799m\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " 
pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.482927 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-client-ca\") pod \"controller-manager-684795f45b-qrrq7\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.484280 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-client-ca\") pod \"route-controller-manager-787ff674f6-z799m\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.484354 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gglbc\" (UniqueName: \"kubernetes.io/projected/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-kube-api-access-gglbc\") pod \"route-controller-manager-787ff674f6-z799m\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.484448 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/932c6cbf-77fe-41ba-bbea-b508205eb0ae-serving-cert\") pod \"controller-manager-684795f45b-qrrq7\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.484495 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-config\") pod \"route-controller-manager-787ff674f6-z799m\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.484538 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-client-ca\") pod \"controller-manager-684795f45b-qrrq7\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.484705 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftw9f\" (UniqueName: \"kubernetes.io/projected/932c6cbf-77fe-41ba-bbea-b508205eb0ae-kube-api-access-ftw9f\") pod \"controller-manager-684795f45b-qrrq7\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.484772 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-proxy-ca-bundles\") pod \"controller-manager-684795f45b-qrrq7\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.484819 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-serving-cert\") pod \"route-controller-manager-787ff674f6-z799m\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:43 crc kubenswrapper[4958]: 
I1008 06:46:43.484881 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-config\") pod \"controller-manager-684795f45b-qrrq7\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.485884 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-proxy-ca-bundles\") pod \"controller-manager-684795f45b-qrrq7\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.486499 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-config\") pod \"route-controller-manager-787ff674f6-z799m\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.486547 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-client-ca\") pod \"route-controller-manager-787ff674f6-z799m\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.487533 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-config\") pod \"controller-manager-684795f45b-qrrq7\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " 
pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.491691 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-serving-cert\") pod \"route-controller-manager-787ff674f6-z799m\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.494157 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/932c6cbf-77fe-41ba-bbea-b508205eb0ae-serving-cert\") pod \"controller-manager-684795f45b-qrrq7\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.514469 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gglbc\" (UniqueName: \"kubernetes.io/projected/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-kube-api-access-gglbc\") pod \"route-controller-manager-787ff674f6-z799m\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.517921 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftw9f\" (UniqueName: \"kubernetes.io/projected/932c6cbf-77fe-41ba-bbea-b508205eb0ae-kube-api-access-ftw9f\") pod \"controller-manager-684795f45b-qrrq7\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.587160 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b964df0d-9992-4be4-900f-68a7b74bceef" 
path="/var/lib/kubelet/pods/b964df0d-9992-4be4-900f-68a7b74bceef/volumes" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.587842 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eae96488-906d-423e-ad79-b4448ef8ad58" path="/var/lib/kubelet/pods/eae96488-906d-423e-ad79-b4448ef8ad58/volumes" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.654648 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.663071 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-684795f45b-qrrq7"] Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.678733 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m"] Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.680400 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.894529 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m"] Oct 08 06:46:43 crc kubenswrapper[4958]: I1008 06:46:43.941714 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-684795f45b-qrrq7"] Oct 08 06:46:43 crc kubenswrapper[4958]: W1008 06:46:43.951032 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod932c6cbf_77fe_41ba_bbea_b508205eb0ae.slice/crio-f86637938b26d2c4157a99142e5f03c404c5a43d678309de7232eac527e562ef WatchSource:0}: Error finding container f86637938b26d2c4157a99142e5f03c404c5a43d678309de7232eac527e562ef: Status 404 returned error can't find the container with id f86637938b26d2c4157a99142e5f03c404c5a43d678309de7232eac527e562ef Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.098472 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" event={"ID":"932c6cbf-77fe-41ba-bbea-b508205eb0ae","Type":"ContainerStarted","Data":"f9f65cbb93e0773bd760234a4966b43ac3122ab4fdb928b25a08771da41a7379"} Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.098531 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" podUID="932c6cbf-77fe-41ba-bbea-b508205eb0ae" containerName="controller-manager" containerID="cri-o://f9f65cbb93e0773bd760234a4966b43ac3122ab4fdb928b25a08771da41a7379" gracePeriod=30 Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.098544 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" 
event={"ID":"932c6cbf-77fe-41ba-bbea-b508205eb0ae","Type":"ContainerStarted","Data":"f86637938b26d2c4157a99142e5f03c404c5a43d678309de7232eac527e562ef"} Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.098665 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.100496 4958 patch_prober.go:28] interesting pod/controller-manager-684795f45b-qrrq7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.100541 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" podUID="932c6cbf-77fe-41ba-bbea-b508205eb0ae" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.104200 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" event={"ID":"aebc8732-f83f-4ee5-9b8f-8795e0d5340d","Type":"ContainerStarted","Data":"93a6b8d858477dcfbd7ff98dfc3eec9b78697bc5a06212b3957cc9dc2defa808"} Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.104484 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.104495 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" 
event={"ID":"aebc8732-f83f-4ee5-9b8f-8795e0d5340d","Type":"ContainerStarted","Data":"25fed836167c43f4c20be338e76f963a6ec68aee0dad1dd646962f8b220217d4"} Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.104313 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" podUID="aebc8732-f83f-4ee5-9b8f-8795e0d5340d" containerName="route-controller-manager" containerID="cri-o://93a6b8d858477dcfbd7ff98dfc3eec9b78697bc5a06212b3957cc9dc2defa808" gracePeriod=30 Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.108028 4958 patch_prober.go:28] interesting pod/route-controller-manager-787ff674f6-z799m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused" start-of-body= Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.108053 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" podUID="aebc8732-f83f-4ee5-9b8f-8795e0d5340d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.122869 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" podStartSLOduration=2.122850581 podStartE2EDuration="2.122850581s" podCreationTimestamp="2025-10-08 06:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:46:44.121804554 +0000 UTC m=+747.251497165" watchObservedRunningTime="2025-10-08 06:46:44.122850581 +0000 UTC m=+747.252543192" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 
06:46:44.136275 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" podStartSLOduration=2.136253796 podStartE2EDuration="2.136253796s" podCreationTimestamp="2025-10-08 06:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:46:44.1360221 +0000 UTC m=+747.265714721" watchObservedRunningTime="2025-10-08 06:46:44.136253796 +0000 UTC m=+747.265946427" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.454868 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-787ff674f6-z799m_aebc8732-f83f-4ee5-9b8f-8795e0d5340d/route-controller-manager/0.log" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.455027 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.499749 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-config\") pod \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.499872 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-client-ca\") pod \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.499934 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-serving-cert\") pod 
\"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.500167 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gglbc\" (UniqueName: \"kubernetes.io/projected/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-kube-api-access-gglbc\") pod \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\" (UID: \"aebc8732-f83f-4ee5-9b8f-8795e0d5340d\") " Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.501042 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-config" (OuterVolumeSpecName: "config") pod "aebc8732-f83f-4ee5-9b8f-8795e0d5340d" (UID: "aebc8732-f83f-4ee5-9b8f-8795e0d5340d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.501116 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-client-ca" (OuterVolumeSpecName: "client-ca") pod "aebc8732-f83f-4ee5-9b8f-8795e0d5340d" (UID: "aebc8732-f83f-4ee5-9b8f-8795e0d5340d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.505714 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-kube-api-access-gglbc" (OuterVolumeSpecName: "kube-api-access-gglbc") pod "aebc8732-f83f-4ee5-9b8f-8795e0d5340d" (UID: "aebc8732-f83f-4ee5-9b8f-8795e0d5340d"). InnerVolumeSpecName "kube-api-access-gglbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.506920 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aebc8732-f83f-4ee5-9b8f-8795e0d5340d" (UID: "aebc8732-f83f-4ee5-9b8f-8795e0d5340d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.542571 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-684795f45b-qrrq7_932c6cbf-77fe-41ba-bbea-b508205eb0ae/controller-manager/0.log" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.542667 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.600990 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-client-ca\") pod \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.601059 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftw9f\" (UniqueName: \"kubernetes.io/projected/932c6cbf-77fe-41ba-bbea-b508205eb0ae-kube-api-access-ftw9f\") pod \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.601082 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/932c6cbf-77fe-41ba-bbea-b508205eb0ae-serving-cert\") pod \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\" (UID: 
\"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.601108 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-proxy-ca-bundles\") pod \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.601131 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-config\") pod \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\" (UID: \"932c6cbf-77fe-41ba-bbea-b508205eb0ae\") " Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.601357 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.601368 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.601378 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gglbc\" (UniqueName: \"kubernetes.io/projected/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-kube-api-access-gglbc\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.601386 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aebc8732-f83f-4ee5-9b8f-8795e0d5340d-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.602271 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-config" (OuterVolumeSpecName: "config") pod "932c6cbf-77fe-41ba-bbea-b508205eb0ae" (UID: "932c6cbf-77fe-41ba-bbea-b508205eb0ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.602506 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-client-ca" (OuterVolumeSpecName: "client-ca") pod "932c6cbf-77fe-41ba-bbea-b508205eb0ae" (UID: "932c6cbf-77fe-41ba-bbea-b508205eb0ae"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.604017 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "932c6cbf-77fe-41ba-bbea-b508205eb0ae" (UID: "932c6cbf-77fe-41ba-bbea-b508205eb0ae"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.608450 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/932c6cbf-77fe-41ba-bbea-b508205eb0ae-kube-api-access-ftw9f" (OuterVolumeSpecName: "kube-api-access-ftw9f") pod "932c6cbf-77fe-41ba-bbea-b508205eb0ae" (UID: "932c6cbf-77fe-41ba-bbea-b508205eb0ae"). InnerVolumeSpecName "kube-api-access-ftw9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.608548 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/932c6cbf-77fe-41ba-bbea-b508205eb0ae-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "932c6cbf-77fe-41ba-bbea-b508205eb0ae" (UID: "932c6cbf-77fe-41ba-bbea-b508205eb0ae"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.702421 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-client-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.702514 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftw9f\" (UniqueName: \"kubernetes.io/projected/932c6cbf-77fe-41ba-bbea-b508205eb0ae-kube-api-access-ftw9f\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.702538 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/932c6cbf-77fe-41ba-bbea-b508205eb0ae-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.702557 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:44 crc kubenswrapper[4958]: I1008 06:46:44.702575 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/932c6cbf-77fe-41ba-bbea-b508205eb0ae-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.117498 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-787ff674f6-z799m_aebc8732-f83f-4ee5-9b8f-8795e0d5340d/route-controller-manager/0.log" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.117547 4958 generic.go:334] "Generic (PLEG): container finished" podID="aebc8732-f83f-4ee5-9b8f-8795e0d5340d" containerID="93a6b8d858477dcfbd7ff98dfc3eec9b78697bc5a06212b3957cc9dc2defa808" exitCode=2 Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.117607 4958 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" event={"ID":"aebc8732-f83f-4ee5-9b8f-8795e0d5340d","Type":"ContainerDied","Data":"93a6b8d858477dcfbd7ff98dfc3eec9b78697bc5a06212b3957cc9dc2defa808"} Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.117632 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" event={"ID":"aebc8732-f83f-4ee5-9b8f-8795e0d5340d","Type":"ContainerDied","Data":"25fed836167c43f4c20be338e76f963a6ec68aee0dad1dd646962f8b220217d4"} Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.117649 4958 scope.go:117] "RemoveContainer" containerID="93a6b8d858477dcfbd7ff98dfc3eec9b78697bc5a06212b3957cc9dc2defa808" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.117646 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.122368 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-684795f45b-qrrq7_932c6cbf-77fe-41ba-bbea-b508205eb0ae/controller-manager/0.log" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.122399 4958 generic.go:334] "Generic (PLEG): container finished" podID="932c6cbf-77fe-41ba-bbea-b508205eb0ae" containerID="f9f65cbb93e0773bd760234a4966b43ac3122ab4fdb928b25a08771da41a7379" exitCode=2 Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.122415 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" event={"ID":"932c6cbf-77fe-41ba-bbea-b508205eb0ae","Type":"ContainerDied","Data":"f9f65cbb93e0773bd760234a4966b43ac3122ab4fdb928b25a08771da41a7379"} Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.122431 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" event={"ID":"932c6cbf-77fe-41ba-bbea-b508205eb0ae","Type":"ContainerDied","Data":"f86637938b26d2c4157a99142e5f03c404c5a43d678309de7232eac527e562ef"} Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.122481 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-684795f45b-qrrq7" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.145901 4958 scope.go:117] "RemoveContainer" containerID="93a6b8d858477dcfbd7ff98dfc3eec9b78697bc5a06212b3957cc9dc2defa808" Oct 08 06:46:45 crc kubenswrapper[4958]: E1008 06:46:45.146486 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93a6b8d858477dcfbd7ff98dfc3eec9b78697bc5a06212b3957cc9dc2defa808\": container with ID starting with 93a6b8d858477dcfbd7ff98dfc3eec9b78697bc5a06212b3957cc9dc2defa808 not found: ID does not exist" containerID="93a6b8d858477dcfbd7ff98dfc3eec9b78697bc5a06212b3957cc9dc2defa808" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.146563 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93a6b8d858477dcfbd7ff98dfc3eec9b78697bc5a06212b3957cc9dc2defa808"} err="failed to get container status \"93a6b8d858477dcfbd7ff98dfc3eec9b78697bc5a06212b3957cc9dc2defa808\": rpc error: code = NotFound desc = could not find container \"93a6b8d858477dcfbd7ff98dfc3eec9b78697bc5a06212b3957cc9dc2defa808\": container with ID starting with 93a6b8d858477dcfbd7ff98dfc3eec9b78697bc5a06212b3957cc9dc2defa808 not found: ID does not exist" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.146605 4958 scope.go:117] "RemoveContainer" containerID="f9f65cbb93e0773bd760234a4966b43ac3122ab4fdb928b25a08771da41a7379" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.160876 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-684795f45b-qrrq7"] Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.168252 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-684795f45b-qrrq7"] Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.190426 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m"] Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.190572 4958 scope.go:117] "RemoveContainer" containerID="f9f65cbb93e0773bd760234a4966b43ac3122ab4fdb928b25a08771da41a7379" Oct 08 06:46:45 crc kubenswrapper[4958]: E1008 06:46:45.191744 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9f65cbb93e0773bd760234a4966b43ac3122ab4fdb928b25a08771da41a7379\": container with ID starting with f9f65cbb93e0773bd760234a4966b43ac3122ab4fdb928b25a08771da41a7379 not found: ID does not exist" containerID="f9f65cbb93e0773bd760234a4966b43ac3122ab4fdb928b25a08771da41a7379" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.191822 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9f65cbb93e0773bd760234a4966b43ac3122ab4fdb928b25a08771da41a7379"} err="failed to get container status \"f9f65cbb93e0773bd760234a4966b43ac3122ab4fdb928b25a08771da41a7379\": rpc error: code = NotFound desc = could not find container \"f9f65cbb93e0773bd760234a4966b43ac3122ab4fdb928b25a08771da41a7379\": container with ID starting with f9f65cbb93e0773bd760234a4966b43ac3122ab4fdb928b25a08771da41a7379 not found: ID does not exist" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.193915 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787ff674f6-z799m"] Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.308823 4958 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg"] Oct 08 06:46:45 crc kubenswrapper[4958]: E1008 06:46:45.309240 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aebc8732-f83f-4ee5-9b8f-8795e0d5340d" containerName="route-controller-manager" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.309269 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aebc8732-f83f-4ee5-9b8f-8795e0d5340d" containerName="route-controller-manager" Oct 08 06:46:45 crc kubenswrapper[4958]: E1008 06:46:45.309309 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="932c6cbf-77fe-41ba-bbea-b508205eb0ae" containerName="controller-manager" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.309322 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="932c6cbf-77fe-41ba-bbea-b508205eb0ae" containerName="controller-manager" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.309478 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="932c6cbf-77fe-41ba-bbea-b508205eb0ae" containerName="controller-manager" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.309509 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="aebc8732-f83f-4ee5-9b8f-8795e0d5340d" containerName="route-controller-manager" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.310123 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.314661 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.315075 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.315264 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.315282 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.315679 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.315908 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.315678 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57948b45ff-9t5wc"] Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.317114 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.323896 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.324579 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.325166 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.325373 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.325424 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.325234 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.343079 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg"] Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.345579 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.352464 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57948b45ff-9t5wc"] Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.515879 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/79c8f991-d140-4ba1-a91e-fa71c4539e64-config\") pod \"route-controller-manager-749b47ff6f-zhmrg\" (UID: \"79c8f991-d140-4ba1-a91e-fa71c4539e64\") " pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.516041 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z94ws\" (UniqueName: \"kubernetes.io/projected/b9123582-9e2b-4038-9c2b-0a1d3a370736-kube-api-access-z94ws\") pod \"controller-manager-57948b45ff-9t5wc\" (UID: \"b9123582-9e2b-4038-9c2b-0a1d3a370736\") " pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.516098 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9123582-9e2b-4038-9c2b-0a1d3a370736-client-ca\") pod \"controller-manager-57948b45ff-9t5wc\" (UID: \"b9123582-9e2b-4038-9c2b-0a1d3a370736\") " pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.516153 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9123582-9e2b-4038-9c2b-0a1d3a370736-config\") pod \"controller-manager-57948b45ff-9t5wc\" (UID: \"b9123582-9e2b-4038-9c2b-0a1d3a370736\") " pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.516249 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79c8f991-d140-4ba1-a91e-fa71c4539e64-client-ca\") pod \"route-controller-manager-749b47ff6f-zhmrg\" (UID: \"79c8f991-d140-4ba1-a91e-fa71c4539e64\") " 
pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.516317 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79c8f991-d140-4ba1-a91e-fa71c4539e64-serving-cert\") pod \"route-controller-manager-749b47ff6f-zhmrg\" (UID: \"79c8f991-d140-4ba1-a91e-fa71c4539e64\") " pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.516353 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9123582-9e2b-4038-9c2b-0a1d3a370736-proxy-ca-bundles\") pod \"controller-manager-57948b45ff-9t5wc\" (UID: \"b9123582-9e2b-4038-9c2b-0a1d3a370736\") " pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.516427 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9123582-9e2b-4038-9c2b-0a1d3a370736-serving-cert\") pod \"controller-manager-57948b45ff-9t5wc\" (UID: \"b9123582-9e2b-4038-9c2b-0a1d3a370736\") " pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.516459 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5s78\" (UniqueName: \"kubernetes.io/projected/79c8f991-d140-4ba1-a91e-fa71c4539e64-kube-api-access-b5s78\") pod \"route-controller-manager-749b47ff6f-zhmrg\" (UID: \"79c8f991-d140-4ba1-a91e-fa71c4539e64\") " pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.596931 4958 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="932c6cbf-77fe-41ba-bbea-b508205eb0ae" path="/var/lib/kubelet/pods/932c6cbf-77fe-41ba-bbea-b508205eb0ae/volumes" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.599802 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aebc8732-f83f-4ee5-9b8f-8795e0d5340d" path="/var/lib/kubelet/pods/aebc8732-f83f-4ee5-9b8f-8795e0d5340d/volumes" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.617438 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79c8f991-d140-4ba1-a91e-fa71c4539e64-serving-cert\") pod \"route-controller-manager-749b47ff6f-zhmrg\" (UID: \"79c8f991-d140-4ba1-a91e-fa71c4539e64\") " pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.617503 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9123582-9e2b-4038-9c2b-0a1d3a370736-proxy-ca-bundles\") pod \"controller-manager-57948b45ff-9t5wc\" (UID: \"b9123582-9e2b-4038-9c2b-0a1d3a370736\") " pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.617564 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9123582-9e2b-4038-9c2b-0a1d3a370736-serving-cert\") pod \"controller-manager-57948b45ff-9t5wc\" (UID: \"b9123582-9e2b-4038-9c2b-0a1d3a370736\") " pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.617684 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5s78\" (UniqueName: \"kubernetes.io/projected/79c8f991-d140-4ba1-a91e-fa71c4539e64-kube-api-access-b5s78\") pod \"route-controller-manager-749b47ff6f-zhmrg\" (UID: 
\"79c8f991-d140-4ba1-a91e-fa71c4539e64\") " pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.617757 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79c8f991-d140-4ba1-a91e-fa71c4539e64-config\") pod \"route-controller-manager-749b47ff6f-zhmrg\" (UID: \"79c8f991-d140-4ba1-a91e-fa71c4539e64\") " pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.617821 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z94ws\" (UniqueName: \"kubernetes.io/projected/b9123582-9e2b-4038-9c2b-0a1d3a370736-kube-api-access-z94ws\") pod \"controller-manager-57948b45ff-9t5wc\" (UID: \"b9123582-9e2b-4038-9c2b-0a1d3a370736\") " pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.617865 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9123582-9e2b-4038-9c2b-0a1d3a370736-client-ca\") pod \"controller-manager-57948b45ff-9t5wc\" (UID: \"b9123582-9e2b-4038-9c2b-0a1d3a370736\") " pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.618002 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9123582-9e2b-4038-9c2b-0a1d3a370736-config\") pod \"controller-manager-57948b45ff-9t5wc\" (UID: \"b9123582-9e2b-4038-9c2b-0a1d3a370736\") " pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.618042 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/79c8f991-d140-4ba1-a91e-fa71c4539e64-client-ca\") pod \"route-controller-manager-749b47ff6f-zhmrg\" (UID: \"79c8f991-d140-4ba1-a91e-fa71c4539e64\") " pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.619527 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9123582-9e2b-4038-9c2b-0a1d3a370736-proxy-ca-bundles\") pod \"controller-manager-57948b45ff-9t5wc\" (UID: \"b9123582-9e2b-4038-9c2b-0a1d3a370736\") " pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.619615 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79c8f991-d140-4ba1-a91e-fa71c4539e64-client-ca\") pod \"route-controller-manager-749b47ff6f-zhmrg\" (UID: \"79c8f991-d140-4ba1-a91e-fa71c4539e64\") " pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.620689 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9123582-9e2b-4038-9c2b-0a1d3a370736-client-ca\") pod \"controller-manager-57948b45ff-9t5wc\" (UID: \"b9123582-9e2b-4038-9c2b-0a1d3a370736\") " pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.620978 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79c8f991-d140-4ba1-a91e-fa71c4539e64-config\") pod \"route-controller-manager-749b47ff6f-zhmrg\" (UID: \"79c8f991-d140-4ba1-a91e-fa71c4539e64\") " pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.621081 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9123582-9e2b-4038-9c2b-0a1d3a370736-config\") pod \"controller-manager-57948b45ff-9t5wc\" (UID: \"b9123582-9e2b-4038-9c2b-0a1d3a370736\") " pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.626475 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79c8f991-d140-4ba1-a91e-fa71c4539e64-serving-cert\") pod \"route-controller-manager-749b47ff6f-zhmrg\" (UID: \"79c8f991-d140-4ba1-a91e-fa71c4539e64\") " pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.637935 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9123582-9e2b-4038-9c2b-0a1d3a370736-serving-cert\") pod \"controller-manager-57948b45ff-9t5wc\" (UID: \"b9123582-9e2b-4038-9c2b-0a1d3a370736\") " pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.648713 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5s78\" (UniqueName: \"kubernetes.io/projected/79c8f991-d140-4ba1-a91e-fa71c4539e64-kube-api-access-b5s78\") pod \"route-controller-manager-749b47ff6f-zhmrg\" (UID: \"79c8f991-d140-4ba1-a91e-fa71c4539e64\") " pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.649070 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z94ws\" (UniqueName: \"kubernetes.io/projected/b9123582-9e2b-4038-9c2b-0a1d3a370736-kube-api-access-z94ws\") pod \"controller-manager-57948b45ff-9t5wc\" (UID: \"b9123582-9e2b-4038-9c2b-0a1d3a370736\") " 
pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.665250 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.874805 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57948b45ff-9t5wc"] Oct 08 06:46:45 crc kubenswrapper[4958]: I1008 06:46:45.948489 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:46 crc kubenswrapper[4958]: I1008 06:46:46.144566 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" event={"ID":"b9123582-9e2b-4038-9c2b-0a1d3a370736","Type":"ContainerStarted","Data":"e04b26a2c739226d14745927ad89f003f20be60ec74685f3633e26996dded51c"} Oct 08 06:46:46 crc kubenswrapper[4958]: I1008 06:46:46.144627 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" event={"ID":"b9123582-9e2b-4038-9c2b-0a1d3a370736","Type":"ContainerStarted","Data":"46f97352be942f6354bdf3ebb901d63c0a449eed56753cb324a52f2a60d6eed6"} Oct 08 06:46:46 crc kubenswrapper[4958]: I1008 06:46:46.144864 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:46 crc kubenswrapper[4958]: I1008 06:46:46.156646 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" Oct 08 06:46:46 crc kubenswrapper[4958]: W1008 06:46:46.185222 4958 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79c8f991_d140_4ba1_a91e_fa71c4539e64.slice/crio-2ce664d2bb8cd2d14efd11902433a81e93359ac1eb996f24362dcc8dbbe46143 WatchSource:0}: Error finding container 2ce664d2bb8cd2d14efd11902433a81e93359ac1eb996f24362dcc8dbbe46143: Status 404 returned error can't find the container with id 2ce664d2bb8cd2d14efd11902433a81e93359ac1eb996f24362dcc8dbbe46143 Oct 08 06:46:46 crc kubenswrapper[4958]: I1008 06:46:46.185458 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg"] Oct 08 06:46:46 crc kubenswrapper[4958]: I1008 06:46:46.186266 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57948b45ff-9t5wc" podStartSLOduration=3.186245607 podStartE2EDuration="3.186245607s" podCreationTimestamp="2025-10-08 06:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:46:46.168610611 +0000 UTC m=+749.298303212" watchObservedRunningTime="2025-10-08 06:46:46.186245607 +0000 UTC m=+749.315938208" Oct 08 06:46:47 crc kubenswrapper[4958]: I1008 06:46:47.156863 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" event={"ID":"79c8f991-d140-4ba1-a91e-fa71c4539e64","Type":"ContainerStarted","Data":"db75b433cc6e6aaaa356d2cb5e35699638d744c6f3aaf98e4909ec3a7071cd24"} Oct 08 06:46:47 crc kubenswrapper[4958]: I1008 06:46:47.157928 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" event={"ID":"79c8f991-d140-4ba1-a91e-fa71c4539e64","Type":"ContainerStarted","Data":"2ce664d2bb8cd2d14efd11902433a81e93359ac1eb996f24362dcc8dbbe46143"} Oct 08 06:46:47 crc kubenswrapper[4958]: I1008 06:46:47.182566 4958 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" podStartSLOduration=4.182539558 podStartE2EDuration="4.182539558s" podCreationTimestamp="2025-10-08 06:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:46:47.180615877 +0000 UTC m=+750.310308518" watchObservedRunningTime="2025-10-08 06:46:47.182539558 +0000 UTC m=+750.312232159" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.163654 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.170384 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-749b47ff6f-zhmrg" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.488331 4958 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.575460 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-ft9jc"] Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.576900 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-ft9jc" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.589919 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-bntms" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.602706 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-ft9jc"] Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.607168 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj"] Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.607804 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.611452 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.630052 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-t28hq"] Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.630689 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.640258 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj"] Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.734779 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n"] Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.735376 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.736787 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.737611 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-w4trf" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.738243 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.766019 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf4g4\" (UniqueName: \"kubernetes.io/projected/b2433751-371a-4030-94ff-aff641121a0a-kube-api-access-qf4g4\") pod \"nmstate-console-plugin-6b874cbd85-f4z6n\" (UID: \"b2433751-371a-4030-94ff-aff641121a0a\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.766116 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62c29\" (UniqueName: \"kubernetes.io/projected/87cfba69-24de-4e63-8132-80091c6cdd43-kube-api-access-62c29\") pod \"nmstate-handler-t28hq\" (UID: \"87cfba69-24de-4e63-8132-80091c6cdd43\") " pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.766136 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d38d6806-1278-4fa5-9a83-2a8adae79a2c-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-kxhxj\" (UID: \"d38d6806-1278-4fa5-9a83-2a8adae79a2c\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.766151 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlvr7\" (UniqueName: \"kubernetes.io/projected/d38d6806-1278-4fa5-9a83-2a8adae79a2c-kube-api-access-hlvr7\") pod \"nmstate-webhook-6cdbc54649-kxhxj\" (UID: \"d38d6806-1278-4fa5-9a83-2a8adae79a2c\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.766168 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/87cfba69-24de-4e63-8132-80091c6cdd43-nmstate-lock\") pod \"nmstate-handler-t28hq\" (UID: \"87cfba69-24de-4e63-8132-80091c6cdd43\") " pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.766186 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2433751-371a-4030-94ff-aff641121a0a-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-f4z6n\" (UID: \"b2433751-371a-4030-94ff-aff641121a0a\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.766199 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b2433751-371a-4030-94ff-aff641121a0a-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-f4z6n\" (UID: \"b2433751-371a-4030-94ff-aff641121a0a\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.766220 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/87cfba69-24de-4e63-8132-80091c6cdd43-ovs-socket\") pod \"nmstate-handler-t28hq\" (UID: \"87cfba69-24de-4e63-8132-80091c6cdd43\") " 
pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.766237 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn2tg\" (UniqueName: \"kubernetes.io/projected/0e3927d1-6c4c-4758-ba2d-455aea8dd388-kube-api-access-vn2tg\") pod \"nmstate-metrics-fdff9cb8d-ft9jc\" (UID: \"0e3927d1-6c4c-4758-ba2d-455aea8dd388\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-ft9jc" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.766281 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/87cfba69-24de-4e63-8132-80091c6cdd43-dbus-socket\") pod \"nmstate-handler-t28hq\" (UID: \"87cfba69-24de-4e63-8132-80091c6cdd43\") " pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.775695 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n"] Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.867190 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62c29\" (UniqueName: \"kubernetes.io/projected/87cfba69-24de-4e63-8132-80091c6cdd43-kube-api-access-62c29\") pod \"nmstate-handler-t28hq\" (UID: \"87cfba69-24de-4e63-8132-80091c6cdd43\") " pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.867229 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d38d6806-1278-4fa5-9a83-2a8adae79a2c-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-kxhxj\" (UID: \"d38d6806-1278-4fa5-9a83-2a8adae79a2c\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.867244 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hlvr7\" (UniqueName: \"kubernetes.io/projected/d38d6806-1278-4fa5-9a83-2a8adae79a2c-kube-api-access-hlvr7\") pod \"nmstate-webhook-6cdbc54649-kxhxj\" (UID: \"d38d6806-1278-4fa5-9a83-2a8adae79a2c\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.867262 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/87cfba69-24de-4e63-8132-80091c6cdd43-nmstate-lock\") pod \"nmstate-handler-t28hq\" (UID: \"87cfba69-24de-4e63-8132-80091c6cdd43\") " pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.867279 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b2433751-371a-4030-94ff-aff641121a0a-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-f4z6n\" (UID: \"b2433751-371a-4030-94ff-aff641121a0a\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.867293 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2433751-371a-4030-94ff-aff641121a0a-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-f4z6n\" (UID: \"b2433751-371a-4030-94ff-aff641121a0a\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.867310 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/87cfba69-24de-4e63-8132-80091c6cdd43-ovs-socket\") pod \"nmstate-handler-t28hq\" (UID: \"87cfba69-24de-4e63-8132-80091c6cdd43\") " pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.867326 4958 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-vn2tg\" (UniqueName: \"kubernetes.io/projected/0e3927d1-6c4c-4758-ba2d-455aea8dd388-kube-api-access-vn2tg\") pod \"nmstate-metrics-fdff9cb8d-ft9jc\" (UID: \"0e3927d1-6c4c-4758-ba2d-455aea8dd388\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-ft9jc" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.867361 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/87cfba69-24de-4e63-8132-80091c6cdd43-dbus-socket\") pod \"nmstate-handler-t28hq\" (UID: \"87cfba69-24de-4e63-8132-80091c6cdd43\") " pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.867383 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf4g4\" (UniqueName: \"kubernetes.io/projected/b2433751-371a-4030-94ff-aff641121a0a-kube-api-access-qf4g4\") pod \"nmstate-console-plugin-6b874cbd85-f4z6n\" (UID: \"b2433751-371a-4030-94ff-aff641121a0a\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n" Oct 08 06:46:48 crc kubenswrapper[4958]: E1008 06:46:48.867770 4958 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Oct 08 06:46:48 crc kubenswrapper[4958]: E1008 06:46:48.867807 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d38d6806-1278-4fa5-9a83-2a8adae79a2c-tls-key-pair podName:d38d6806-1278-4fa5-9a83-2a8adae79a2c nodeName:}" failed. No retries permitted until 2025-10-08 06:46:49.36779333 +0000 UTC m=+752.497485931 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/d38d6806-1278-4fa5-9a83-2a8adae79a2c-tls-key-pair") pod "nmstate-webhook-6cdbc54649-kxhxj" (UID: "d38d6806-1278-4fa5-9a83-2a8adae79a2c") : secret "openshift-nmstate-webhook" not found Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.868043 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/87cfba69-24de-4e63-8132-80091c6cdd43-nmstate-lock\") pod \"nmstate-handler-t28hq\" (UID: \"87cfba69-24de-4e63-8132-80091c6cdd43\") " pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:48 crc kubenswrapper[4958]: E1008 06:46:48.868352 4958 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Oct 08 06:46:48 crc kubenswrapper[4958]: E1008 06:46:48.868387 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2433751-371a-4030-94ff-aff641121a0a-plugin-serving-cert podName:b2433751-371a-4030-94ff-aff641121a0a nodeName:}" failed. No retries permitted until 2025-10-08 06:46:49.368377506 +0000 UTC m=+752.498070107 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/b2433751-371a-4030-94ff-aff641121a0a-plugin-serving-cert") pod "nmstate-console-plugin-6b874cbd85-f4z6n" (UID: "b2433751-371a-4030-94ff-aff641121a0a") : secret "plugin-serving-cert" not found Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.868411 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/87cfba69-24de-4e63-8132-80091c6cdd43-ovs-socket\") pod \"nmstate-handler-t28hq\" (UID: \"87cfba69-24de-4e63-8132-80091c6cdd43\") " pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.868579 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/87cfba69-24de-4e63-8132-80091c6cdd43-dbus-socket\") pod \"nmstate-handler-t28hq\" (UID: \"87cfba69-24de-4e63-8132-80091c6cdd43\") " pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.868713 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b2433751-371a-4030-94ff-aff641121a0a-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-f4z6n\" (UID: \"b2433751-371a-4030-94ff-aff641121a0a\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.885250 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf4g4\" (UniqueName: \"kubernetes.io/projected/b2433751-371a-4030-94ff-aff641121a0a-kube-api-access-qf4g4\") pod \"nmstate-console-plugin-6b874cbd85-f4z6n\" (UID: \"b2433751-371a-4030-94ff-aff641121a0a\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.888658 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-62c29\" (UniqueName: \"kubernetes.io/projected/87cfba69-24de-4e63-8132-80091c6cdd43-kube-api-access-62c29\") pod \"nmstate-handler-t28hq\" (UID: \"87cfba69-24de-4e63-8132-80091c6cdd43\") " pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.889653 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlvr7\" (UniqueName: \"kubernetes.io/projected/d38d6806-1278-4fa5-9a83-2a8adae79a2c-kube-api-access-hlvr7\") pod \"nmstate-webhook-6cdbc54649-kxhxj\" (UID: \"d38d6806-1278-4fa5-9a83-2a8adae79a2c\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.894532 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn2tg\" (UniqueName: \"kubernetes.io/projected/0e3927d1-6c4c-4758-ba2d-455aea8dd388-kube-api-access-vn2tg\") pod \"nmstate-metrics-fdff9cb8d-ft9jc\" (UID: \"0e3927d1-6c4c-4758-ba2d-455aea8dd388\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-ft9jc" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.901332 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-ft9jc" Oct 08 06:46:48 crc kubenswrapper[4958]: I1008 06:46:48.953125 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.169564 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-t28hq" event={"ID":"87cfba69-24de-4e63-8132-80091c6cdd43","Type":"ContainerStarted","Data":"978da185641f3589a8efc5a6f0e6f0b72ec40cbe744d42f40d2b230bcffe8624"} Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.331984 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-ft9jc"] Oct 08 06:46:49 crc kubenswrapper[4958]: W1008 06:46:49.338024 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e3927d1_6c4c_4758_ba2d_455aea8dd388.slice/crio-3e35e50acb40842365b669c8fde93e70cfda03178b33d7b1390cb8dad581d3dd WatchSource:0}: Error finding container 3e35e50acb40842365b669c8fde93e70cfda03178b33d7b1390cb8dad581d3dd: Status 404 returned error can't find the container with id 3e35e50acb40842365b669c8fde93e70cfda03178b33d7b1390cb8dad581d3dd Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.374829 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d38d6806-1278-4fa5-9a83-2a8adae79a2c-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-kxhxj\" (UID: \"d38d6806-1278-4fa5-9a83-2a8adae79a2c\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.374887 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2433751-371a-4030-94ff-aff641121a0a-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-f4z6n\" (UID: \"b2433751-371a-4030-94ff-aff641121a0a\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.381079 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b2433751-371a-4030-94ff-aff641121a0a-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-f4z6n\" (UID: \"b2433751-371a-4030-94ff-aff641121a0a\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.381331 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d38d6806-1278-4fa5-9a83-2a8adae79a2c-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-kxhxj\" (UID: \"d38d6806-1278-4fa5-9a83-2a8adae79a2c\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.433566 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7bb7859b9b-7pn27"] Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.434624 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.451353 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bb7859b9b-7pn27"] Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.519116 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.577087 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-console-serving-cert\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.577142 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-trusted-ca-bundle\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.577816 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-service-ca\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.577903 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-console-oauth-config\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.578054 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2brx\" (UniqueName: 
\"kubernetes.io/projected/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-kube-api-access-h2brx\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.578102 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-console-config\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.578199 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-oauth-serving-cert\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.647368 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.679186 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-console-serving-cert\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.679271 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-trusted-ca-bundle\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.679310 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-service-ca\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.679380 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-console-oauth-config\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.679452 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2brx\" (UniqueName: \"kubernetes.io/projected/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-kube-api-access-h2brx\") pod \"console-7bb7859b9b-7pn27\" (UID: 
\"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.679484 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-console-config\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.679533 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-oauth-serving-cert\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.680930 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-oauth-serving-cert\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.683712 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-trusted-ca-bundle\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.686269 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-console-oauth-config\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " 
pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.687148 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-console-config\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.688614 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-service-ca\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.710916 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-console-serving-cert\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.715634 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2brx\" (UniqueName: \"kubernetes.io/projected/2d005a3e-6c99-44fa-b75d-cbe66a9f4190-kube-api-access-h2brx\") pod \"console-7bb7859b9b-7pn27\" (UID: \"2d005a3e-6c99-44fa-b75d-cbe66a9f4190\") " pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:49 crc kubenswrapper[4958]: I1008 06:46:49.785624 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:50 crc kubenswrapper[4958]: I1008 06:46:50.002738 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj"] Oct 08 06:46:50 crc kubenswrapper[4958]: I1008 06:46:50.124768 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n"] Oct 08 06:46:50 crc kubenswrapper[4958]: W1008 06:46:50.133301 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2433751_371a_4030_94ff_aff641121a0a.slice/crio-82cff0a7e052a4c41e5091d26c552a9be2df8bf320a22dc5587456076870abea WatchSource:0}: Error finding container 82cff0a7e052a4c41e5091d26c552a9be2df8bf320a22dc5587456076870abea: Status 404 returned error can't find the container with id 82cff0a7e052a4c41e5091d26c552a9be2df8bf320a22dc5587456076870abea Oct 08 06:46:50 crc kubenswrapper[4958]: I1008 06:46:50.179417 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n" event={"ID":"b2433751-371a-4030-94ff-aff641121a0a","Type":"ContainerStarted","Data":"82cff0a7e052a4c41e5091d26c552a9be2df8bf320a22dc5587456076870abea"} Oct 08 06:46:50 crc kubenswrapper[4958]: I1008 06:46:50.180707 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj" event={"ID":"d38d6806-1278-4fa5-9a83-2a8adae79a2c","Type":"ContainerStarted","Data":"f948559f143eae5e54072582e767be0e730639db59e7214aa45689726d9fea76"} Oct 08 06:46:50 crc kubenswrapper[4958]: I1008 06:46:50.183625 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-ft9jc" event={"ID":"0e3927d1-6c4c-4758-ba2d-455aea8dd388","Type":"ContainerStarted","Data":"3e35e50acb40842365b669c8fde93e70cfda03178b33d7b1390cb8dad581d3dd"} Oct 08 06:46:50 crc kubenswrapper[4958]: 
I1008 06:46:50.203328 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bb7859b9b-7pn27"] Oct 08 06:46:50 crc kubenswrapper[4958]: W1008 06:46:50.214686 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d005a3e_6c99_44fa_b75d_cbe66a9f4190.slice/crio-b899d68206a4c902242293d2863f38a29744f987b44ed736a50432f01e4bb96a WatchSource:0}: Error finding container b899d68206a4c902242293d2863f38a29744f987b44ed736a50432f01e4bb96a: Status 404 returned error can't find the container with id b899d68206a4c902242293d2863f38a29744f987b44ed736a50432f01e4bb96a Oct 08 06:46:51 crc kubenswrapper[4958]: I1008 06:46:51.190510 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb7859b9b-7pn27" event={"ID":"2d005a3e-6c99-44fa-b75d-cbe66a9f4190","Type":"ContainerStarted","Data":"f9087f4833b6ce238291aef6ee5ea9d829dfc419868433a03584de78cc72a774"} Oct 08 06:46:51 crc kubenswrapper[4958]: I1008 06:46:51.191618 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb7859b9b-7pn27" event={"ID":"2d005a3e-6c99-44fa-b75d-cbe66a9f4190","Type":"ContainerStarted","Data":"b899d68206a4c902242293d2863f38a29744f987b44ed736a50432f01e4bb96a"} Oct 08 06:46:51 crc kubenswrapper[4958]: I1008 06:46:51.215480 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7bb7859b9b-7pn27" podStartSLOduration=2.215328259 podStartE2EDuration="2.215328259s" podCreationTimestamp="2025-10-08 06:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:46:51.20744368 +0000 UTC m=+754.337136281" watchObservedRunningTime="2025-10-08 06:46:51.215328259 +0000 UTC m=+754.345020860" Oct 08 06:46:52 crc kubenswrapper[4958]: I1008 06:46:52.199016 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj" event={"ID":"d38d6806-1278-4fa5-9a83-2a8adae79a2c","Type":"ContainerStarted","Data":"ba24e55604c9bb50a573ba5b4959c30aa806a57333ea6709d3b65c18da21cc98"} Oct 08 06:46:52 crc kubenswrapper[4958]: I1008 06:46:52.199499 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj" Oct 08 06:46:52 crc kubenswrapper[4958]: I1008 06:46:52.201999 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-t28hq" event={"ID":"87cfba69-24de-4e63-8132-80091c6cdd43","Type":"ContainerStarted","Data":"cc18b272263fdadba8c3942adfde2e8f18129b343e48d94418128e7e3b5e1406"} Oct 08 06:46:52 crc kubenswrapper[4958]: I1008 06:46:52.202208 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:52 crc kubenswrapper[4958]: I1008 06:46:52.209133 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-ft9jc" event={"ID":"0e3927d1-6c4c-4758-ba2d-455aea8dd388","Type":"ContainerStarted","Data":"7bc78f5b7befc7bb71a85f47fbd1e0a84645b416bbaf961b17253967c01acb1c"} Oct 08 06:46:52 crc kubenswrapper[4958]: I1008 06:46:52.219177 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj" podStartSLOduration=2.756605974 podStartE2EDuration="4.219154079s" podCreationTimestamp="2025-10-08 06:46:48 +0000 UTC" firstStartedPulling="2025-10-08 06:46:50.036018764 +0000 UTC m=+753.165711365" lastFinishedPulling="2025-10-08 06:46:51.498566869 +0000 UTC m=+754.628259470" observedRunningTime="2025-10-08 06:46:52.215764339 +0000 UTC m=+755.345456960" watchObservedRunningTime="2025-10-08 06:46:52.219154079 +0000 UTC m=+755.348846680" Oct 08 06:46:52 crc kubenswrapper[4958]: I1008 06:46:52.235123 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-handler-t28hq" podStartSLOduration=1.7277038999999998 podStartE2EDuration="4.235096941s" podCreationTimestamp="2025-10-08 06:46:48 +0000 UTC" firstStartedPulling="2025-10-08 06:46:48.98673565 +0000 UTC m=+752.116428251" lastFinishedPulling="2025-10-08 06:46:51.494128691 +0000 UTC m=+754.623821292" observedRunningTime="2025-10-08 06:46:52.233274613 +0000 UTC m=+755.362967234" watchObservedRunningTime="2025-10-08 06:46:52.235096941 +0000 UTC m=+755.364789552" Oct 08 06:46:53 crc kubenswrapper[4958]: I1008 06:46:53.217079 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n" event={"ID":"b2433751-371a-4030-94ff-aff641121a0a","Type":"ContainerStarted","Data":"b98b8af1897a26eb30738b5b945d27d26390461fa93e733500093589bd382d5f"} Oct 08 06:46:53 crc kubenswrapper[4958]: I1008 06:46:53.232567 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-f4z6n" podStartSLOduration=2.7785661360000002 podStartE2EDuration="5.232549903s" podCreationTimestamp="2025-10-08 06:46:48 +0000 UTC" firstStartedPulling="2025-10-08 06:46:50.136257228 +0000 UTC m=+753.265949819" lastFinishedPulling="2025-10-08 06:46:52.590240985 +0000 UTC m=+755.719933586" observedRunningTime="2025-10-08 06:46:53.231613728 +0000 UTC m=+756.361306399" watchObservedRunningTime="2025-10-08 06:46:53.232549903 +0000 UTC m=+756.362242534" Oct 08 06:46:54 crc kubenswrapper[4958]: I1008 06:46:54.224752 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-ft9jc" event={"ID":"0e3927d1-6c4c-4758-ba2d-455aea8dd388","Type":"ContainerStarted","Data":"237111ae3191c964378019c23f42d2574d0ad198f06f69d6416978ed143478a2"} Oct 08 06:46:54 crc kubenswrapper[4958]: I1008 06:46:54.252088 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-ft9jc" 
podStartSLOduration=1.741127165 podStartE2EDuration="6.252058638s" podCreationTimestamp="2025-10-08 06:46:48 +0000 UTC" firstStartedPulling="2025-10-08 06:46:49.341698109 +0000 UTC m=+752.471390710" lastFinishedPulling="2025-10-08 06:46:53.852629572 +0000 UTC m=+756.982322183" observedRunningTime="2025-10-08 06:46:54.246902291 +0000 UTC m=+757.376594932" watchObservedRunningTime="2025-10-08 06:46:54.252058638 +0000 UTC m=+757.381751279" Oct 08 06:46:58 crc kubenswrapper[4958]: I1008 06:46:58.985869 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-t28hq" Oct 08 06:46:59 crc kubenswrapper[4958]: I1008 06:46:59.786943 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:59 crc kubenswrapper[4958]: I1008 06:46:59.787424 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:46:59 crc kubenswrapper[4958]: I1008 06:46:59.794247 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:47:00 crc kubenswrapper[4958]: I1008 06:47:00.276102 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7bb7859b9b-7pn27" Oct 08 06:47:00 crc kubenswrapper[4958]: I1008 06:47:00.355596 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-86rhc"] Oct 08 06:47:09 crc kubenswrapper[4958]: I1008 06:47:09.529017 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-kxhxj" Oct 08 06:47:17 crc kubenswrapper[4958]: I1008 06:47:17.540172 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dptdb"] Oct 08 06:47:17 crc kubenswrapper[4958]: I1008 06:47:17.542110 4958 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:17 crc kubenswrapper[4958]: I1008 06:47:17.545214 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dptdb"] Oct 08 06:47:17 crc kubenswrapper[4958]: I1008 06:47:17.640083 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb2a2cd-025a-4095-8248-399de8542a23-utilities\") pod \"community-operators-dptdb\" (UID: \"ffb2a2cd-025a-4095-8248-399de8542a23\") " pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:17 crc kubenswrapper[4958]: I1008 06:47:17.640244 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbsxh\" (UniqueName: \"kubernetes.io/projected/ffb2a2cd-025a-4095-8248-399de8542a23-kube-api-access-pbsxh\") pod \"community-operators-dptdb\" (UID: \"ffb2a2cd-025a-4095-8248-399de8542a23\") " pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:17 crc kubenswrapper[4958]: I1008 06:47:17.640313 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb2a2cd-025a-4095-8248-399de8542a23-catalog-content\") pod \"community-operators-dptdb\" (UID: \"ffb2a2cd-025a-4095-8248-399de8542a23\") " pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:17 crc kubenswrapper[4958]: I1008 06:47:17.741260 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb2a2cd-025a-4095-8248-399de8542a23-utilities\") pod \"community-operators-dptdb\" (UID: \"ffb2a2cd-025a-4095-8248-399de8542a23\") " pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:17 crc kubenswrapper[4958]: I1008 06:47:17.741532 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pbsxh\" (UniqueName: \"kubernetes.io/projected/ffb2a2cd-025a-4095-8248-399de8542a23-kube-api-access-pbsxh\") pod \"community-operators-dptdb\" (UID: \"ffb2a2cd-025a-4095-8248-399de8542a23\") " pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:17 crc kubenswrapper[4958]: I1008 06:47:17.741558 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb2a2cd-025a-4095-8248-399de8542a23-catalog-content\") pod \"community-operators-dptdb\" (UID: \"ffb2a2cd-025a-4095-8248-399de8542a23\") " pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:17 crc kubenswrapper[4958]: I1008 06:47:17.741829 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb2a2cd-025a-4095-8248-399de8542a23-catalog-content\") pod \"community-operators-dptdb\" (UID: \"ffb2a2cd-025a-4095-8248-399de8542a23\") " pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:17 crc kubenswrapper[4958]: I1008 06:47:17.741830 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb2a2cd-025a-4095-8248-399de8542a23-utilities\") pod \"community-operators-dptdb\" (UID: \"ffb2a2cd-025a-4095-8248-399de8542a23\") " pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:17 crc kubenswrapper[4958]: I1008 06:47:17.760904 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbsxh\" (UniqueName: \"kubernetes.io/projected/ffb2a2cd-025a-4095-8248-399de8542a23-kube-api-access-pbsxh\") pod \"community-operators-dptdb\" (UID: \"ffb2a2cd-025a-4095-8248-399de8542a23\") " pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:17 crc kubenswrapper[4958]: I1008 06:47:17.864872 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:18 crc kubenswrapper[4958]: I1008 06:47:18.504357 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dptdb"] Oct 08 06:47:19 crc kubenswrapper[4958]: I1008 06:47:19.434848 4958 generic.go:334] "Generic (PLEG): container finished" podID="ffb2a2cd-025a-4095-8248-399de8542a23" containerID="46e715d4d637feba5b407a8448fdca4bc5b693516233adde4a09a6db19d35614" exitCode=0 Oct 08 06:47:19 crc kubenswrapper[4958]: I1008 06:47:19.435195 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dptdb" event={"ID":"ffb2a2cd-025a-4095-8248-399de8542a23","Type":"ContainerDied","Data":"46e715d4d637feba5b407a8448fdca4bc5b693516233adde4a09a6db19d35614"} Oct 08 06:47:19 crc kubenswrapper[4958]: I1008 06:47:19.435360 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dptdb" event={"ID":"ffb2a2cd-025a-4095-8248-399de8542a23","Type":"ContainerStarted","Data":"2f3a87ec013a9bf1903c271e7e22476e14d8d4b241040ee83310a1508527b200"} Oct 08 06:47:20 crc kubenswrapper[4958]: I1008 06:47:20.447442 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dptdb" event={"ID":"ffb2a2cd-025a-4095-8248-399de8542a23","Type":"ContainerStarted","Data":"8a421211702dfd2dc0f498f88737a13eeeda3e3fddf9f3b6b947d0448714f174"} Oct 08 06:47:21 crc kubenswrapper[4958]: I1008 06:47:21.457544 4958 generic.go:334] "Generic (PLEG): container finished" podID="ffb2a2cd-025a-4095-8248-399de8542a23" containerID="8a421211702dfd2dc0f498f88737a13eeeda3e3fddf9f3b6b947d0448714f174" exitCode=0 Oct 08 06:47:21 crc kubenswrapper[4958]: I1008 06:47:21.457645 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dptdb" 
event={"ID":"ffb2a2cd-025a-4095-8248-399de8542a23","Type":"ContainerDied","Data":"8a421211702dfd2dc0f498f88737a13eeeda3e3fddf9f3b6b947d0448714f174"} Oct 08 06:47:22 crc kubenswrapper[4958]: I1008 06:47:22.468793 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dptdb" event={"ID":"ffb2a2cd-025a-4095-8248-399de8542a23","Type":"ContainerStarted","Data":"ef5326f32e3ceef7c166ad17b0e500f8247251d109f8d810aaef9499a709de05"} Oct 08 06:47:22 crc kubenswrapper[4958]: I1008 06:47:22.495868 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dptdb" podStartSLOduration=3.104406311 podStartE2EDuration="5.495845023s" podCreationTimestamp="2025-10-08 06:47:17 +0000 UTC" firstStartedPulling="2025-10-08 06:47:19.437489552 +0000 UTC m=+782.567182163" lastFinishedPulling="2025-10-08 06:47:21.828928274 +0000 UTC m=+784.958620875" observedRunningTime="2025-10-08 06:47:22.491090045 +0000 UTC m=+785.620782666" watchObservedRunningTime="2025-10-08 06:47:22.495845023 +0000 UTC m=+785.625537634" Oct 08 06:47:25 crc kubenswrapper[4958]: I1008 06:47:25.177892 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj"] Oct 08 06:47:25 crc kubenswrapper[4958]: I1008 06:47:25.180799 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" Oct 08 06:47:25 crc kubenswrapper[4958]: I1008 06:47:25.183775 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 08 06:47:25 crc kubenswrapper[4958]: I1008 06:47:25.190472 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj"] Oct 08 06:47:25 crc kubenswrapper[4958]: I1008 06:47:25.342399 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d06f1d78-b394-4063-9af1-a53ec1eaae2b-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj\" (UID: \"d06f1d78-b394-4063-9af1-a53ec1eaae2b\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" Oct 08 06:47:25 crc kubenswrapper[4958]: I1008 06:47:25.342480 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d06f1d78-b394-4063-9af1-a53ec1eaae2b-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj\" (UID: \"d06f1d78-b394-4063-9af1-a53ec1eaae2b\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" Oct 08 06:47:25 crc kubenswrapper[4958]: I1008 06:47:25.342592 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbq92\" (UniqueName: \"kubernetes.io/projected/d06f1d78-b394-4063-9af1-a53ec1eaae2b-kube-api-access-pbq92\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj\" (UID: \"d06f1d78-b394-4063-9af1-a53ec1eaae2b\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" Oct 08 06:47:25 crc kubenswrapper[4958]: 
I1008 06:47:25.419366 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-86rhc" podUID="31a3bc7e-1303-46e9-a3bd-b7e10c6884bd" containerName="console" containerID="cri-o://86a9dec21d1ed867d24136ea4b96c1ba1138bb39f3736c54bc8b0df6ac3bc126" gracePeriod=15 Oct 08 06:47:25 crc kubenswrapper[4958]: I1008 06:47:25.444354 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d06f1d78-b394-4063-9af1-a53ec1eaae2b-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj\" (UID: \"d06f1d78-b394-4063-9af1-a53ec1eaae2b\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" Oct 08 06:47:25 crc kubenswrapper[4958]: I1008 06:47:25.444585 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d06f1d78-b394-4063-9af1-a53ec1eaae2b-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj\" (UID: \"d06f1d78-b394-4063-9af1-a53ec1eaae2b\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" Oct 08 06:47:25 crc kubenswrapper[4958]: I1008 06:47:25.444801 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbq92\" (UniqueName: \"kubernetes.io/projected/d06f1d78-b394-4063-9af1-a53ec1eaae2b-kube-api-access-pbq92\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj\" (UID: \"d06f1d78-b394-4063-9af1-a53ec1eaae2b\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" Oct 08 06:47:25 crc kubenswrapper[4958]: I1008 06:47:25.445148 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d06f1d78-b394-4063-9af1-a53ec1eaae2b-bundle\") pod 
\"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj\" (UID: \"d06f1d78-b394-4063-9af1-a53ec1eaae2b\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" Oct 08 06:47:25 crc kubenswrapper[4958]: I1008 06:47:25.445264 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d06f1d78-b394-4063-9af1-a53ec1eaae2b-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj\" (UID: \"d06f1d78-b394-4063-9af1-a53ec1eaae2b\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" Oct 08 06:47:25 crc kubenswrapper[4958]: I1008 06:47:25.479878 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbq92\" (UniqueName: \"kubernetes.io/projected/d06f1d78-b394-4063-9af1-a53ec1eaae2b-kube-api-access-pbq92\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj\" (UID: \"d06f1d78-b394-4063-9af1-a53ec1eaae2b\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" Oct 08 06:47:25 crc kubenswrapper[4958]: I1008 06:47:25.500693 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" Oct 08 06:47:25 crc kubenswrapper[4958]: I1008 06:47:25.964469 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj"] Oct 08 06:47:25 crc kubenswrapper[4958]: W1008 06:47:25.982593 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd06f1d78_b394_4063_9af1_a53ec1eaae2b.slice/crio-c22fc64ce20ea9e9588d2bb53a98da8ef07777c8174fb6948ef669fd1c85f69e WatchSource:0}: Error finding container c22fc64ce20ea9e9588d2bb53a98da8ef07777c8174fb6948ef669fd1c85f69e: Status 404 returned error can't find the container with id c22fc64ce20ea9e9588d2bb53a98da8ef07777c8174fb6948ef669fd1c85f69e Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.133766 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-86rhc_31a3bc7e-1303-46e9-a3bd-b7e10c6884bd/console/0.log" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.133894 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.255899 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-oauth-serving-cert\") pod \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.256042 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-trusted-ca-bundle\") pod \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.256135 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-oauth-config\") pod \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.256194 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-serving-cert\") pod \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.256238 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dks2m\" (UniqueName: \"kubernetes.io/projected/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-kube-api-access-dks2m\") pod \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.256338 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-config\") pod \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.256383 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-service-ca\") pod \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\" (UID: \"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd\") " Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.257564 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "31a3bc7e-1303-46e9-a3bd-b7e10c6884bd" (UID: "31a3bc7e-1303-46e9-a3bd-b7e10c6884bd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.257585 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-service-ca" (OuterVolumeSpecName: "service-ca") pod "31a3bc7e-1303-46e9-a3bd-b7e10c6884bd" (UID: "31a3bc7e-1303-46e9-a3bd-b7e10c6884bd"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.257922 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-config" (OuterVolumeSpecName: "console-config") pod "31a3bc7e-1303-46e9-a3bd-b7e10c6884bd" (UID: "31a3bc7e-1303-46e9-a3bd-b7e10c6884bd"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.258380 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "31a3bc7e-1303-46e9-a3bd-b7e10c6884bd" (UID: "31a3bc7e-1303-46e9-a3bd-b7e10c6884bd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.262033 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-kube-api-access-dks2m" (OuterVolumeSpecName: "kube-api-access-dks2m") pod "31a3bc7e-1303-46e9-a3bd-b7e10c6884bd" (UID: "31a3bc7e-1303-46e9-a3bd-b7e10c6884bd"). InnerVolumeSpecName "kube-api-access-dks2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.262455 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "31a3bc7e-1303-46e9-a3bd-b7e10c6884bd" (UID: "31a3bc7e-1303-46e9-a3bd-b7e10c6884bd"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.262816 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "31a3bc7e-1303-46e9-a3bd-b7e10c6884bd" (UID: "31a3bc7e-1303-46e9-a3bd-b7e10c6884bd"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.360504 4958 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.360612 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.360640 4958 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.360715 4958 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.360740 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dks2m\" (UniqueName: \"kubernetes.io/projected/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-kube-api-access-dks2m\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.360818 4958 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-console-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.360841 4958 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:26 crc 
kubenswrapper[4958]: I1008 06:47:26.507341 4958 generic.go:334] "Generic (PLEG): container finished" podID="d06f1d78-b394-4063-9af1-a53ec1eaae2b" containerID="07c3bf4673267a0bc09e4c2fc4ab9e8e200804d0e62f916e9b9859f05a45b169" exitCode=0 Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.507407 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" event={"ID":"d06f1d78-b394-4063-9af1-a53ec1eaae2b","Type":"ContainerDied","Data":"07c3bf4673267a0bc09e4c2fc4ab9e8e200804d0e62f916e9b9859f05a45b169"} Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.507486 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" event={"ID":"d06f1d78-b394-4063-9af1-a53ec1eaae2b","Type":"ContainerStarted","Data":"c22fc64ce20ea9e9588d2bb53a98da8ef07777c8174fb6948ef669fd1c85f69e"} Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.516157 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-86rhc_31a3bc7e-1303-46e9-a3bd-b7e10c6884bd/console/0.log" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.516242 4958 generic.go:334] "Generic (PLEG): container finished" podID="31a3bc7e-1303-46e9-a3bd-b7e10c6884bd" containerID="86a9dec21d1ed867d24136ea4b96c1ba1138bb39f3736c54bc8b0df6ac3bc126" exitCode=2 Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.516413 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-86rhc" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.516290 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-86rhc" event={"ID":"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd","Type":"ContainerDied","Data":"86a9dec21d1ed867d24136ea4b96c1ba1138bb39f3736c54bc8b0df6ac3bc126"} Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.524405 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-86rhc" event={"ID":"31a3bc7e-1303-46e9-a3bd-b7e10c6884bd","Type":"ContainerDied","Data":"ff23ea4fb74c0059ef3714beb0d29d64d1a939ab6724b15014484c5b137f33ca"} Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.524450 4958 scope.go:117] "RemoveContainer" containerID="86a9dec21d1ed867d24136ea4b96c1ba1138bb39f3736c54bc8b0df6ac3bc126" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.578776 4958 scope.go:117] "RemoveContainer" containerID="86a9dec21d1ed867d24136ea4b96c1ba1138bb39f3736c54bc8b0df6ac3bc126" Oct 08 06:47:26 crc kubenswrapper[4958]: E1008 06:47:26.579399 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86a9dec21d1ed867d24136ea4b96c1ba1138bb39f3736c54bc8b0df6ac3bc126\": container with ID starting with 86a9dec21d1ed867d24136ea4b96c1ba1138bb39f3736c54bc8b0df6ac3bc126 not found: ID does not exist" containerID="86a9dec21d1ed867d24136ea4b96c1ba1138bb39f3736c54bc8b0df6ac3bc126" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.579456 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86a9dec21d1ed867d24136ea4b96c1ba1138bb39f3736c54bc8b0df6ac3bc126"} err="failed to get container status \"86a9dec21d1ed867d24136ea4b96c1ba1138bb39f3736c54bc8b0df6ac3bc126\": rpc error: code = NotFound desc = could not find container \"86a9dec21d1ed867d24136ea4b96c1ba1138bb39f3736c54bc8b0df6ac3bc126\": 
container with ID starting with 86a9dec21d1ed867d24136ea4b96c1ba1138bb39f3736c54bc8b0df6ac3bc126 not found: ID does not exist" Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.593187 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-86rhc"] Oct 08 06:47:26 crc kubenswrapper[4958]: I1008 06:47:26.598899 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-86rhc"] Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.531409 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-krsvk"] Oct 08 06:47:27 crc kubenswrapper[4958]: E1008 06:47:27.531773 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a3bc7e-1303-46e9-a3bd-b7e10c6884bd" containerName="console" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.531794 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a3bc7e-1303-46e9-a3bd-b7e10c6884bd" containerName="console" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.532004 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="31a3bc7e-1303-46e9-a3bd-b7e10c6884bd" containerName="console" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.533340 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.545331 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-krsvk"] Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.614117 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31a3bc7e-1303-46e9-a3bd-b7e10c6884bd" path="/var/lib/kubelet/pods/31a3bc7e-1303-46e9-a3bd-b7e10c6884bd/volumes" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.689029 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac57386-fd4c-4adc-a8a7-d948e8fa204b-catalog-content\") pod \"redhat-operators-krsvk\" (UID: \"aac57386-fd4c-4adc-a8a7-d948e8fa204b\") " pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.689088 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glcvp\" (UniqueName: \"kubernetes.io/projected/aac57386-fd4c-4adc-a8a7-d948e8fa204b-kube-api-access-glcvp\") pod \"redhat-operators-krsvk\" (UID: \"aac57386-fd4c-4adc-a8a7-d948e8fa204b\") " pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.689125 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac57386-fd4c-4adc-a8a7-d948e8fa204b-utilities\") pod \"redhat-operators-krsvk\" (UID: \"aac57386-fd4c-4adc-a8a7-d948e8fa204b\") " pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.791080 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac57386-fd4c-4adc-a8a7-d948e8fa204b-utilities\") pod 
\"redhat-operators-krsvk\" (UID: \"aac57386-fd4c-4adc-a8a7-d948e8fa204b\") " pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.791322 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac57386-fd4c-4adc-a8a7-d948e8fa204b-catalog-content\") pod \"redhat-operators-krsvk\" (UID: \"aac57386-fd4c-4adc-a8a7-d948e8fa204b\") " pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.791379 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glcvp\" (UniqueName: \"kubernetes.io/projected/aac57386-fd4c-4adc-a8a7-d948e8fa204b-kube-api-access-glcvp\") pod \"redhat-operators-krsvk\" (UID: \"aac57386-fd4c-4adc-a8a7-d948e8fa204b\") " pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.792448 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac57386-fd4c-4adc-a8a7-d948e8fa204b-catalog-content\") pod \"redhat-operators-krsvk\" (UID: \"aac57386-fd4c-4adc-a8a7-d948e8fa204b\") " pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.792644 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac57386-fd4c-4adc-a8a7-d948e8fa204b-utilities\") pod \"redhat-operators-krsvk\" (UID: \"aac57386-fd4c-4adc-a8a7-d948e8fa204b\") " pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.822147 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glcvp\" (UniqueName: \"kubernetes.io/projected/aac57386-fd4c-4adc-a8a7-d948e8fa204b-kube-api-access-glcvp\") pod \"redhat-operators-krsvk\" (UID: 
\"aac57386-fd4c-4adc-a8a7-d948e8fa204b\") " pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.866277 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.867229 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.869281 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:27 crc kubenswrapper[4958]: I1008 06:47:27.919545 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:28 crc kubenswrapper[4958]: I1008 06:47:28.323116 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-krsvk"] Oct 08 06:47:28 crc kubenswrapper[4958]: I1008 06:47:28.542935 4958 generic.go:334] "Generic (PLEG): container finished" podID="d06f1d78-b394-4063-9af1-a53ec1eaae2b" containerID="eab310b7b00532d43194e0c100bbf7fdd214fbe92b17c5842c320a89bfa4d833" exitCode=0 Oct 08 06:47:28 crc kubenswrapper[4958]: I1008 06:47:28.543023 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" event={"ID":"d06f1d78-b394-4063-9af1-a53ec1eaae2b","Type":"ContainerDied","Data":"eab310b7b00532d43194e0c100bbf7fdd214fbe92b17c5842c320a89bfa4d833"} Oct 08 06:47:28 crc kubenswrapper[4958]: I1008 06:47:28.544887 4958 generic.go:334] "Generic (PLEG): container finished" podID="aac57386-fd4c-4adc-a8a7-d948e8fa204b" containerID="991bef9c404df655a3ebda42c09f830bb1f4067a582a8b5c37f5ad5e3cfef428" exitCode=0 Oct 08 06:47:28 crc kubenswrapper[4958]: I1008 06:47:28.544970 4958 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-krsvk" event={"ID":"aac57386-fd4c-4adc-a8a7-d948e8fa204b","Type":"ContainerDied","Data":"991bef9c404df655a3ebda42c09f830bb1f4067a582a8b5c37f5ad5e3cfef428"} Oct 08 06:47:28 crc kubenswrapper[4958]: I1008 06:47:28.545015 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-krsvk" event={"ID":"aac57386-fd4c-4adc-a8a7-d948e8fa204b","Type":"ContainerStarted","Data":"6e66f6eed455dd9b6014814543e3daf2f120aaed6cbfa79c47ca9272cda3050f"} Oct 08 06:47:28 crc kubenswrapper[4958]: I1008 06:47:28.594930 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:29 crc kubenswrapper[4958]: I1008 06:47:29.554985 4958 generic.go:334] "Generic (PLEG): container finished" podID="d06f1d78-b394-4063-9af1-a53ec1eaae2b" containerID="2e96acd68daa130c734a2d5af1f10753d9314f17f1c99152b4a2e1ebf318d916" exitCode=0 Oct 08 06:47:29 crc kubenswrapper[4958]: I1008 06:47:29.555095 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" event={"ID":"d06f1d78-b394-4063-9af1-a53ec1eaae2b","Type":"ContainerDied","Data":"2e96acd68daa130c734a2d5af1f10753d9314f17f1c99152b4a2e1ebf318d916"} Oct 08 06:47:29 crc kubenswrapper[4958]: I1008 06:47:29.558697 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-krsvk" event={"ID":"aac57386-fd4c-4adc-a8a7-d948e8fa204b","Type":"ContainerStarted","Data":"d1bc43495257f5db264858d7f30d2dab62a0c5e482bd16b7a2bcbffb224d4cd0"} Oct 08 06:47:30 crc kubenswrapper[4958]: I1008 06:47:30.568593 4958 generic.go:334] "Generic (PLEG): container finished" podID="aac57386-fd4c-4adc-a8a7-d948e8fa204b" containerID="d1bc43495257f5db264858d7f30d2dab62a0c5e482bd16b7a2bcbffb224d4cd0" exitCode=0 Oct 08 06:47:30 crc kubenswrapper[4958]: I1008 06:47:30.568639 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-krsvk" event={"ID":"aac57386-fd4c-4adc-a8a7-d948e8fa204b","Type":"ContainerDied","Data":"d1bc43495257f5db264858d7f30d2dab62a0c5e482bd16b7a2bcbffb224d4cd0"} Oct 08 06:47:30 crc kubenswrapper[4958]: I1008 06:47:30.909241 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.069384 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d06f1d78-b394-4063-9af1-a53ec1eaae2b-bundle\") pod \"d06f1d78-b394-4063-9af1-a53ec1eaae2b\" (UID: \"d06f1d78-b394-4063-9af1-a53ec1eaae2b\") " Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.069488 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d06f1d78-b394-4063-9af1-a53ec1eaae2b-util\") pod \"d06f1d78-b394-4063-9af1-a53ec1eaae2b\" (UID: \"d06f1d78-b394-4063-9af1-a53ec1eaae2b\") " Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.069574 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbq92\" (UniqueName: \"kubernetes.io/projected/d06f1d78-b394-4063-9af1-a53ec1eaae2b-kube-api-access-pbq92\") pod \"d06f1d78-b394-4063-9af1-a53ec1eaae2b\" (UID: \"d06f1d78-b394-4063-9af1-a53ec1eaae2b\") " Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.072018 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d06f1d78-b394-4063-9af1-a53ec1eaae2b-bundle" (OuterVolumeSpecName: "bundle") pod "d06f1d78-b394-4063-9af1-a53ec1eaae2b" (UID: "d06f1d78-b394-4063-9af1-a53ec1eaae2b"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.078096 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d06f1d78-b394-4063-9af1-a53ec1eaae2b-kube-api-access-pbq92" (OuterVolumeSpecName: "kube-api-access-pbq92") pod "d06f1d78-b394-4063-9af1-a53ec1eaae2b" (UID: "d06f1d78-b394-4063-9af1-a53ec1eaae2b"). InnerVolumeSpecName "kube-api-access-pbq92". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.100411 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d06f1d78-b394-4063-9af1-a53ec1eaae2b-util" (OuterVolumeSpecName: "util") pod "d06f1d78-b394-4063-9af1-a53ec1eaae2b" (UID: "d06f1d78-b394-4063-9af1-a53ec1eaae2b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.171329 4958 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d06f1d78-b394-4063-9af1-a53ec1eaae2b-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.171377 4958 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d06f1d78-b394-4063-9af1-a53ec1eaae2b-util\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.171396 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbq92\" (UniqueName: \"kubernetes.io/projected/d06f1d78-b394-4063-9af1-a53ec1eaae2b-kube-api-access-pbq92\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.511759 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dptdb"] Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.581047 4958 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.584801 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj" event={"ID":"d06f1d78-b394-4063-9af1-a53ec1eaae2b","Type":"ContainerDied","Data":"c22fc64ce20ea9e9588d2bb53a98da8ef07777c8174fb6948ef669fd1c85f69e"} Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.584849 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c22fc64ce20ea9e9588d2bb53a98da8ef07777c8174fb6948ef669fd1c85f69e" Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.585804 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dptdb" podUID="ffb2a2cd-025a-4095-8248-399de8542a23" containerName="registry-server" containerID="cri-o://ef5326f32e3ceef7c166ad17b0e500f8247251d109f8d810aaef9499a709de05" gracePeriod=2 Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.587069 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-krsvk" event={"ID":"aac57386-fd4c-4adc-a8a7-d948e8fa204b","Type":"ContainerStarted","Data":"0369fef6e21ec7c02fae91282beca608589129fffd2f7e9aa3efcf263da24250"} Oct 08 06:47:31 crc kubenswrapper[4958]: I1008 06:47:31.625772 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-krsvk" podStartSLOduration=2.163294023 podStartE2EDuration="4.625751496s" podCreationTimestamp="2025-10-08 06:47:27 +0000 UTC" firstStartedPulling="2025-10-08 06:47:28.546171279 +0000 UTC m=+791.675863890" lastFinishedPulling="2025-10-08 06:47:31.008628732 +0000 UTC m=+794.138321363" observedRunningTime="2025-10-08 06:47:31.623805884 +0000 UTC m=+794.753498555" watchObservedRunningTime="2025-10-08 06:47:31.625751496 +0000 
UTC m=+794.755444107" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.054827 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.186582 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbsxh\" (UniqueName: \"kubernetes.io/projected/ffb2a2cd-025a-4095-8248-399de8542a23-kube-api-access-pbsxh\") pod \"ffb2a2cd-025a-4095-8248-399de8542a23\" (UID: \"ffb2a2cd-025a-4095-8248-399de8542a23\") " Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.186671 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb2a2cd-025a-4095-8248-399de8542a23-utilities\") pod \"ffb2a2cd-025a-4095-8248-399de8542a23\" (UID: \"ffb2a2cd-025a-4095-8248-399de8542a23\") " Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.186719 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb2a2cd-025a-4095-8248-399de8542a23-catalog-content\") pod \"ffb2a2cd-025a-4095-8248-399de8542a23\" (UID: \"ffb2a2cd-025a-4095-8248-399de8542a23\") " Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.189162 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb2a2cd-025a-4095-8248-399de8542a23-utilities" (OuterVolumeSpecName: "utilities") pod "ffb2a2cd-025a-4095-8248-399de8542a23" (UID: "ffb2a2cd-025a-4095-8248-399de8542a23"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.192353 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb2a2cd-025a-4095-8248-399de8542a23-kube-api-access-pbsxh" (OuterVolumeSpecName: "kube-api-access-pbsxh") pod "ffb2a2cd-025a-4095-8248-399de8542a23" (UID: "ffb2a2cd-025a-4095-8248-399de8542a23"). InnerVolumeSpecName "kube-api-access-pbsxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.254824 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb2a2cd-025a-4095-8248-399de8542a23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffb2a2cd-025a-4095-8248-399de8542a23" (UID: "ffb2a2cd-025a-4095-8248-399de8542a23"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.288559 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbsxh\" (UniqueName: \"kubernetes.io/projected/ffb2a2cd-025a-4095-8248-399de8542a23-kube-api-access-pbsxh\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.288612 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb2a2cd-025a-4095-8248-399de8542a23-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.288633 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb2a2cd-025a-4095-8248-399de8542a23-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.595523 4958 generic.go:334] "Generic (PLEG): container finished" podID="ffb2a2cd-025a-4095-8248-399de8542a23" 
containerID="ef5326f32e3ceef7c166ad17b0e500f8247251d109f8d810aaef9499a709de05" exitCode=0 Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.596456 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dptdb" event={"ID":"ffb2a2cd-025a-4095-8248-399de8542a23","Type":"ContainerDied","Data":"ef5326f32e3ceef7c166ad17b0e500f8247251d109f8d810aaef9499a709de05"} Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.596688 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dptdb" event={"ID":"ffb2a2cd-025a-4095-8248-399de8542a23","Type":"ContainerDied","Data":"2f3a87ec013a9bf1903c271e7e22476e14d8d4b241040ee83310a1508527b200"} Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.596870 4958 scope.go:117] "RemoveContainer" containerID="ef5326f32e3ceef7c166ad17b0e500f8247251d109f8d810aaef9499a709de05" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.597203 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dptdb" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.617261 4958 scope.go:117] "RemoveContainer" containerID="8a421211702dfd2dc0f498f88737a13eeeda3e3fddf9f3b6b947d0448714f174" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.652051 4958 scope.go:117] "RemoveContainer" containerID="46e715d4d637feba5b407a8448fdca4bc5b693516233adde4a09a6db19d35614" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.653005 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dptdb"] Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.657852 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dptdb"] Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.671524 4958 scope.go:117] "RemoveContainer" containerID="ef5326f32e3ceef7c166ad17b0e500f8247251d109f8d810aaef9499a709de05" Oct 08 06:47:32 crc kubenswrapper[4958]: E1008 06:47:32.672914 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef5326f32e3ceef7c166ad17b0e500f8247251d109f8d810aaef9499a709de05\": container with ID starting with ef5326f32e3ceef7c166ad17b0e500f8247251d109f8d810aaef9499a709de05 not found: ID does not exist" containerID="ef5326f32e3ceef7c166ad17b0e500f8247251d109f8d810aaef9499a709de05" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.672981 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5326f32e3ceef7c166ad17b0e500f8247251d109f8d810aaef9499a709de05"} err="failed to get container status \"ef5326f32e3ceef7c166ad17b0e500f8247251d109f8d810aaef9499a709de05\": rpc error: code = NotFound desc = could not find container \"ef5326f32e3ceef7c166ad17b0e500f8247251d109f8d810aaef9499a709de05\": container with ID starting with ef5326f32e3ceef7c166ad17b0e500f8247251d109f8d810aaef9499a709de05 not 
found: ID does not exist" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.673011 4958 scope.go:117] "RemoveContainer" containerID="8a421211702dfd2dc0f498f88737a13eeeda3e3fddf9f3b6b947d0448714f174" Oct 08 06:47:32 crc kubenswrapper[4958]: E1008 06:47:32.673521 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a421211702dfd2dc0f498f88737a13eeeda3e3fddf9f3b6b947d0448714f174\": container with ID starting with 8a421211702dfd2dc0f498f88737a13eeeda3e3fddf9f3b6b947d0448714f174 not found: ID does not exist" containerID="8a421211702dfd2dc0f498f88737a13eeeda3e3fddf9f3b6b947d0448714f174" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.673577 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a421211702dfd2dc0f498f88737a13eeeda3e3fddf9f3b6b947d0448714f174"} err="failed to get container status \"8a421211702dfd2dc0f498f88737a13eeeda3e3fddf9f3b6b947d0448714f174\": rpc error: code = NotFound desc = could not find container \"8a421211702dfd2dc0f498f88737a13eeeda3e3fddf9f3b6b947d0448714f174\": container with ID starting with 8a421211702dfd2dc0f498f88737a13eeeda3e3fddf9f3b6b947d0448714f174 not found: ID does not exist" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.673620 4958 scope.go:117] "RemoveContainer" containerID="46e715d4d637feba5b407a8448fdca4bc5b693516233adde4a09a6db19d35614" Oct 08 06:47:32 crc kubenswrapper[4958]: E1008 06:47:32.674230 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46e715d4d637feba5b407a8448fdca4bc5b693516233adde4a09a6db19d35614\": container with ID starting with 46e715d4d637feba5b407a8448fdca4bc5b693516233adde4a09a6db19d35614 not found: ID does not exist" containerID="46e715d4d637feba5b407a8448fdca4bc5b693516233adde4a09a6db19d35614" Oct 08 06:47:32 crc kubenswrapper[4958]: I1008 06:47:32.674273 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46e715d4d637feba5b407a8448fdca4bc5b693516233adde4a09a6db19d35614"} err="failed to get container status \"46e715d4d637feba5b407a8448fdca4bc5b693516233adde4a09a6db19d35614\": rpc error: code = NotFound desc = could not find container \"46e715d4d637feba5b407a8448fdca4bc5b693516233adde4a09a6db19d35614\": container with ID starting with 46e715d4d637feba5b407a8448fdca4bc5b693516233adde4a09a6db19d35614 not found: ID does not exist" Oct 08 06:47:33 crc kubenswrapper[4958]: I1008 06:47:33.588741 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb2a2cd-025a-4095-8248-399de8542a23" path="/var/lib/kubelet/pods/ffb2a2cd-025a-4095-8248-399de8542a23/volumes" Oct 08 06:47:36 crc kubenswrapper[4958]: I1008 06:47:36.844921 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:47:36 crc kubenswrapper[4958]: I1008 06:47:36.845009 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:47:37 crc kubenswrapper[4958]: I1008 06:47:37.869893 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:37 crc kubenswrapper[4958]: I1008 06:47:37.869971 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:37 crc kubenswrapper[4958]: I1008 06:47:37.924911 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:38 crc kubenswrapper[4958]: I1008 06:47:38.708458 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:40 crc kubenswrapper[4958]: I1008 06:47:40.308833 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-krsvk"] Oct 08 06:47:40 crc kubenswrapper[4958]: I1008 06:47:40.659239 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-krsvk" podUID="aac57386-fd4c-4adc-a8a7-d948e8fa204b" containerName="registry-server" containerID="cri-o://0369fef6e21ec7c02fae91282beca608589129fffd2f7e9aa3efcf263da24250" gracePeriod=2 Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.165663 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.298558 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac57386-fd4c-4adc-a8a7-d948e8fa204b-catalog-content\") pod \"aac57386-fd4c-4adc-a8a7-d948e8fa204b\" (UID: \"aac57386-fd4c-4adc-a8a7-d948e8fa204b\") " Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.298691 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glcvp\" (UniqueName: \"kubernetes.io/projected/aac57386-fd4c-4adc-a8a7-d948e8fa204b-kube-api-access-glcvp\") pod \"aac57386-fd4c-4adc-a8a7-d948e8fa204b\" (UID: \"aac57386-fd4c-4adc-a8a7-d948e8fa204b\") " Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.298714 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac57386-fd4c-4adc-a8a7-d948e8fa204b-utilities\") pod 
\"aac57386-fd4c-4adc-a8a7-d948e8fa204b\" (UID: \"aac57386-fd4c-4adc-a8a7-d948e8fa204b\") " Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.299573 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aac57386-fd4c-4adc-a8a7-d948e8fa204b-utilities" (OuterVolumeSpecName: "utilities") pod "aac57386-fd4c-4adc-a8a7-d948e8fa204b" (UID: "aac57386-fd4c-4adc-a8a7-d948e8fa204b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.306037 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac57386-fd4c-4adc-a8a7-d948e8fa204b-kube-api-access-glcvp" (OuterVolumeSpecName: "kube-api-access-glcvp") pod "aac57386-fd4c-4adc-a8a7-d948e8fa204b" (UID: "aac57386-fd4c-4adc-a8a7-d948e8fa204b"). InnerVolumeSpecName "kube-api-access-glcvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.389421 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aac57386-fd4c-4adc-a8a7-d948e8fa204b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aac57386-fd4c-4adc-a8a7-d948e8fa204b" (UID: "aac57386-fd4c-4adc-a8a7-d948e8fa204b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.400262 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glcvp\" (UniqueName: \"kubernetes.io/projected/aac57386-fd4c-4adc-a8a7-d948e8fa204b-kube-api-access-glcvp\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.400302 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac57386-fd4c-4adc-a8a7-d948e8fa204b-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.400317 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac57386-fd4c-4adc-a8a7-d948e8fa204b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.419150 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv"] Oct 08 06:47:41 crc kubenswrapper[4958]: E1008 06:47:41.419400 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac57386-fd4c-4adc-a8a7-d948e8fa204b" containerName="extract-utilities" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.419414 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac57386-fd4c-4adc-a8a7-d948e8fa204b" containerName="extract-utilities" Oct 08 06:47:41 crc kubenswrapper[4958]: E1008 06:47:41.419425 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d06f1d78-b394-4063-9af1-a53ec1eaae2b" containerName="pull" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.419432 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d06f1d78-b394-4063-9af1-a53ec1eaae2b" containerName="pull" Oct 08 06:47:41 crc kubenswrapper[4958]: E1008 06:47:41.419445 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ffb2a2cd-025a-4095-8248-399de8542a23" containerName="extract-utilities" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.419452 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb2a2cd-025a-4095-8248-399de8542a23" containerName="extract-utilities" Oct 08 06:47:41 crc kubenswrapper[4958]: E1008 06:47:41.419461 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d06f1d78-b394-4063-9af1-a53ec1eaae2b" containerName="extract" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.419467 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d06f1d78-b394-4063-9af1-a53ec1eaae2b" containerName="extract" Oct 08 06:47:41 crc kubenswrapper[4958]: E1008 06:47:41.419483 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d06f1d78-b394-4063-9af1-a53ec1eaae2b" containerName="util" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.419489 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d06f1d78-b394-4063-9af1-a53ec1eaae2b" containerName="util" Oct 08 06:47:41 crc kubenswrapper[4958]: E1008 06:47:41.419500 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb2a2cd-025a-4095-8248-399de8542a23" containerName="extract-content" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.419507 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb2a2cd-025a-4095-8248-399de8542a23" containerName="extract-content" Oct 08 06:47:41 crc kubenswrapper[4958]: E1008 06:47:41.419518 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac57386-fd4c-4adc-a8a7-d948e8fa204b" containerName="extract-content" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.419525 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac57386-fd4c-4adc-a8a7-d948e8fa204b" containerName="extract-content" Oct 08 06:47:41 crc kubenswrapper[4958]: E1008 06:47:41.419535 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac57386-fd4c-4adc-a8a7-d948e8fa204b" 
containerName="registry-server" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.419541 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac57386-fd4c-4adc-a8a7-d948e8fa204b" containerName="registry-server" Oct 08 06:47:41 crc kubenswrapper[4958]: E1008 06:47:41.419549 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb2a2cd-025a-4095-8248-399de8542a23" containerName="registry-server" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.419555 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb2a2cd-025a-4095-8248-399de8542a23" containerName="registry-server" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.419640 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb2a2cd-025a-4095-8248-399de8542a23" containerName="registry-server" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.419649 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="aac57386-fd4c-4adc-a8a7-d948e8fa204b" containerName="registry-server" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.419661 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d06f1d78-b394-4063-9af1-a53ec1eaae2b" containerName="extract" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.420027 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.421966 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.422318 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xdqhk" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.422322 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.422638 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.425458 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.442709 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv"] Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.602677 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de7476e7-bd6f-44c3-a831-2143114d89be-apiservice-cert\") pod \"metallb-operator-controller-manager-6c464c978f-r4jsv\" (UID: \"de7476e7-bd6f-44c3-a831-2143114d89be\") " pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.602765 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de7476e7-bd6f-44c3-a831-2143114d89be-webhook-cert\") pod \"metallb-operator-controller-manager-6c464c978f-r4jsv\" (UID: 
\"de7476e7-bd6f-44c3-a831-2143114d89be\") " pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.602797 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bklk\" (UniqueName: \"kubernetes.io/projected/de7476e7-bd6f-44c3-a831-2143114d89be-kube-api-access-4bklk\") pod \"metallb-operator-controller-manager-6c464c978f-r4jsv\" (UID: \"de7476e7-bd6f-44c3-a831-2143114d89be\") " pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.665182 4958 generic.go:334] "Generic (PLEG): container finished" podID="aac57386-fd4c-4adc-a8a7-d948e8fa204b" containerID="0369fef6e21ec7c02fae91282beca608589129fffd2f7e9aa3efcf263da24250" exitCode=0 Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.665220 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-krsvk" event={"ID":"aac57386-fd4c-4adc-a8a7-d948e8fa204b","Type":"ContainerDied","Data":"0369fef6e21ec7c02fae91282beca608589129fffd2f7e9aa3efcf263da24250"} Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.665243 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-krsvk" event={"ID":"aac57386-fd4c-4adc-a8a7-d948e8fa204b","Type":"ContainerDied","Data":"6e66f6eed455dd9b6014814543e3daf2f120aaed6cbfa79c47ca9272cda3050f"} Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.665259 4958 scope.go:117] "RemoveContainer" containerID="0369fef6e21ec7c02fae91282beca608589129fffd2f7e9aa3efcf263da24250" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.665367 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-krsvk" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.677462 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-krsvk"] Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.680198 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-krsvk"] Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.685342 4958 scope.go:117] "RemoveContainer" containerID="d1bc43495257f5db264858d7f30d2dab62a0c5e482bd16b7a2bcbffb224d4cd0" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.702243 4958 scope.go:117] "RemoveContainer" containerID="991bef9c404df655a3ebda42c09f830bb1f4067a582a8b5c37f5ad5e3cfef428" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.703465 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de7476e7-bd6f-44c3-a831-2143114d89be-apiservice-cert\") pod \"metallb-operator-controller-manager-6c464c978f-r4jsv\" (UID: \"de7476e7-bd6f-44c3-a831-2143114d89be\") " pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.703581 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de7476e7-bd6f-44c3-a831-2143114d89be-webhook-cert\") pod \"metallb-operator-controller-manager-6c464c978f-r4jsv\" (UID: \"de7476e7-bd6f-44c3-a831-2143114d89be\") " pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.703618 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bklk\" (UniqueName: \"kubernetes.io/projected/de7476e7-bd6f-44c3-a831-2143114d89be-kube-api-access-4bklk\") pod \"metallb-operator-controller-manager-6c464c978f-r4jsv\" (UID: 
\"de7476e7-bd6f-44c3-a831-2143114d89be\") " pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.707073 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de7476e7-bd6f-44c3-a831-2143114d89be-apiservice-cert\") pod \"metallb-operator-controller-manager-6c464c978f-r4jsv\" (UID: \"de7476e7-bd6f-44c3-a831-2143114d89be\") " pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.707881 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de7476e7-bd6f-44c3-a831-2143114d89be-webhook-cert\") pod \"metallb-operator-controller-manager-6c464c978f-r4jsv\" (UID: \"de7476e7-bd6f-44c3-a831-2143114d89be\") " pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.726463 4958 scope.go:117] "RemoveContainer" containerID="0369fef6e21ec7c02fae91282beca608589129fffd2f7e9aa3efcf263da24250" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.727230 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bklk\" (UniqueName: \"kubernetes.io/projected/de7476e7-bd6f-44c3-a831-2143114d89be-kube-api-access-4bklk\") pod \"metallb-operator-controller-manager-6c464c978f-r4jsv\" (UID: \"de7476e7-bd6f-44c3-a831-2143114d89be\") " pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" Oct 08 06:47:41 crc kubenswrapper[4958]: E1008 06:47:41.729585 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0369fef6e21ec7c02fae91282beca608589129fffd2f7e9aa3efcf263da24250\": container with ID starting with 0369fef6e21ec7c02fae91282beca608589129fffd2f7e9aa3efcf263da24250 not found: ID does not exist" 
containerID="0369fef6e21ec7c02fae91282beca608589129fffd2f7e9aa3efcf263da24250" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.729630 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0369fef6e21ec7c02fae91282beca608589129fffd2f7e9aa3efcf263da24250"} err="failed to get container status \"0369fef6e21ec7c02fae91282beca608589129fffd2f7e9aa3efcf263da24250\": rpc error: code = NotFound desc = could not find container \"0369fef6e21ec7c02fae91282beca608589129fffd2f7e9aa3efcf263da24250\": container with ID starting with 0369fef6e21ec7c02fae91282beca608589129fffd2f7e9aa3efcf263da24250 not found: ID does not exist" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.729657 4958 scope.go:117] "RemoveContainer" containerID="d1bc43495257f5db264858d7f30d2dab62a0c5e482bd16b7a2bcbffb224d4cd0" Oct 08 06:47:41 crc kubenswrapper[4958]: E1008 06:47:41.730023 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1bc43495257f5db264858d7f30d2dab62a0c5e482bd16b7a2bcbffb224d4cd0\": container with ID starting with d1bc43495257f5db264858d7f30d2dab62a0c5e482bd16b7a2bcbffb224d4cd0 not found: ID does not exist" containerID="d1bc43495257f5db264858d7f30d2dab62a0c5e482bd16b7a2bcbffb224d4cd0" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.730099 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1bc43495257f5db264858d7f30d2dab62a0c5e482bd16b7a2bcbffb224d4cd0"} err="failed to get container status \"d1bc43495257f5db264858d7f30d2dab62a0c5e482bd16b7a2bcbffb224d4cd0\": rpc error: code = NotFound desc = could not find container \"d1bc43495257f5db264858d7f30d2dab62a0c5e482bd16b7a2bcbffb224d4cd0\": container with ID starting with d1bc43495257f5db264858d7f30d2dab62a0c5e482bd16b7a2bcbffb224d4cd0 not found: ID does not exist" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.730179 4958 scope.go:117] 
"RemoveContainer" containerID="991bef9c404df655a3ebda42c09f830bb1f4067a582a8b5c37f5ad5e3cfef428" Oct 08 06:47:41 crc kubenswrapper[4958]: E1008 06:47:41.730617 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"991bef9c404df655a3ebda42c09f830bb1f4067a582a8b5c37f5ad5e3cfef428\": container with ID starting with 991bef9c404df655a3ebda42c09f830bb1f4067a582a8b5c37f5ad5e3cfef428 not found: ID does not exist" containerID="991bef9c404df655a3ebda42c09f830bb1f4067a582a8b5c37f5ad5e3cfef428" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.730687 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"991bef9c404df655a3ebda42c09f830bb1f4067a582a8b5c37f5ad5e3cfef428"} err="failed to get container status \"991bef9c404df655a3ebda42c09f830bb1f4067a582a8b5c37f5ad5e3cfef428\": rpc error: code = NotFound desc = could not find container \"991bef9c404df655a3ebda42c09f830bb1f4067a582a8b5c37f5ad5e3cfef428\": container with ID starting with 991bef9c404df655a3ebda42c09f830bb1f4067a582a8b5c37f5ad5e3cfef428 not found: ID does not exist" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.731985 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.750930 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t"] Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.751754 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.756145 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.758445 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.758590 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-d5jcc" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.812748 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t"] Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.906088 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k2g6\" (UniqueName: \"kubernetes.io/projected/28bc683e-4fe9-4276-96c4-4b63e96f368d-kube-api-access-9k2g6\") pod \"metallb-operator-webhook-server-67bdd4c657-nfh8t\" (UID: \"28bc683e-4fe9-4276-96c4-4b63e96f368d\") " pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.906392 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28bc683e-4fe9-4276-96c4-4b63e96f368d-apiservice-cert\") pod \"metallb-operator-webhook-server-67bdd4c657-nfh8t\" (UID: \"28bc683e-4fe9-4276-96c4-4b63e96f368d\") " pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" Oct 08 06:47:41 crc kubenswrapper[4958]: I1008 06:47:41.906449 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/28bc683e-4fe9-4276-96c4-4b63e96f368d-webhook-cert\") pod \"metallb-operator-webhook-server-67bdd4c657-nfh8t\" (UID: \"28bc683e-4fe9-4276-96c4-4b63e96f368d\") " pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" Oct 08 06:47:42 crc kubenswrapper[4958]: I1008 06:47:42.007071 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28bc683e-4fe9-4276-96c4-4b63e96f368d-apiservice-cert\") pod \"metallb-operator-webhook-server-67bdd4c657-nfh8t\" (UID: \"28bc683e-4fe9-4276-96c4-4b63e96f368d\") " pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" Oct 08 06:47:42 crc kubenswrapper[4958]: I1008 06:47:42.007155 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28bc683e-4fe9-4276-96c4-4b63e96f368d-webhook-cert\") pod \"metallb-operator-webhook-server-67bdd4c657-nfh8t\" (UID: \"28bc683e-4fe9-4276-96c4-4b63e96f368d\") " pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" Oct 08 06:47:42 crc kubenswrapper[4958]: I1008 06:47:42.007205 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k2g6\" (UniqueName: \"kubernetes.io/projected/28bc683e-4fe9-4276-96c4-4b63e96f368d-kube-api-access-9k2g6\") pod \"metallb-operator-webhook-server-67bdd4c657-nfh8t\" (UID: \"28bc683e-4fe9-4276-96c4-4b63e96f368d\") " pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" Oct 08 06:47:42 crc kubenswrapper[4958]: I1008 06:47:42.012244 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28bc683e-4fe9-4276-96c4-4b63e96f368d-webhook-cert\") pod \"metallb-operator-webhook-server-67bdd4c657-nfh8t\" (UID: \"28bc683e-4fe9-4276-96c4-4b63e96f368d\") " pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" Oct 08 06:47:42 crc 
kubenswrapper[4958]: I1008 06:47:42.012292 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28bc683e-4fe9-4276-96c4-4b63e96f368d-apiservice-cert\") pod \"metallb-operator-webhook-server-67bdd4c657-nfh8t\" (UID: \"28bc683e-4fe9-4276-96c4-4b63e96f368d\") " pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" Oct 08 06:47:42 crc kubenswrapper[4958]: I1008 06:47:42.013391 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv"] Oct 08 06:47:42 crc kubenswrapper[4958]: W1008 06:47:42.021319 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde7476e7_bd6f_44c3_a831_2143114d89be.slice/crio-1e40cb2f563ef2c992618d26032696f67dead55d0c36228ec12bf4d6b469c521 WatchSource:0}: Error finding container 1e40cb2f563ef2c992618d26032696f67dead55d0c36228ec12bf4d6b469c521: Status 404 returned error can't find the container with id 1e40cb2f563ef2c992618d26032696f67dead55d0c36228ec12bf4d6b469c521 Oct 08 06:47:42 crc kubenswrapper[4958]: I1008 06:47:42.032676 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k2g6\" (UniqueName: \"kubernetes.io/projected/28bc683e-4fe9-4276-96c4-4b63e96f368d-kube-api-access-9k2g6\") pod \"metallb-operator-webhook-server-67bdd4c657-nfh8t\" (UID: \"28bc683e-4fe9-4276-96c4-4b63e96f368d\") " pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" Oct 08 06:47:42 crc kubenswrapper[4958]: I1008 06:47:42.068007 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" Oct 08 06:47:42 crc kubenswrapper[4958]: I1008 06:47:42.275667 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t"] Oct 08 06:47:42 crc kubenswrapper[4958]: W1008 06:47:42.288320 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28bc683e_4fe9_4276_96c4_4b63e96f368d.slice/crio-c5a36649f869534e8939cbc450f5b11cc5937172bff21bd59560dd6e8d1f3700 WatchSource:0}: Error finding container c5a36649f869534e8939cbc450f5b11cc5937172bff21bd59560dd6e8d1f3700: Status 404 returned error can't find the container with id c5a36649f869534e8939cbc450f5b11cc5937172bff21bd59560dd6e8d1f3700 Oct 08 06:47:42 crc kubenswrapper[4958]: I1008 06:47:42.676682 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" event={"ID":"28bc683e-4fe9-4276-96c4-4b63e96f368d","Type":"ContainerStarted","Data":"c5a36649f869534e8939cbc450f5b11cc5937172bff21bd59560dd6e8d1f3700"} Oct 08 06:47:42 crc kubenswrapper[4958]: I1008 06:47:42.678847 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" event={"ID":"de7476e7-bd6f-44c3-a831-2143114d89be","Type":"ContainerStarted","Data":"1e40cb2f563ef2c992618d26032696f67dead55d0c36228ec12bf4d6b469c521"} Oct 08 06:47:43 crc kubenswrapper[4958]: I1008 06:47:43.590237 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aac57386-fd4c-4adc-a8a7-d948e8fa204b" path="/var/lib/kubelet/pods/aac57386-fd4c-4adc-a8a7-d948e8fa204b/volumes" Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.522176 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2n295"] Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.523988 4958 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.540058 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2n295"] Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.663084 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/240d9f60-7530-45f2-b225-439ce52f6ca3-catalog-content\") pod \"certified-operators-2n295\" (UID: \"240d9f60-7530-45f2-b225-439ce52f6ca3\") " pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.663145 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67tv5\" (UniqueName: \"kubernetes.io/projected/240d9f60-7530-45f2-b225-439ce52f6ca3-kube-api-access-67tv5\") pod \"certified-operators-2n295\" (UID: \"240d9f60-7530-45f2-b225-439ce52f6ca3\") " pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.663481 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/240d9f60-7530-45f2-b225-439ce52f6ca3-utilities\") pod \"certified-operators-2n295\" (UID: \"240d9f60-7530-45f2-b225-439ce52f6ca3\") " pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.702315 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" event={"ID":"de7476e7-bd6f-44c3-a831-2143114d89be","Type":"ContainerStarted","Data":"e157cfb0b7e306b0f22da1728d3fb3da1f41ae3ebd4de375785cbced6ee92861"} Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.702483 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.724365 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" podStartSLOduration=1.650993878 podStartE2EDuration="4.724346167s" podCreationTimestamp="2025-10-08 06:47:41 +0000 UTC" firstStartedPulling="2025-10-08 06:47:42.025026725 +0000 UTC m=+805.154719326" lastFinishedPulling="2025-10-08 06:47:45.098379004 +0000 UTC m=+808.228071615" observedRunningTime="2025-10-08 06:47:45.719861586 +0000 UTC m=+808.849554207" watchObservedRunningTime="2025-10-08 06:47:45.724346167 +0000 UTC m=+808.854038778" Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.764231 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67tv5\" (UniqueName: \"kubernetes.io/projected/240d9f60-7530-45f2-b225-439ce52f6ca3-kube-api-access-67tv5\") pod \"certified-operators-2n295\" (UID: \"240d9f60-7530-45f2-b225-439ce52f6ca3\") " pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.764341 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/240d9f60-7530-45f2-b225-439ce52f6ca3-utilities\") pod \"certified-operators-2n295\" (UID: \"240d9f60-7530-45f2-b225-439ce52f6ca3\") " pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.764883 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/240d9f60-7530-45f2-b225-439ce52f6ca3-utilities\") pod \"certified-operators-2n295\" (UID: \"240d9f60-7530-45f2-b225-439ce52f6ca3\") " pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.764989 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/240d9f60-7530-45f2-b225-439ce52f6ca3-catalog-content\") pod \"certified-operators-2n295\" (UID: \"240d9f60-7530-45f2-b225-439ce52f6ca3\") " pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.764389 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/240d9f60-7530-45f2-b225-439ce52f6ca3-catalog-content\") pod \"certified-operators-2n295\" (UID: \"240d9f60-7530-45f2-b225-439ce52f6ca3\") " pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.781143 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67tv5\" (UniqueName: \"kubernetes.io/projected/240d9f60-7530-45f2-b225-439ce52f6ca3-kube-api-access-67tv5\") pod \"certified-operators-2n295\" (UID: \"240d9f60-7530-45f2-b225-439ce52f6ca3\") " pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:47:45 crc kubenswrapper[4958]: I1008 06:47:45.836779 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:47:46 crc kubenswrapper[4958]: I1008 06:47:46.326364 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2n295"] Oct 08 06:47:46 crc kubenswrapper[4958]: I1008 06:47:46.708925 4958 generic.go:334] "Generic (PLEG): container finished" podID="240d9f60-7530-45f2-b225-439ce52f6ca3" containerID="4c24405f6f6d755f82dfa95be323e129369d5291501a1db2c463d3c24a0ba0b7" exitCode=0 Oct 08 06:47:46 crc kubenswrapper[4958]: I1008 06:47:46.709422 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n295" event={"ID":"240d9f60-7530-45f2-b225-439ce52f6ca3","Type":"ContainerDied","Data":"4c24405f6f6d755f82dfa95be323e129369d5291501a1db2c463d3c24a0ba0b7"} Oct 08 06:47:46 crc kubenswrapper[4958]: I1008 06:47:46.709473 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n295" event={"ID":"240d9f60-7530-45f2-b225-439ce52f6ca3","Type":"ContainerStarted","Data":"c7714fca06d12a9f6260d3aa00c74361d7a7f6caefd9181a60dd85a49a0b302e"} Oct 08 06:47:49 crc kubenswrapper[4958]: I1008 06:47:49.739170 4958 generic.go:334] "Generic (PLEG): container finished" podID="240d9f60-7530-45f2-b225-439ce52f6ca3" containerID="12e6de59af6d3acf4d8e4a6414396fbcae1bbb24ead009a4b0a312de66e9a24f" exitCode=0 Oct 08 06:47:49 crc kubenswrapper[4958]: I1008 06:47:49.739398 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n295" event={"ID":"240d9f60-7530-45f2-b225-439ce52f6ca3","Type":"ContainerDied","Data":"12e6de59af6d3acf4d8e4a6414396fbcae1bbb24ead009a4b0a312de66e9a24f"} Oct 08 06:47:49 crc kubenswrapper[4958]: I1008 06:47:49.742233 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" 
event={"ID":"28bc683e-4fe9-4276-96c4-4b63e96f368d","Type":"ContainerStarted","Data":"7d4084d632b8e4912abb8a21a2cbea128929a31d1115099fe1de58e62d2deb7a"} Oct 08 06:47:49 crc kubenswrapper[4958]: I1008 06:47:49.742793 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" Oct 08 06:47:49 crc kubenswrapper[4958]: I1008 06:47:49.783876 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" podStartSLOduration=2.323354145 podStartE2EDuration="8.783850016s" podCreationTimestamp="2025-10-08 06:47:41 +0000 UTC" firstStartedPulling="2025-10-08 06:47:42.290667329 +0000 UTC m=+805.420359930" lastFinishedPulling="2025-10-08 06:47:48.7511632 +0000 UTC m=+811.880855801" observedRunningTime="2025-10-08 06:47:49.777011891 +0000 UTC m=+812.906704532" watchObservedRunningTime="2025-10-08 06:47:49.783850016 +0000 UTC m=+812.913542627" Oct 08 06:47:50 crc kubenswrapper[4958]: I1008 06:47:50.750900 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n295" event={"ID":"240d9f60-7530-45f2-b225-439ce52f6ca3","Type":"ContainerStarted","Data":"6ac058b54767dcc9644e4429d125c7dc55c879b4abc1c5ba88740cdf1236c5af"} Oct 08 06:47:50 crc kubenswrapper[4958]: I1008 06:47:50.779729 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2n295" podStartSLOduration=2.19795136 podStartE2EDuration="5.779702237s" podCreationTimestamp="2025-10-08 06:47:45 +0000 UTC" firstStartedPulling="2025-10-08 06:47:46.710838386 +0000 UTC m=+809.840530987" lastFinishedPulling="2025-10-08 06:47:50.292589253 +0000 UTC m=+813.422281864" observedRunningTime="2025-10-08 06:47:50.773838629 +0000 UTC m=+813.903531250" watchObservedRunningTime="2025-10-08 06:47:50.779702237 +0000 UTC m=+813.909394858" Oct 08 06:47:55 crc kubenswrapper[4958]: I1008 
06:47:55.837172 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:47:55 crc kubenswrapper[4958]: I1008 06:47:55.838209 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:47:55 crc kubenswrapper[4958]: I1008 06:47:55.902086 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:47:56 crc kubenswrapper[4958]: I1008 06:47:56.851172 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:47:58 crc kubenswrapper[4958]: I1008 06:47:58.310366 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2n295"] Oct 08 06:47:59 crc kubenswrapper[4958]: I1008 06:47:59.808487 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2n295" podUID="240d9f60-7530-45f2-b225-439ce52f6ca3" containerName="registry-server" containerID="cri-o://6ac058b54767dcc9644e4429d125c7dc55c879b4abc1c5ba88740cdf1236c5af" gracePeriod=2 Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.245445 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.378295 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/240d9f60-7530-45f2-b225-439ce52f6ca3-catalog-content\") pod \"240d9f60-7530-45f2-b225-439ce52f6ca3\" (UID: \"240d9f60-7530-45f2-b225-439ce52f6ca3\") " Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.378471 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67tv5\" (UniqueName: \"kubernetes.io/projected/240d9f60-7530-45f2-b225-439ce52f6ca3-kube-api-access-67tv5\") pod \"240d9f60-7530-45f2-b225-439ce52f6ca3\" (UID: \"240d9f60-7530-45f2-b225-439ce52f6ca3\") " Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.378520 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/240d9f60-7530-45f2-b225-439ce52f6ca3-utilities\") pod \"240d9f60-7530-45f2-b225-439ce52f6ca3\" (UID: \"240d9f60-7530-45f2-b225-439ce52f6ca3\") " Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.379891 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/240d9f60-7530-45f2-b225-439ce52f6ca3-utilities" (OuterVolumeSpecName: "utilities") pod "240d9f60-7530-45f2-b225-439ce52f6ca3" (UID: "240d9f60-7530-45f2-b225-439ce52f6ca3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.387895 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/240d9f60-7530-45f2-b225-439ce52f6ca3-kube-api-access-67tv5" (OuterVolumeSpecName: "kube-api-access-67tv5") pod "240d9f60-7530-45f2-b225-439ce52f6ca3" (UID: "240d9f60-7530-45f2-b225-439ce52f6ca3"). InnerVolumeSpecName "kube-api-access-67tv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.455069 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/240d9f60-7530-45f2-b225-439ce52f6ca3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "240d9f60-7530-45f2-b225-439ce52f6ca3" (UID: "240d9f60-7530-45f2-b225-439ce52f6ca3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.479936 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67tv5\" (UniqueName: \"kubernetes.io/projected/240d9f60-7530-45f2-b225-439ce52f6ca3-kube-api-access-67tv5\") on node \"crc\" DevicePath \"\"" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.480011 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/240d9f60-7530-45f2-b225-439ce52f6ca3-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.480030 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/240d9f60-7530-45f2-b225-439ce52f6ca3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.828189 4958 generic.go:334] "Generic (PLEG): container finished" podID="240d9f60-7530-45f2-b225-439ce52f6ca3" containerID="6ac058b54767dcc9644e4429d125c7dc55c879b4abc1c5ba88740cdf1236c5af" exitCode=0 Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.828446 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2n295" event={"ID":"240d9f60-7530-45f2-b225-439ce52f6ca3","Type":"ContainerDied","Data":"6ac058b54767dcc9644e4429d125c7dc55c879b4abc1c5ba88740cdf1236c5af"} Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.828476 4958 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-2n295" event={"ID":"240d9f60-7530-45f2-b225-439ce52f6ca3","Type":"ContainerDied","Data":"c7714fca06d12a9f6260d3aa00c74361d7a7f6caefd9181a60dd85a49a0b302e"} Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.828496 4958 scope.go:117] "RemoveContainer" containerID="6ac058b54767dcc9644e4429d125c7dc55c879b4abc1c5ba88740cdf1236c5af" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.828612 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2n295" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.863749 4958 scope.go:117] "RemoveContainer" containerID="12e6de59af6d3acf4d8e4a6414396fbcae1bbb24ead009a4b0a312de66e9a24f" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.865455 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2n295"] Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.871385 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2n295"] Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.879966 4958 scope.go:117] "RemoveContainer" containerID="4c24405f6f6d755f82dfa95be323e129369d5291501a1db2c463d3c24a0ba0b7" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.910211 4958 scope.go:117] "RemoveContainer" containerID="6ac058b54767dcc9644e4429d125c7dc55c879b4abc1c5ba88740cdf1236c5af" Oct 08 06:48:00 crc kubenswrapper[4958]: E1008 06:48:00.911910 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ac058b54767dcc9644e4429d125c7dc55c879b4abc1c5ba88740cdf1236c5af\": container with ID starting with 6ac058b54767dcc9644e4429d125c7dc55c879b4abc1c5ba88740cdf1236c5af not found: ID does not exist" containerID="6ac058b54767dcc9644e4429d125c7dc55c879b4abc1c5ba88740cdf1236c5af" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 
06:48:00.912013 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ac058b54767dcc9644e4429d125c7dc55c879b4abc1c5ba88740cdf1236c5af"} err="failed to get container status \"6ac058b54767dcc9644e4429d125c7dc55c879b4abc1c5ba88740cdf1236c5af\": rpc error: code = NotFound desc = could not find container \"6ac058b54767dcc9644e4429d125c7dc55c879b4abc1c5ba88740cdf1236c5af\": container with ID starting with 6ac058b54767dcc9644e4429d125c7dc55c879b4abc1c5ba88740cdf1236c5af not found: ID does not exist" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.912052 4958 scope.go:117] "RemoveContainer" containerID="12e6de59af6d3acf4d8e4a6414396fbcae1bbb24ead009a4b0a312de66e9a24f" Oct 08 06:48:00 crc kubenswrapper[4958]: E1008 06:48:00.912511 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12e6de59af6d3acf4d8e4a6414396fbcae1bbb24ead009a4b0a312de66e9a24f\": container with ID starting with 12e6de59af6d3acf4d8e4a6414396fbcae1bbb24ead009a4b0a312de66e9a24f not found: ID does not exist" containerID="12e6de59af6d3acf4d8e4a6414396fbcae1bbb24ead009a4b0a312de66e9a24f" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.912561 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12e6de59af6d3acf4d8e4a6414396fbcae1bbb24ead009a4b0a312de66e9a24f"} err="failed to get container status \"12e6de59af6d3acf4d8e4a6414396fbcae1bbb24ead009a4b0a312de66e9a24f\": rpc error: code = NotFound desc = could not find container \"12e6de59af6d3acf4d8e4a6414396fbcae1bbb24ead009a4b0a312de66e9a24f\": container with ID starting with 12e6de59af6d3acf4d8e4a6414396fbcae1bbb24ead009a4b0a312de66e9a24f not found: ID does not exist" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.912590 4958 scope.go:117] "RemoveContainer" containerID="4c24405f6f6d755f82dfa95be323e129369d5291501a1db2c463d3c24a0ba0b7" Oct 08 06:48:00 crc 
kubenswrapper[4958]: E1008 06:48:00.913414 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c24405f6f6d755f82dfa95be323e129369d5291501a1db2c463d3c24a0ba0b7\": container with ID starting with 4c24405f6f6d755f82dfa95be323e129369d5291501a1db2c463d3c24a0ba0b7 not found: ID does not exist" containerID="4c24405f6f6d755f82dfa95be323e129369d5291501a1db2c463d3c24a0ba0b7" Oct 08 06:48:00 crc kubenswrapper[4958]: I1008 06:48:00.913502 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c24405f6f6d755f82dfa95be323e129369d5291501a1db2c463d3c24a0ba0b7"} err="failed to get container status \"4c24405f6f6d755f82dfa95be323e129369d5291501a1db2c463d3c24a0ba0b7\": rpc error: code = NotFound desc = could not find container \"4c24405f6f6d755f82dfa95be323e129369d5291501a1db2c463d3c24a0ba0b7\": container with ID starting with 4c24405f6f6d755f82dfa95be323e129369d5291501a1db2c463d3c24a0ba0b7 not found: ID does not exist" Oct 08 06:48:01 crc kubenswrapper[4958]: I1008 06:48:01.590384 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="240d9f60-7530-45f2-b225-439ce52f6ca3" path="/var/lib/kubelet/pods/240d9f60-7530-45f2-b225-439ce52f6ca3/volumes" Oct 08 06:48:02 crc kubenswrapper[4958]: I1008 06:48:02.074656 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-67bdd4c657-nfh8t" Oct 08 06:48:06 crc kubenswrapper[4958]: I1008 06:48:06.845196 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:48:06 crc kubenswrapper[4958]: I1008 06:48:06.845787 4958 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:48:21 crc kubenswrapper[4958]: I1008 06:48:21.736262 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6c464c978f-r4jsv" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.617155 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-67rh4"] Oct 08 06:48:22 crc kubenswrapper[4958]: E1008 06:48:22.617453 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240d9f60-7530-45f2-b225-439ce52f6ca3" containerName="extract-content" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.617475 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="240d9f60-7530-45f2-b225-439ce52f6ca3" containerName="extract-content" Oct 08 06:48:22 crc kubenswrapper[4958]: E1008 06:48:22.617499 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240d9f60-7530-45f2-b225-439ce52f6ca3" containerName="registry-server" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.617510 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="240d9f60-7530-45f2-b225-439ce52f6ca3" containerName="registry-server" Oct 08 06:48:22 crc kubenswrapper[4958]: E1008 06:48:22.617538 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240d9f60-7530-45f2-b225-439ce52f6ca3" containerName="extract-utilities" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.617549 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="240d9f60-7530-45f2-b225-439ce52f6ca3" containerName="extract-utilities" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.617718 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="240d9f60-7530-45f2-b225-439ce52f6ca3" containerName="registry-server" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.621982 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.624632 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-dwlh2" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.624712 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.624864 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.624975 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g"] Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.625755 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.627796 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.642177 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g"] Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.709112 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-b4xgg"] Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.709938 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-b4xgg" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.711963 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.712279 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-bnwvw" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.714214 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-frr-conf\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.714311 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m77rg\" (UniqueName: \"kubernetes.io/projected/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-kube-api-access-m77rg\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.714414 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-metrics-certs\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.714486 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-reloader\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.714552 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjhj5\" (UniqueName: \"kubernetes.io/projected/8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a-kube-api-access-rjhj5\") pod \"frr-k8s-webhook-server-64bf5d555-5jj6g\" (UID: \"8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.714624 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-frr-startup\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.714691 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-metrics\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.714792 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-frr-sockets\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.714863 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a-cert\") pod \"frr-k8s-webhook-server-64bf5d555-5jj6g\" (UID: \"8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.715392 4958 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.715593 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.718877 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-gtdwt"] Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.719791 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-gtdwt" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.724281 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.727388 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-gtdwt"] Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.816454 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-metrics-certs\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.816508 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjhj5\" (UniqueName: \"kubernetes.io/projected/8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a-kube-api-access-rjhj5\") pod \"frr-k8s-webhook-server-64bf5d555-5jj6g\" (UID: \"8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.816530 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-reloader\") pod \"frr-k8s-67rh4\" 
(UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.816551 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a010ae87-52ed-4aaf-b47d-75bd4f76803e-cert\") pod \"controller-68d546b9d8-gtdwt\" (UID: \"a010ae87-52ed-4aaf-b47d-75bd4f76803e\") " pod="metallb-system/controller-68d546b9d8-gtdwt" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.816570 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-frr-startup\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.816590 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-metrics\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.817052 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-reloader\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.817090 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftlwg\" (UniqueName: \"kubernetes.io/projected/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-kube-api-access-ftlwg\") pod \"speaker-b4xgg\" (UID: \"d7953bb6-201d-4470-96d6-f1ffc75ad4a9\") " pod="metallb-system/speaker-b4xgg" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.817110 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-metrics-certs\") pod \"speaker-b4xgg\" (UID: \"d7953bb6-201d-4470-96d6-f1ffc75ad4a9\") " pod="metallb-system/speaker-b4xgg" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.817152 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c9q8\" (UniqueName: \"kubernetes.io/projected/a010ae87-52ed-4aaf-b47d-75bd4f76803e-kube-api-access-8c9q8\") pod \"controller-68d546b9d8-gtdwt\" (UID: \"a010ae87-52ed-4aaf-b47d-75bd4f76803e\") " pod="metallb-system/controller-68d546b9d8-gtdwt" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.817190 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-frr-sockets\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.817209 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-memberlist\") pod \"speaker-b4xgg\" (UID: \"d7953bb6-201d-4470-96d6-f1ffc75ad4a9\") " pod="metallb-system/speaker-b4xgg" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.817228 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a010ae87-52ed-4aaf-b47d-75bd4f76803e-metrics-certs\") pod \"controller-68d546b9d8-gtdwt\" (UID: \"a010ae87-52ed-4aaf-b47d-75bd4f76803e\") " pod="metallb-system/controller-68d546b9d8-gtdwt" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.817246 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a-cert\") pod \"frr-k8s-webhook-server-64bf5d555-5jj6g\" (UID: \"8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.817262 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-metallb-excludel2\") pod \"speaker-b4xgg\" (UID: \"d7953bb6-201d-4470-96d6-f1ffc75ad4a9\") " pod="metallb-system/speaker-b4xgg" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.817302 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-frr-conf\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.817319 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m77rg\" (UniqueName: \"kubernetes.io/projected/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-kube-api-access-m77rg\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.817331 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-metrics\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: E1008 06:48:22.817463 4958 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Oct 08 06:48:22 crc kubenswrapper[4958]: E1008 06:48:22.817506 4958 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a-cert podName:8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a nodeName:}" failed. No retries permitted until 2025-10-08 06:48:23.317491124 +0000 UTC m=+846.447183725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a-cert") pod "frr-k8s-webhook-server-64bf5d555-5jj6g" (UID: "8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a") : secret "frr-k8s-webhook-server-cert" not found Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.817590 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-frr-sockets\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.817824 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-frr-conf\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.818444 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-frr-startup\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.825161 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-metrics-certs\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc 
kubenswrapper[4958]: I1008 06:48:22.837842 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m77rg\" (UniqueName: \"kubernetes.io/projected/d47b4d36-5b8e-4da8-8896-a5bb3f88a473-kube-api-access-m77rg\") pod \"frr-k8s-67rh4\" (UID: \"d47b4d36-5b8e-4da8-8896-a5bb3f88a473\") " pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.844704 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjhj5\" (UniqueName: \"kubernetes.io/projected/8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a-kube-api-access-rjhj5\") pod \"frr-k8s-webhook-server-64bf5d555-5jj6g\" (UID: \"8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.918403 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a010ae87-52ed-4aaf-b47d-75bd4f76803e-cert\") pod \"controller-68d546b9d8-gtdwt\" (UID: \"a010ae87-52ed-4aaf-b47d-75bd4f76803e\") " pod="metallb-system/controller-68d546b9d8-gtdwt" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.918446 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftlwg\" (UniqueName: \"kubernetes.io/projected/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-kube-api-access-ftlwg\") pod \"speaker-b4xgg\" (UID: \"d7953bb6-201d-4470-96d6-f1ffc75ad4a9\") " pod="metallb-system/speaker-b4xgg" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.918464 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-metrics-certs\") pod \"speaker-b4xgg\" (UID: \"d7953bb6-201d-4470-96d6-f1ffc75ad4a9\") " pod="metallb-system/speaker-b4xgg" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.918491 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8c9q8\" (UniqueName: \"kubernetes.io/projected/a010ae87-52ed-4aaf-b47d-75bd4f76803e-kube-api-access-8c9q8\") pod \"controller-68d546b9d8-gtdwt\" (UID: \"a010ae87-52ed-4aaf-b47d-75bd4f76803e\") " pod="metallb-system/controller-68d546b9d8-gtdwt" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.918510 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-memberlist\") pod \"speaker-b4xgg\" (UID: \"d7953bb6-201d-4470-96d6-f1ffc75ad4a9\") " pod="metallb-system/speaker-b4xgg" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.918536 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a010ae87-52ed-4aaf-b47d-75bd4f76803e-metrics-certs\") pod \"controller-68d546b9d8-gtdwt\" (UID: \"a010ae87-52ed-4aaf-b47d-75bd4f76803e\") " pod="metallb-system/controller-68d546b9d8-gtdwt" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.918552 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-metallb-excludel2\") pod \"speaker-b4xgg\" (UID: \"d7953bb6-201d-4470-96d6-f1ffc75ad4a9\") " pod="metallb-system/speaker-b4xgg" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.919087 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-metallb-excludel2\") pod \"speaker-b4xgg\" (UID: \"d7953bb6-201d-4470-96d6-f1ffc75ad4a9\") " pod="metallb-system/speaker-b4xgg" Oct 08 06:48:22 crc kubenswrapper[4958]: E1008 06:48:22.919529 4958 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 08 06:48:22 crc 
kubenswrapper[4958]: E1008 06:48:22.919569 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-memberlist podName:d7953bb6-201d-4470-96d6-f1ffc75ad4a9 nodeName:}" failed. No retries permitted until 2025-10-08 06:48:23.419557119 +0000 UTC m=+846.549249720 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-memberlist") pod "speaker-b4xgg" (UID: "d7953bb6-201d-4470-96d6-f1ffc75ad4a9") : secret "metallb-memberlist" not found Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.921628 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.925238 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-metrics-certs\") pod \"speaker-b4xgg\" (UID: \"d7953bb6-201d-4470-96d6-f1ffc75ad4a9\") " pod="metallb-system/speaker-b4xgg" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.933002 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a010ae87-52ed-4aaf-b47d-75bd4f76803e-metrics-certs\") pod \"controller-68d546b9d8-gtdwt\" (UID: \"a010ae87-52ed-4aaf-b47d-75bd4f76803e\") " pod="metallb-system/controller-68d546b9d8-gtdwt" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.935622 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftlwg\" (UniqueName: \"kubernetes.io/projected/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-kube-api-access-ftlwg\") pod \"speaker-b4xgg\" (UID: \"d7953bb6-201d-4470-96d6-f1ffc75ad4a9\") " pod="metallb-system/speaker-b4xgg" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.936794 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-8c9q8\" (UniqueName: \"kubernetes.io/projected/a010ae87-52ed-4aaf-b47d-75bd4f76803e-kube-api-access-8c9q8\") pod \"controller-68d546b9d8-gtdwt\" (UID: \"a010ae87-52ed-4aaf-b47d-75bd4f76803e\") " pod="metallb-system/controller-68d546b9d8-gtdwt" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.937424 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a010ae87-52ed-4aaf-b47d-75bd4f76803e-cert\") pod \"controller-68d546b9d8-gtdwt\" (UID: \"a010ae87-52ed-4aaf-b47d-75bd4f76803e\") " pod="metallb-system/controller-68d546b9d8-gtdwt" Oct 08 06:48:22 crc kubenswrapper[4958]: I1008 06:48:22.940414 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:23 crc kubenswrapper[4958]: I1008 06:48:23.033261 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-gtdwt" Oct 08 06:48:23 crc kubenswrapper[4958]: I1008 06:48:23.324452 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a-cert\") pod \"frr-k8s-webhook-server-64bf5d555-5jj6g\" (UID: \"8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g" Oct 08 06:48:23 crc kubenswrapper[4958]: I1008 06:48:23.332870 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a-cert\") pod \"frr-k8s-webhook-server-64bf5d555-5jj6g\" (UID: \"8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g" Oct 08 06:48:23 crc kubenswrapper[4958]: I1008 06:48:23.427209 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-memberlist\") pod \"speaker-b4xgg\" (UID: \"d7953bb6-201d-4470-96d6-f1ffc75ad4a9\") " pod="metallb-system/speaker-b4xgg" Oct 08 06:48:23 crc kubenswrapper[4958]: E1008 06:48:23.427409 4958 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 08 06:48:23 crc kubenswrapper[4958]: E1008 06:48:23.427508 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-memberlist podName:d7953bb6-201d-4470-96d6-f1ffc75ad4a9 nodeName:}" failed. No retries permitted until 2025-10-08 06:48:24.427480437 +0000 UTC m=+847.557173078 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-memberlist") pod "speaker-b4xgg" (UID: "d7953bb6-201d-4470-96d6-f1ffc75ad4a9") : secret "metallb-memberlist" not found Oct 08 06:48:23 crc kubenswrapper[4958]: I1008 06:48:23.492175 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-gtdwt"] Oct 08 06:48:23 crc kubenswrapper[4958]: W1008 06:48:23.497729 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda010ae87_52ed_4aaf_b47d_75bd4f76803e.slice/crio-c8230b21c144b7ada09cbea649a8f8596c541f29f9827c5f567c457158b4712a WatchSource:0}: Error finding container c8230b21c144b7ada09cbea649a8f8596c541f29f9827c5f567c457158b4712a: Status 404 returned error can't find the container with id c8230b21c144b7ada09cbea649a8f8596c541f29f9827c5f567c457158b4712a Oct 08 06:48:23 crc kubenswrapper[4958]: I1008 06:48:23.547766 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g" Oct 08 06:48:23 crc kubenswrapper[4958]: I1008 06:48:23.787132 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g"] Oct 08 06:48:23 crc kubenswrapper[4958]: W1008 06:48:23.791316 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c4e8505_657a_4d15_b0d9_a2b7a2cd6b7a.slice/crio-c6ca125786a0647fc47251b4694193c1ad4282eb26ffe0756ab541e30ddada29 WatchSource:0}: Error finding container c6ca125786a0647fc47251b4694193c1ad4282eb26ffe0756ab541e30ddada29: Status 404 returned error can't find the container with id c6ca125786a0647fc47251b4694193c1ad4282eb26ffe0756ab541e30ddada29 Oct 08 06:48:23 crc kubenswrapper[4958]: I1008 06:48:23.993578 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g" event={"ID":"8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a","Type":"ContainerStarted","Data":"c6ca125786a0647fc47251b4694193c1ad4282eb26ffe0756ab541e30ddada29"} Oct 08 06:48:23 crc kubenswrapper[4958]: I1008 06:48:23.996074 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-gtdwt" event={"ID":"a010ae87-52ed-4aaf-b47d-75bd4f76803e","Type":"ContainerStarted","Data":"0713f38d884f90cc2554afd00d4d2a51fa013ce5c31dbabca73048d9608e03aa"} Oct 08 06:48:23 crc kubenswrapper[4958]: I1008 06:48:23.996106 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-gtdwt" event={"ID":"a010ae87-52ed-4aaf-b47d-75bd4f76803e","Type":"ContainerStarted","Data":"239add17e6a6afc16006ccace892f809cb752dab6463f1f64c9b16835cefcfb5"} Oct 08 06:48:23 crc kubenswrapper[4958]: I1008 06:48:23.996118 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-gtdwt" 
event={"ID":"a010ae87-52ed-4aaf-b47d-75bd4f76803e","Type":"ContainerStarted","Data":"c8230b21c144b7ada09cbea649a8f8596c541f29f9827c5f567c457158b4712a"} Oct 08 06:48:23 crc kubenswrapper[4958]: I1008 06:48:23.996251 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-gtdwt" Oct 08 06:48:23 crc kubenswrapper[4958]: I1008 06:48:23.997459 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-67rh4" event={"ID":"d47b4d36-5b8e-4da8-8896-a5bb3f88a473","Type":"ContainerStarted","Data":"ca46ac89a3f95f3453269e8265a4f1bb1a2de77c2dcc585a3d899e3e0ee05815"} Oct 08 06:48:24 crc kubenswrapper[4958]: I1008 06:48:24.026315 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-gtdwt" podStartSLOduration=2.026272418 podStartE2EDuration="2.026272418s" podCreationTimestamp="2025-10-08 06:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:48:24.013146494 +0000 UTC m=+847.142839115" watchObservedRunningTime="2025-10-08 06:48:24.026272418 +0000 UTC m=+847.155965059" Oct 08 06:48:24 crc kubenswrapper[4958]: I1008 06:48:24.451218 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-memberlist\") pod \"speaker-b4xgg\" (UID: \"d7953bb6-201d-4470-96d6-f1ffc75ad4a9\") " pod="metallb-system/speaker-b4xgg" Oct 08 06:48:24 crc kubenswrapper[4958]: I1008 06:48:24.458428 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d7953bb6-201d-4470-96d6-f1ffc75ad4a9-memberlist\") pod \"speaker-b4xgg\" (UID: \"d7953bb6-201d-4470-96d6-f1ffc75ad4a9\") " pod="metallb-system/speaker-b4xgg" Oct 08 06:48:24 crc kubenswrapper[4958]: I1008 06:48:24.529690 4958 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="metallb-system/speaker-b4xgg" Oct 08 06:48:24 crc kubenswrapper[4958]: W1008 06:48:24.574205 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7953bb6_201d_4470_96d6_f1ffc75ad4a9.slice/crio-3e8f7cad7da967ea332ad5f7801a81e56e36705eec3585f28e7209b142ef781b WatchSource:0}: Error finding container 3e8f7cad7da967ea332ad5f7801a81e56e36705eec3585f28e7209b142ef781b: Status 404 returned error can't find the container with id 3e8f7cad7da967ea332ad5f7801a81e56e36705eec3585f28e7209b142ef781b Oct 08 06:48:25 crc kubenswrapper[4958]: I1008 06:48:25.005084 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-b4xgg" event={"ID":"d7953bb6-201d-4470-96d6-f1ffc75ad4a9","Type":"ContainerStarted","Data":"aeecb0c5ed70caef9a2f901c9b31279f3e885dc4159e397b3e590133936245ed"} Oct 08 06:48:25 crc kubenswrapper[4958]: I1008 06:48:25.005142 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-b4xgg" event={"ID":"d7953bb6-201d-4470-96d6-f1ffc75ad4a9","Type":"ContainerStarted","Data":"3e8f7cad7da967ea332ad5f7801a81e56e36705eec3585f28e7209b142ef781b"} Oct 08 06:48:26 crc kubenswrapper[4958]: I1008 06:48:26.015281 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-b4xgg" event={"ID":"d7953bb6-201d-4470-96d6-f1ffc75ad4a9","Type":"ContainerStarted","Data":"269812ca0e5677a683ebe0713a945054ccab85ab0498097bc67ff160b0162ad2"} Oct 08 06:48:26 crc kubenswrapper[4958]: I1008 06:48:26.015614 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-b4xgg" Oct 08 06:48:26 crc kubenswrapper[4958]: I1008 06:48:26.032076 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-b4xgg" podStartSLOduration=4.032058822 podStartE2EDuration="4.032058822s" podCreationTimestamp="2025-10-08 06:48:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:48:26.027871649 +0000 UTC m=+849.157564250" watchObservedRunningTime="2025-10-08 06:48:26.032058822 +0000 UTC m=+849.161751413" Oct 08 06:48:31 crc kubenswrapper[4958]: I1008 06:48:31.047125 4958 generic.go:334] "Generic (PLEG): container finished" podID="d47b4d36-5b8e-4da8-8896-a5bb3f88a473" containerID="67658a40267075f7996b633475b94bf94b47042af91f2c4c5c4cd1bbb0e3a756" exitCode=0 Oct 08 06:48:31 crc kubenswrapper[4958]: I1008 06:48:31.047211 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-67rh4" event={"ID":"d47b4d36-5b8e-4da8-8896-a5bb3f88a473","Type":"ContainerDied","Data":"67658a40267075f7996b633475b94bf94b47042af91f2c4c5c4cd1bbb0e3a756"} Oct 08 06:48:31 crc kubenswrapper[4958]: I1008 06:48:31.050590 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g" event={"ID":"8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a","Type":"ContainerStarted","Data":"34c604e2fcd1c092c2b673999d76afb65843a8e25e27b264fd296136ab38413c"} Oct 08 06:48:31 crc kubenswrapper[4958]: I1008 06:48:31.050860 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g" Oct 08 06:48:31 crc kubenswrapper[4958]: I1008 06:48:31.119494 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g" podStartSLOduration=2.605876141 podStartE2EDuration="9.119459876s" podCreationTimestamp="2025-10-08 06:48:22 +0000 UTC" firstStartedPulling="2025-10-08 06:48:23.793250839 +0000 UTC m=+846.922943440" lastFinishedPulling="2025-10-08 06:48:30.306834564 +0000 UTC m=+853.436527175" observedRunningTime="2025-10-08 06:48:31.110460153 +0000 UTC m=+854.240152814" watchObservedRunningTime="2025-10-08 06:48:31.119459876 +0000 UTC m=+854.249152557" Oct 08 06:48:32 
crc kubenswrapper[4958]: I1008 06:48:32.057867 4958 generic.go:334] "Generic (PLEG): container finished" podID="d47b4d36-5b8e-4da8-8896-a5bb3f88a473" containerID="0aeaaad8b0ca2169af461bc7ee3bfa3275a9b6705fd7960dc23e7b8d8624c02a" exitCode=0 Oct 08 06:48:32 crc kubenswrapper[4958]: I1008 06:48:32.058176 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-67rh4" event={"ID":"d47b4d36-5b8e-4da8-8896-a5bb3f88a473","Type":"ContainerDied","Data":"0aeaaad8b0ca2169af461bc7ee3bfa3275a9b6705fd7960dc23e7b8d8624c02a"} Oct 08 06:48:33 crc kubenswrapper[4958]: I1008 06:48:33.039460 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-gtdwt" Oct 08 06:48:33 crc kubenswrapper[4958]: I1008 06:48:33.075290 4958 generic.go:334] "Generic (PLEG): container finished" podID="d47b4d36-5b8e-4da8-8896-a5bb3f88a473" containerID="9484dc5114699418ec997e8332c3321618a7d9919351222d21371ba3e85483bd" exitCode=0 Oct 08 06:48:33 crc kubenswrapper[4958]: I1008 06:48:33.075367 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-67rh4" event={"ID":"d47b4d36-5b8e-4da8-8896-a5bb3f88a473","Type":"ContainerDied","Data":"9484dc5114699418ec997e8332c3321618a7d9919351222d21371ba3e85483bd"} Oct 08 06:48:34 crc kubenswrapper[4958]: I1008 06:48:34.087385 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-67rh4" event={"ID":"d47b4d36-5b8e-4da8-8896-a5bb3f88a473","Type":"ContainerStarted","Data":"23d5938ec8355b9dd05d873dc30749b6eb6751c5dd7fe25186469e695df4128d"} Oct 08 06:48:34 crc kubenswrapper[4958]: I1008 06:48:34.087734 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-67rh4" event={"ID":"d47b4d36-5b8e-4da8-8896-a5bb3f88a473","Type":"ContainerStarted","Data":"e43389f7870e3213fea52943a2663d5c0fd3396673f298aa9a1e5b80a53eb1f4"} Oct 08 06:48:34 crc kubenswrapper[4958]: I1008 06:48:34.087748 4958 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/frr-k8s-67rh4" event={"ID":"d47b4d36-5b8e-4da8-8896-a5bb3f88a473","Type":"ContainerStarted","Data":"9158dd3b242595cc0b5bd546bfd8d14543473507b653df97d6f6d7c6c9f60720"} Oct 08 06:48:34 crc kubenswrapper[4958]: I1008 06:48:34.087759 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-67rh4" event={"ID":"d47b4d36-5b8e-4da8-8896-a5bb3f88a473","Type":"ContainerStarted","Data":"ae27c27deb9177aa5d745a36196eb51a66111f95f1bceff295c9ee1abcda4df3"} Oct 08 06:48:34 crc kubenswrapper[4958]: I1008 06:48:34.087771 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-67rh4" event={"ID":"d47b4d36-5b8e-4da8-8896-a5bb3f88a473","Type":"ContainerStarted","Data":"6af80ba8c5d984264d0c988ff977d830e3ffae396c76847a6c01b0df78a1274f"} Oct 08 06:48:34 crc kubenswrapper[4958]: I1008 06:48:34.535413 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-b4xgg" Oct 08 06:48:35 crc kubenswrapper[4958]: I1008 06:48:35.108774 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-67rh4" event={"ID":"d47b4d36-5b8e-4da8-8896-a5bb3f88a473","Type":"ContainerStarted","Data":"1035ec0c91df1b866425722c816bde88cb566408b4f9e09b7d9e63c967f86691"} Oct 08 06:48:35 crc kubenswrapper[4958]: I1008 06:48:35.109215 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:35 crc kubenswrapper[4958]: I1008 06:48:35.147301 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-67rh4" podStartSLOduration=5.930637642 podStartE2EDuration="13.147275532s" podCreationTimestamp="2025-10-08 06:48:22 +0000 UTC" firstStartedPulling="2025-10-08 06:48:23.07375395 +0000 UTC m=+846.203446561" lastFinishedPulling="2025-10-08 06:48:30.29039184 +0000 UTC m=+853.420084451" observedRunningTime="2025-10-08 06:48:35.140661924 +0000 UTC m=+858.270354565" 
watchObservedRunningTime="2025-10-08 06:48:35.147275532 +0000 UTC m=+858.276968163" Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.566888 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh"] Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.568173 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.572528 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.585194 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh"] Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.630655 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c102b8e-9719-43ed-a225-bd3425249d4e-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh\" (UID: \"3c102b8e-9719-43ed-a225-bd3425249d4e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.630733 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c102b8e-9719-43ed-a225-bd3425249d4e-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh\" (UID: \"3c102b8e-9719-43ed-a225-bd3425249d4e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.630976 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz5rb\" (UniqueName: \"kubernetes.io/projected/3c102b8e-9719-43ed-a225-bd3425249d4e-kube-api-access-mz5rb\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh\" (UID: \"3c102b8e-9719-43ed-a225-bd3425249d4e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.731988 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c102b8e-9719-43ed-a225-bd3425249d4e-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh\" (UID: \"3c102b8e-9719-43ed-a225-bd3425249d4e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.732067 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c102b8e-9719-43ed-a225-bd3425249d4e-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh\" (UID: \"3c102b8e-9719-43ed-a225-bd3425249d4e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.732139 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz5rb\" (UniqueName: \"kubernetes.io/projected/3c102b8e-9719-43ed-a225-bd3425249d4e-kube-api-access-mz5rb\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh\" (UID: \"3c102b8e-9719-43ed-a225-bd3425249d4e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.732729 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3c102b8e-9719-43ed-a225-bd3425249d4e-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh\" (UID: \"3c102b8e-9719-43ed-a225-bd3425249d4e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.732990 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c102b8e-9719-43ed-a225-bd3425249d4e-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh\" (UID: \"3c102b8e-9719-43ed-a225-bd3425249d4e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.754033 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz5rb\" (UniqueName: \"kubernetes.io/projected/3c102b8e-9719-43ed-a225-bd3425249d4e-kube-api-access-mz5rb\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh\" (UID: \"3c102b8e-9719-43ed-a225-bd3425249d4e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.845780 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.845844 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:48:36 crc kubenswrapper[4958]: 
I1008 06:48:36.845883 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.846443 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c231c00392b517e6cca660a4aec9d278a8fcf4be120c9a0359f28467806c28b5"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.846500 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://c231c00392b517e6cca660a4aec9d278a8fcf4be120c9a0359f28467806c28b5" gracePeriod=600 Oct 08 06:48:36 crc kubenswrapper[4958]: I1008 06:48:36.883407 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" Oct 08 06:48:37 crc kubenswrapper[4958]: I1008 06:48:37.143655 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="c231c00392b517e6cca660a4aec9d278a8fcf4be120c9a0359f28467806c28b5" exitCode=0 Oct 08 06:48:37 crc kubenswrapper[4958]: I1008 06:48:37.143828 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"c231c00392b517e6cca660a4aec9d278a8fcf4be120c9a0359f28467806c28b5"} Oct 08 06:48:37 crc kubenswrapper[4958]: I1008 06:48:37.143939 4958 scope.go:117] "RemoveContainer" containerID="c3b06042f5aa4fe24bf7e2c05c94bb353cf787b352b0bf1e1e243c939f67b05c" Oct 08 06:48:37 crc kubenswrapper[4958]: I1008 06:48:37.329344 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh"] Oct 08 06:48:37 crc kubenswrapper[4958]: W1008 06:48:37.332561 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c102b8e_9719_43ed_a225_bd3425249d4e.slice/crio-28c193930ee93444c8dd0452a4828195138001848607c300e41a751024ff5037 WatchSource:0}: Error finding container 28c193930ee93444c8dd0452a4828195138001848607c300e41a751024ff5037: Status 404 returned error can't find the container with id 28c193930ee93444c8dd0452a4828195138001848607c300e41a751024ff5037 Oct 08 06:48:37 crc kubenswrapper[4958]: I1008 06:48:37.941626 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:38 crc kubenswrapper[4958]: I1008 06:48:38.006548 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:38 
crc kubenswrapper[4958]: I1008 06:48:38.157416 4958 generic.go:334] "Generic (PLEG): container finished" podID="3c102b8e-9719-43ed-a225-bd3425249d4e" containerID="3f71e0ae3afc7174f7b0a96e70f8023ae968e260766a7997cf6324809bf1b926" exitCode=0 Oct 08 06:48:38 crc kubenswrapper[4958]: I1008 06:48:38.157930 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" event={"ID":"3c102b8e-9719-43ed-a225-bd3425249d4e","Type":"ContainerDied","Data":"3f71e0ae3afc7174f7b0a96e70f8023ae968e260766a7997cf6324809bf1b926"} Oct 08 06:48:38 crc kubenswrapper[4958]: I1008 06:48:38.158022 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" event={"ID":"3c102b8e-9719-43ed-a225-bd3425249d4e","Type":"ContainerStarted","Data":"28c193930ee93444c8dd0452a4828195138001848607c300e41a751024ff5037"} Oct 08 06:48:38 crc kubenswrapper[4958]: I1008 06:48:38.167213 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"536ded4ebcb3bc7d24c7fc08780da096046aca9fd4294e702291ed048ad523b0"} Oct 08 06:48:42 crc kubenswrapper[4958]: I1008 06:48:42.201277 4958 generic.go:334] "Generic (PLEG): container finished" podID="3c102b8e-9719-43ed-a225-bd3425249d4e" containerID="a52d2dbdc56f866524ceb78bf4726867d21c35c4a644864c623173d850357292" exitCode=0 Oct 08 06:48:42 crc kubenswrapper[4958]: I1008 06:48:42.201545 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" event={"ID":"3c102b8e-9719-43ed-a225-bd3425249d4e","Type":"ContainerDied","Data":"a52d2dbdc56f866524ceb78bf4726867d21c35c4a644864c623173d850357292"} Oct 08 06:48:43 crc kubenswrapper[4958]: I1008 06:48:43.214395 4958 generic.go:334] 
"Generic (PLEG): container finished" podID="3c102b8e-9719-43ed-a225-bd3425249d4e" containerID="4c83c2450d8ed070b210f8485268f7ec0be1e30f8e61a98bd3cdc76b958c1bcc" exitCode=0 Oct 08 06:48:43 crc kubenswrapper[4958]: I1008 06:48:43.214481 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" event={"ID":"3c102b8e-9719-43ed-a225-bd3425249d4e","Type":"ContainerDied","Data":"4c83c2450d8ed070b210f8485268f7ec0be1e30f8e61a98bd3cdc76b958c1bcc"} Oct 08 06:48:43 crc kubenswrapper[4958]: I1008 06:48:43.555439 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-5jj6g" Oct 08 06:48:44 crc kubenswrapper[4958]: I1008 06:48:44.503355 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" Oct 08 06:48:44 crc kubenswrapper[4958]: I1008 06:48:44.539509 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c102b8e-9719-43ed-a225-bd3425249d4e-util\") pod \"3c102b8e-9719-43ed-a225-bd3425249d4e\" (UID: \"3c102b8e-9719-43ed-a225-bd3425249d4e\") " Oct 08 06:48:44 crc kubenswrapper[4958]: I1008 06:48:44.539688 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c102b8e-9719-43ed-a225-bd3425249d4e-bundle\") pod \"3c102b8e-9719-43ed-a225-bd3425249d4e\" (UID: \"3c102b8e-9719-43ed-a225-bd3425249d4e\") " Oct 08 06:48:44 crc kubenswrapper[4958]: I1008 06:48:44.539739 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz5rb\" (UniqueName: \"kubernetes.io/projected/3c102b8e-9719-43ed-a225-bd3425249d4e-kube-api-access-mz5rb\") pod \"3c102b8e-9719-43ed-a225-bd3425249d4e\" (UID: 
\"3c102b8e-9719-43ed-a225-bd3425249d4e\") " Oct 08 06:48:44 crc kubenswrapper[4958]: I1008 06:48:44.541866 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c102b8e-9719-43ed-a225-bd3425249d4e-bundle" (OuterVolumeSpecName: "bundle") pod "3c102b8e-9719-43ed-a225-bd3425249d4e" (UID: "3c102b8e-9719-43ed-a225-bd3425249d4e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:48:44 crc kubenswrapper[4958]: I1008 06:48:44.544922 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c102b8e-9719-43ed-a225-bd3425249d4e-kube-api-access-mz5rb" (OuterVolumeSpecName: "kube-api-access-mz5rb") pod "3c102b8e-9719-43ed-a225-bd3425249d4e" (UID: "3c102b8e-9719-43ed-a225-bd3425249d4e"). InnerVolumeSpecName "kube-api-access-mz5rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:48:44 crc kubenswrapper[4958]: I1008 06:48:44.549625 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c102b8e-9719-43ed-a225-bd3425249d4e-util" (OuterVolumeSpecName: "util") pod "3c102b8e-9719-43ed-a225-bd3425249d4e" (UID: "3c102b8e-9719-43ed-a225-bd3425249d4e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:48:44 crc kubenswrapper[4958]: I1008 06:48:44.640931 4958 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3c102b8e-9719-43ed-a225-bd3425249d4e-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:48:44 crc kubenswrapper[4958]: I1008 06:48:44.641151 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz5rb\" (UniqueName: \"kubernetes.io/projected/3c102b8e-9719-43ed-a225-bd3425249d4e-kube-api-access-mz5rb\") on node \"crc\" DevicePath \"\"" Oct 08 06:48:44 crc kubenswrapper[4958]: I1008 06:48:44.641241 4958 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3c102b8e-9719-43ed-a225-bd3425249d4e-util\") on node \"crc\" DevicePath \"\"" Oct 08 06:48:45 crc kubenswrapper[4958]: I1008 06:48:45.228200 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" event={"ID":"3c102b8e-9719-43ed-a225-bd3425249d4e","Type":"ContainerDied","Data":"28c193930ee93444c8dd0452a4828195138001848607c300e41a751024ff5037"} Oct 08 06:48:45 crc kubenswrapper[4958]: I1008 06:48:45.228241 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28c193930ee93444c8dd0452a4828195138001848607c300e41a751024ff5037" Oct 08 06:48:45 crc kubenswrapper[4958]: I1008 06:48:45.228360 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh" Oct 08 06:48:48 crc kubenswrapper[4958]: I1008 06:48:48.275821 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-xz9wt"] Oct 08 06:48:48 crc kubenswrapper[4958]: E1008 06:48:48.276494 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c102b8e-9719-43ed-a225-bd3425249d4e" containerName="extract" Oct 08 06:48:48 crc kubenswrapper[4958]: I1008 06:48:48.276513 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c102b8e-9719-43ed-a225-bd3425249d4e" containerName="extract" Oct 08 06:48:48 crc kubenswrapper[4958]: E1008 06:48:48.276541 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c102b8e-9719-43ed-a225-bd3425249d4e" containerName="pull" Oct 08 06:48:48 crc kubenswrapper[4958]: I1008 06:48:48.276551 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c102b8e-9719-43ed-a225-bd3425249d4e" containerName="pull" Oct 08 06:48:48 crc kubenswrapper[4958]: E1008 06:48:48.276583 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c102b8e-9719-43ed-a225-bd3425249d4e" containerName="util" Oct 08 06:48:48 crc kubenswrapper[4958]: I1008 06:48:48.276592 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c102b8e-9719-43ed-a225-bd3425249d4e" containerName="util" Oct 08 06:48:48 crc kubenswrapper[4958]: I1008 06:48:48.276803 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c102b8e-9719-43ed-a225-bd3425249d4e" containerName="extract" Oct 08 06:48:48 crc kubenswrapper[4958]: I1008 06:48:48.277350 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-xz9wt" Oct 08 06:48:48 crc kubenswrapper[4958]: I1008 06:48:48.281320 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Oct 08 06:48:48 crc kubenswrapper[4958]: I1008 06:48:48.290411 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Oct 08 06:48:48 crc kubenswrapper[4958]: I1008 06:48:48.290655 4958 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-gt2xl" Oct 08 06:48:48 crc kubenswrapper[4958]: I1008 06:48:48.303415 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xchz7\" (UniqueName: \"kubernetes.io/projected/ecdc8021-2a34-43dc-b3bc-52b949c5d056-kube-api-access-xchz7\") pod \"cert-manager-operator-controller-manager-57cd46d6d-xz9wt\" (UID: \"ecdc8021-2a34-43dc-b3bc-52b949c5d056\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-xz9wt" Oct 08 06:48:48 crc kubenswrapper[4958]: I1008 06:48:48.312288 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-xz9wt"] Oct 08 06:48:48 crc kubenswrapper[4958]: I1008 06:48:48.405057 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xchz7\" (UniqueName: \"kubernetes.io/projected/ecdc8021-2a34-43dc-b3bc-52b949c5d056-kube-api-access-xchz7\") pod \"cert-manager-operator-controller-manager-57cd46d6d-xz9wt\" (UID: \"ecdc8021-2a34-43dc-b3bc-52b949c5d056\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-xz9wt" Oct 08 06:48:48 crc kubenswrapper[4958]: I1008 06:48:48.428200 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xchz7\" (UniqueName: \"kubernetes.io/projected/ecdc8021-2a34-43dc-b3bc-52b949c5d056-kube-api-access-xchz7\") pod \"cert-manager-operator-controller-manager-57cd46d6d-xz9wt\" (UID: \"ecdc8021-2a34-43dc-b3bc-52b949c5d056\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-xz9wt" Oct 08 06:48:48 crc kubenswrapper[4958]: I1008 06:48:48.600853 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-xz9wt" Oct 08 06:48:48 crc kubenswrapper[4958]: I1008 06:48:48.957045 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-xz9wt"] Oct 08 06:48:49 crc kubenswrapper[4958]: I1008 06:48:49.256357 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-xz9wt" event={"ID":"ecdc8021-2a34-43dc-b3bc-52b949c5d056","Type":"ContainerStarted","Data":"da3f43ec5fdfcadf85db2be06e4ba57c8d9351e278aa121a2cffccfecafda856"} Oct 08 06:48:52 crc kubenswrapper[4958]: I1008 06:48:52.947736 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-67rh4" Oct 08 06:48:56 crc kubenswrapper[4958]: I1008 06:48:56.302477 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-xz9wt" event={"ID":"ecdc8021-2a34-43dc-b3bc-52b949c5d056","Type":"ContainerStarted","Data":"c865c95aa734e394589fd82353372736dcd5cb9df4444fbf6d4c8e6b8a6fc398"} Oct 08 06:49:00 crc kubenswrapper[4958]: I1008 06:49:00.416082 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-57cd46d6d-xz9wt" podStartSLOduration=5.452837012 podStartE2EDuration="12.416064742s" podCreationTimestamp="2025-10-08 06:48:48 +0000 UTC" 
firstStartedPulling="2025-10-08 06:48:48.974822675 +0000 UTC m=+872.104515276" lastFinishedPulling="2025-10-08 06:48:55.938050385 +0000 UTC m=+879.067743006" observedRunningTime="2025-10-08 06:48:56.335272165 +0000 UTC m=+879.464964806" watchObservedRunningTime="2025-10-08 06:49:00.416064742 +0000 UTC m=+883.545757363" Oct 08 06:49:00 crc kubenswrapper[4958]: I1008 06:49:00.416467 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-d969966f-6qg2f"] Oct 08 06:49:00 crc kubenswrapper[4958]: I1008 06:49:00.417218 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-d969966f-6qg2f" Oct 08 06:49:00 crc kubenswrapper[4958]: I1008 06:49:00.419072 4958 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-rbrhw" Oct 08 06:49:00 crc kubenswrapper[4958]: I1008 06:49:00.419522 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 08 06:49:00 crc kubenswrapper[4958]: I1008 06:49:00.419667 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 08 06:49:00 crc kubenswrapper[4958]: I1008 06:49:00.430620 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-d969966f-6qg2f"] Oct 08 06:49:00 crc kubenswrapper[4958]: I1008 06:49:00.479036 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z57jn\" (UniqueName: \"kubernetes.io/projected/b8278bd4-b875-4f1c-bfd1-48547c6358ac-kube-api-access-z57jn\") pod \"cert-manager-webhook-d969966f-6qg2f\" (UID: \"b8278bd4-b875-4f1c-bfd1-48547c6358ac\") " pod="cert-manager/cert-manager-webhook-d969966f-6qg2f" Oct 08 06:49:00 crc kubenswrapper[4958]: I1008 06:49:00.479385 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8278bd4-b875-4f1c-bfd1-48547c6358ac-bound-sa-token\") pod \"cert-manager-webhook-d969966f-6qg2f\" (UID: \"b8278bd4-b875-4f1c-bfd1-48547c6358ac\") " pod="cert-manager/cert-manager-webhook-d969966f-6qg2f" Oct 08 06:49:00 crc kubenswrapper[4958]: I1008 06:49:00.580829 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z57jn\" (UniqueName: \"kubernetes.io/projected/b8278bd4-b875-4f1c-bfd1-48547c6358ac-kube-api-access-z57jn\") pod \"cert-manager-webhook-d969966f-6qg2f\" (UID: \"b8278bd4-b875-4f1c-bfd1-48547c6358ac\") " pod="cert-manager/cert-manager-webhook-d969966f-6qg2f" Oct 08 06:49:00 crc kubenswrapper[4958]: I1008 06:49:00.581003 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8278bd4-b875-4f1c-bfd1-48547c6358ac-bound-sa-token\") pod \"cert-manager-webhook-d969966f-6qg2f\" (UID: \"b8278bd4-b875-4f1c-bfd1-48547c6358ac\") " pod="cert-manager/cert-manager-webhook-d969966f-6qg2f" Oct 08 06:49:00 crc kubenswrapper[4958]: I1008 06:49:00.603691 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8278bd4-b875-4f1c-bfd1-48547c6358ac-bound-sa-token\") pod \"cert-manager-webhook-d969966f-6qg2f\" (UID: \"b8278bd4-b875-4f1c-bfd1-48547c6358ac\") " pod="cert-manager/cert-manager-webhook-d969966f-6qg2f" Oct 08 06:49:00 crc kubenswrapper[4958]: I1008 06:49:00.612641 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z57jn\" (UniqueName: \"kubernetes.io/projected/b8278bd4-b875-4f1c-bfd1-48547c6358ac-kube-api-access-z57jn\") pod \"cert-manager-webhook-d969966f-6qg2f\" (UID: \"b8278bd4-b875-4f1c-bfd1-48547c6358ac\") " pod="cert-manager/cert-manager-webhook-d969966f-6qg2f" Oct 08 06:49:00 crc kubenswrapper[4958]: I1008 06:49:00.731536 4958 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-d969966f-6qg2f" Oct 08 06:49:00 crc kubenswrapper[4958]: I1008 06:49:00.957660 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-d969966f-6qg2f"] Oct 08 06:49:01 crc kubenswrapper[4958]: I1008 06:49:01.339790 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-d969966f-6qg2f" event={"ID":"b8278bd4-b875-4f1c-bfd1-48547c6358ac","Type":"ContainerStarted","Data":"b76e6b9a7035bb2f496ee79ef8db02be9934cb13522c509f5340b3ae6d659dd5"} Oct 08 06:49:02 crc kubenswrapper[4958]: I1008 06:49:02.984557 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7d9f95dbf-68v2w"] Oct 08 06:49:02 crc kubenswrapper[4958]: I1008 06:49:02.987181 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-68v2w" Oct 08 06:49:02 crc kubenswrapper[4958]: I1008 06:49:02.989317 4958 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-6d9lf" Oct 08 06:49:03 crc kubenswrapper[4958]: I1008 06:49:03.005997 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7d9f95dbf-68v2w"] Oct 08 06:49:03 crc kubenswrapper[4958]: I1008 06:49:03.014088 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4f54a9b-c106-4a5c-b785-2097ad0d263a-bound-sa-token\") pod \"cert-manager-cainjector-7d9f95dbf-68v2w\" (UID: \"b4f54a9b-c106-4a5c-b785-2097ad0d263a\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-68v2w" Oct 08 06:49:03 crc kubenswrapper[4958]: I1008 06:49:03.014159 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckd2g\" (UniqueName: 
\"kubernetes.io/projected/b4f54a9b-c106-4a5c-b785-2097ad0d263a-kube-api-access-ckd2g\") pod \"cert-manager-cainjector-7d9f95dbf-68v2w\" (UID: \"b4f54a9b-c106-4a5c-b785-2097ad0d263a\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-68v2w" Oct 08 06:49:03 crc kubenswrapper[4958]: I1008 06:49:03.115845 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4f54a9b-c106-4a5c-b785-2097ad0d263a-bound-sa-token\") pod \"cert-manager-cainjector-7d9f95dbf-68v2w\" (UID: \"b4f54a9b-c106-4a5c-b785-2097ad0d263a\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-68v2w" Oct 08 06:49:03 crc kubenswrapper[4958]: I1008 06:49:03.115972 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckd2g\" (UniqueName: \"kubernetes.io/projected/b4f54a9b-c106-4a5c-b785-2097ad0d263a-kube-api-access-ckd2g\") pod \"cert-manager-cainjector-7d9f95dbf-68v2w\" (UID: \"b4f54a9b-c106-4a5c-b785-2097ad0d263a\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-68v2w" Oct 08 06:49:03 crc kubenswrapper[4958]: I1008 06:49:03.138567 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckd2g\" (UniqueName: \"kubernetes.io/projected/b4f54a9b-c106-4a5c-b785-2097ad0d263a-kube-api-access-ckd2g\") pod \"cert-manager-cainjector-7d9f95dbf-68v2w\" (UID: \"b4f54a9b-c106-4a5c-b785-2097ad0d263a\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-68v2w" Oct 08 06:49:03 crc kubenswrapper[4958]: I1008 06:49:03.140421 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4f54a9b-c106-4a5c-b785-2097ad0d263a-bound-sa-token\") pod \"cert-manager-cainjector-7d9f95dbf-68v2w\" (UID: \"b4f54a9b-c106-4a5c-b785-2097ad0d263a\") " pod="cert-manager/cert-manager-cainjector-7d9f95dbf-68v2w" Oct 08 06:49:03 crc kubenswrapper[4958]: I1008 06:49:03.306905 4958 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-68v2w" Oct 08 06:49:03 crc kubenswrapper[4958]: I1008 06:49:03.729516 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7d9f95dbf-68v2w"] Oct 08 06:49:05 crc kubenswrapper[4958]: I1008 06:49:05.381335 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-68v2w" event={"ID":"b4f54a9b-c106-4a5c-b785-2097ad0d263a","Type":"ContainerStarted","Data":"e69ce3d10c38b9867201d53f5ade36748d31f432ac7b9c4c3c74966b0916b2b1"} Oct 08 06:49:06 crc kubenswrapper[4958]: I1008 06:49:06.391671 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-68v2w" event={"ID":"b4f54a9b-c106-4a5c-b785-2097ad0d263a","Type":"ContainerStarted","Data":"f7258fcb88753bbb05efc04314027c5267f976ca0fe17a2746a6de113e978ff0"} Oct 08 06:49:06 crc kubenswrapper[4958]: I1008 06:49:06.393619 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-d969966f-6qg2f" event={"ID":"b8278bd4-b875-4f1c-bfd1-48547c6358ac","Type":"ContainerStarted","Data":"b1ba53aec18d416255f88c830af0b2b87ef0c5ee2611b72c971a33d0db42760d"} Oct 08 06:49:06 crc kubenswrapper[4958]: I1008 06:49:06.393810 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-d969966f-6qg2f" Oct 08 06:49:06 crc kubenswrapper[4958]: I1008 06:49:06.415815 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7d9f95dbf-68v2w" podStartSLOduration=3.613364372 podStartE2EDuration="4.415790399s" podCreationTimestamp="2025-10-08 06:49:02 +0000 UTC" firstStartedPulling="2025-10-08 06:49:05.208042333 +0000 UTC m=+888.337734934" lastFinishedPulling="2025-10-08 06:49:06.01046832 +0000 UTC m=+889.140160961" observedRunningTime="2025-10-08 06:49:06.41062217 +0000 UTC 
m=+889.540314781" watchObservedRunningTime="2025-10-08 06:49:06.415790399 +0000 UTC m=+889.545483020" Oct 08 06:49:06 crc kubenswrapper[4958]: I1008 06:49:06.460818 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-d969966f-6qg2f" podStartSLOduration=2.1448343 podStartE2EDuration="6.460795434s" podCreationTimestamp="2025-10-08 06:49:00 +0000 UTC" firstStartedPulling="2025-10-08 06:49:00.983899507 +0000 UTC m=+884.113592148" lastFinishedPulling="2025-10-08 06:49:05.29982818 +0000 UTC m=+888.429553282" observedRunningTime="2025-10-08 06:49:06.457287779 +0000 UTC m=+889.586980410" watchObservedRunningTime="2025-10-08 06:49:06.460795434 +0000 UTC m=+889.590488045" Oct 08 06:49:10 crc kubenswrapper[4958]: I1008 06:49:10.736133 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-d969966f-6qg2f" Oct 08 06:49:18 crc kubenswrapper[4958]: I1008 06:49:18.386473 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-7d4cc89fcb-9z5df"] Oct 08 06:49:18 crc kubenswrapper[4958]: I1008 06:49:18.388511 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-7d4cc89fcb-9z5df" Oct 08 06:49:18 crc kubenswrapper[4958]: I1008 06:49:18.391727 4958 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-xpd4v" Oct 08 06:49:18 crc kubenswrapper[4958]: I1008 06:49:18.396370 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-7d4cc89fcb-9z5df"] Oct 08 06:49:18 crc kubenswrapper[4958]: I1008 06:49:18.453774 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ccba3bb-cf15-4c0d-95db-7bc15614bdc8-bound-sa-token\") pod \"cert-manager-7d4cc89fcb-9z5df\" (UID: \"0ccba3bb-cf15-4c0d-95db-7bc15614bdc8\") " pod="cert-manager/cert-manager-7d4cc89fcb-9z5df" Oct 08 06:49:18 crc kubenswrapper[4958]: I1008 06:49:18.453865 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwgkm\" (UniqueName: \"kubernetes.io/projected/0ccba3bb-cf15-4c0d-95db-7bc15614bdc8-kube-api-access-qwgkm\") pod \"cert-manager-7d4cc89fcb-9z5df\" (UID: \"0ccba3bb-cf15-4c0d-95db-7bc15614bdc8\") " pod="cert-manager/cert-manager-7d4cc89fcb-9z5df" Oct 08 06:49:18 crc kubenswrapper[4958]: I1008 06:49:18.555122 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ccba3bb-cf15-4c0d-95db-7bc15614bdc8-bound-sa-token\") pod \"cert-manager-7d4cc89fcb-9z5df\" (UID: \"0ccba3bb-cf15-4c0d-95db-7bc15614bdc8\") " pod="cert-manager/cert-manager-7d4cc89fcb-9z5df" Oct 08 06:49:18 crc kubenswrapper[4958]: I1008 06:49:18.555205 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwgkm\" (UniqueName: \"kubernetes.io/projected/0ccba3bb-cf15-4c0d-95db-7bc15614bdc8-kube-api-access-qwgkm\") pod \"cert-manager-7d4cc89fcb-9z5df\" (UID: 
\"0ccba3bb-cf15-4c0d-95db-7bc15614bdc8\") " pod="cert-manager/cert-manager-7d4cc89fcb-9z5df" Oct 08 06:49:18 crc kubenswrapper[4958]: I1008 06:49:18.587891 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwgkm\" (UniqueName: \"kubernetes.io/projected/0ccba3bb-cf15-4c0d-95db-7bc15614bdc8-kube-api-access-qwgkm\") pod \"cert-manager-7d4cc89fcb-9z5df\" (UID: \"0ccba3bb-cf15-4c0d-95db-7bc15614bdc8\") " pod="cert-manager/cert-manager-7d4cc89fcb-9z5df" Oct 08 06:49:18 crc kubenswrapper[4958]: I1008 06:49:18.588861 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ccba3bb-cf15-4c0d-95db-7bc15614bdc8-bound-sa-token\") pod \"cert-manager-7d4cc89fcb-9z5df\" (UID: \"0ccba3bb-cf15-4c0d-95db-7bc15614bdc8\") " pod="cert-manager/cert-manager-7d4cc89fcb-9z5df" Oct 08 06:49:18 crc kubenswrapper[4958]: I1008 06:49:18.715703 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-7d4cc89fcb-9z5df" Oct 08 06:49:19 crc kubenswrapper[4958]: I1008 06:49:19.221120 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-7d4cc89fcb-9z5df"] Oct 08 06:49:19 crc kubenswrapper[4958]: W1008 06:49:19.231237 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ccba3bb_cf15_4c0d_95db_7bc15614bdc8.slice/crio-da19db4161cc19a4496acec3037e589a5ddce0edd271379957006b87b58b594d WatchSource:0}: Error finding container da19db4161cc19a4496acec3037e589a5ddce0edd271379957006b87b58b594d: Status 404 returned error can't find the container with id da19db4161cc19a4496acec3037e589a5ddce0edd271379957006b87b58b594d Oct 08 06:49:19 crc kubenswrapper[4958]: I1008 06:49:19.494501 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-7d4cc89fcb-9z5df" 
event={"ID":"0ccba3bb-cf15-4c0d-95db-7bc15614bdc8","Type":"ContainerStarted","Data":"3f59e6491e8a75848675465947c2a632b67f2b22bb8744f0f9661fd064415c08"} Oct 08 06:49:19 crc kubenswrapper[4958]: I1008 06:49:19.494856 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-7d4cc89fcb-9z5df" event={"ID":"0ccba3bb-cf15-4c0d-95db-7bc15614bdc8","Type":"ContainerStarted","Data":"da19db4161cc19a4496acec3037e589a5ddce0edd271379957006b87b58b594d"} Oct 08 06:49:25 crc kubenswrapper[4958]: I1008 06:49:25.069686 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-7d4cc89fcb-9z5df" podStartSLOduration=7.069661539 podStartE2EDuration="7.069661539s" podCreationTimestamp="2025-10-08 06:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:49:19.527639085 +0000 UTC m=+902.657331696" watchObservedRunningTime="2025-10-08 06:49:25.069661539 +0000 UTC m=+908.199354180" Oct 08 06:49:25 crc kubenswrapper[4958]: I1008 06:49:25.075781 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-b7m75"] Oct 08 06:49:25 crc kubenswrapper[4958]: I1008 06:49:25.077137 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-b7m75" Oct 08 06:49:25 crc kubenswrapper[4958]: I1008 06:49:25.080287 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 08 06:49:25 crc kubenswrapper[4958]: I1008 06:49:25.080795 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 08 06:49:25 crc kubenswrapper[4958]: I1008 06:49:25.081297 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-dq4gh" Oct 08 06:49:25 crc kubenswrapper[4958]: I1008 06:49:25.096906 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b7m75"] Oct 08 06:49:25 crc kubenswrapper[4958]: I1008 06:49:25.161269 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2ttm\" (UniqueName: \"kubernetes.io/projected/56f757b1-f338-45d9-a1c7-82bc8663aa2e-kube-api-access-g2ttm\") pod \"openstack-operator-index-b7m75\" (UID: \"56f757b1-f338-45d9-a1c7-82bc8663aa2e\") " pod="openstack-operators/openstack-operator-index-b7m75" Oct 08 06:49:25 crc kubenswrapper[4958]: I1008 06:49:25.261916 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2ttm\" (UniqueName: \"kubernetes.io/projected/56f757b1-f338-45d9-a1c7-82bc8663aa2e-kube-api-access-g2ttm\") pod \"openstack-operator-index-b7m75\" (UID: \"56f757b1-f338-45d9-a1c7-82bc8663aa2e\") " pod="openstack-operators/openstack-operator-index-b7m75" Oct 08 06:49:25 crc kubenswrapper[4958]: I1008 06:49:25.288874 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2ttm\" (UniqueName: \"kubernetes.io/projected/56f757b1-f338-45d9-a1c7-82bc8663aa2e-kube-api-access-g2ttm\") pod \"openstack-operator-index-b7m75\" (UID: 
\"56f757b1-f338-45d9-a1c7-82bc8663aa2e\") " pod="openstack-operators/openstack-operator-index-b7m75" Oct 08 06:49:25 crc kubenswrapper[4958]: I1008 06:49:25.424645 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-b7m75" Oct 08 06:49:25 crc kubenswrapper[4958]: I1008 06:49:25.714461 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b7m75"] Oct 08 06:49:25 crc kubenswrapper[4958]: W1008 06:49:25.724692 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56f757b1_f338_45d9_a1c7_82bc8663aa2e.slice/crio-a9e58e5528f6deafe5017f57e54476c9d31a1bae311e1347ba1e0c7cfc4c2335 WatchSource:0}: Error finding container a9e58e5528f6deafe5017f57e54476c9d31a1bae311e1347ba1e0c7cfc4c2335: Status 404 returned error can't find the container with id a9e58e5528f6deafe5017f57e54476c9d31a1bae311e1347ba1e0c7cfc4c2335 Oct 08 06:49:26 crc kubenswrapper[4958]: I1008 06:49:26.550047 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b7m75" event={"ID":"56f757b1-f338-45d9-a1c7-82bc8663aa2e","Type":"ContainerStarted","Data":"a9e58e5528f6deafe5017f57e54476c9d31a1bae311e1347ba1e0c7cfc4c2335"} Oct 08 06:49:27 crc kubenswrapper[4958]: I1008 06:49:27.561071 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b7m75" event={"ID":"56f757b1-f338-45d9-a1c7-82bc8663aa2e","Type":"ContainerStarted","Data":"39e9dc2ca0c3eab1e44dc0f194b2104a1d6051de80d9f2053616a8b398588e5f"} Oct 08 06:49:27 crc kubenswrapper[4958]: I1008 06:49:27.607684 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-b7m75" podStartSLOduration=1.709516347 podStartE2EDuration="2.607645467s" podCreationTimestamp="2025-10-08 06:49:25 +0000 UTC" 
firstStartedPulling="2025-10-08 06:49:25.727441992 +0000 UTC m=+908.857134603" lastFinishedPulling="2025-10-08 06:49:26.625571122 +0000 UTC m=+909.755263723" observedRunningTime="2025-10-08 06:49:27.58700916 +0000 UTC m=+910.716701801" watchObservedRunningTime="2025-10-08 06:49:27.607645467 +0000 UTC m=+910.737338138" Oct 08 06:49:28 crc kubenswrapper[4958]: I1008 06:49:28.214422 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-b7m75"] Oct 08 06:49:28 crc kubenswrapper[4958]: I1008 06:49:28.826172 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-n5wdt"] Oct 08 06:49:28 crc kubenswrapper[4958]: I1008 06:49:28.827871 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n5wdt" Oct 08 06:49:28 crc kubenswrapper[4958]: I1008 06:49:28.843491 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n5wdt"] Oct 08 06:49:28 crc kubenswrapper[4958]: I1008 06:49:28.920803 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5fhk\" (UniqueName: \"kubernetes.io/projected/3db691c9-ed86-4d49-9508-deb29b0e26ba-kube-api-access-l5fhk\") pod \"openstack-operator-index-n5wdt\" (UID: \"3db691c9-ed86-4d49-9508-deb29b0e26ba\") " pod="openstack-operators/openstack-operator-index-n5wdt" Oct 08 06:49:29 crc kubenswrapper[4958]: I1008 06:49:29.022939 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5fhk\" (UniqueName: \"kubernetes.io/projected/3db691c9-ed86-4d49-9508-deb29b0e26ba-kube-api-access-l5fhk\") pod \"openstack-operator-index-n5wdt\" (UID: \"3db691c9-ed86-4d49-9508-deb29b0e26ba\") " pod="openstack-operators/openstack-operator-index-n5wdt" Oct 08 06:49:29 crc kubenswrapper[4958]: I1008 06:49:29.048201 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l5fhk\" (UniqueName: \"kubernetes.io/projected/3db691c9-ed86-4d49-9508-deb29b0e26ba-kube-api-access-l5fhk\") pod \"openstack-operator-index-n5wdt\" (UID: \"3db691c9-ed86-4d49-9508-deb29b0e26ba\") " pod="openstack-operators/openstack-operator-index-n5wdt" Oct 08 06:49:29 crc kubenswrapper[4958]: I1008 06:49:29.166037 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n5wdt" Oct 08 06:49:29 crc kubenswrapper[4958]: I1008 06:49:29.532469 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n5wdt"] Oct 08 06:49:29 crc kubenswrapper[4958]: I1008 06:49:29.580465 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-b7m75" podUID="56f757b1-f338-45d9-a1c7-82bc8663aa2e" containerName="registry-server" containerID="cri-o://39e9dc2ca0c3eab1e44dc0f194b2104a1d6051de80d9f2053616a8b398588e5f" gracePeriod=2 Oct 08 06:49:29 crc kubenswrapper[4958]: I1008 06:49:29.585494 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n5wdt" event={"ID":"3db691c9-ed86-4d49-9508-deb29b0e26ba","Type":"ContainerStarted","Data":"ec398a54d0c5294fc5514d99dfb9ff3dd61f8c727544473e5c7cc8aba9e5370f"} Oct 08 06:49:30 crc kubenswrapper[4958]: I1008 06:49:30.003651 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-b7m75" Oct 08 06:49:30 crc kubenswrapper[4958]: I1008 06:49:30.036224 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2ttm\" (UniqueName: \"kubernetes.io/projected/56f757b1-f338-45d9-a1c7-82bc8663aa2e-kube-api-access-g2ttm\") pod \"56f757b1-f338-45d9-a1c7-82bc8663aa2e\" (UID: \"56f757b1-f338-45d9-a1c7-82bc8663aa2e\") " Oct 08 06:49:30 crc kubenswrapper[4958]: I1008 06:49:30.042231 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f757b1-f338-45d9-a1c7-82bc8663aa2e-kube-api-access-g2ttm" (OuterVolumeSpecName: "kube-api-access-g2ttm") pod "56f757b1-f338-45d9-a1c7-82bc8663aa2e" (UID: "56f757b1-f338-45d9-a1c7-82bc8663aa2e"). InnerVolumeSpecName "kube-api-access-g2ttm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:49:30 crc kubenswrapper[4958]: I1008 06:49:30.138151 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2ttm\" (UniqueName: \"kubernetes.io/projected/56f757b1-f338-45d9-a1c7-82bc8663aa2e-kube-api-access-g2ttm\") on node \"crc\" DevicePath \"\"" Oct 08 06:49:30 crc kubenswrapper[4958]: I1008 06:49:30.591107 4958 generic.go:334] "Generic (PLEG): container finished" podID="56f757b1-f338-45d9-a1c7-82bc8663aa2e" containerID="39e9dc2ca0c3eab1e44dc0f194b2104a1d6051de80d9f2053616a8b398588e5f" exitCode=0 Oct 08 06:49:30 crc kubenswrapper[4958]: I1008 06:49:30.591190 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b7m75" event={"ID":"56f757b1-f338-45d9-a1c7-82bc8663aa2e","Type":"ContainerDied","Data":"39e9dc2ca0c3eab1e44dc0f194b2104a1d6051de80d9f2053616a8b398588e5f"} Oct 08 06:49:30 crc kubenswrapper[4958]: I1008 06:49:30.591215 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-b7m75" Oct 08 06:49:30 crc kubenswrapper[4958]: I1008 06:49:30.591495 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b7m75" event={"ID":"56f757b1-f338-45d9-a1c7-82bc8663aa2e","Type":"ContainerDied","Data":"a9e58e5528f6deafe5017f57e54476c9d31a1bae311e1347ba1e0c7cfc4c2335"} Oct 08 06:49:30 crc kubenswrapper[4958]: I1008 06:49:30.591586 4958 scope.go:117] "RemoveContainer" containerID="39e9dc2ca0c3eab1e44dc0f194b2104a1d6051de80d9f2053616a8b398588e5f" Oct 08 06:49:30 crc kubenswrapper[4958]: I1008 06:49:30.593893 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n5wdt" event={"ID":"3db691c9-ed86-4d49-9508-deb29b0e26ba","Type":"ContainerStarted","Data":"cd2639a360e09f6c2a4271e0dddd0cf713cc8278919490def2554f42f864d78f"} Oct 08 06:49:30 crc kubenswrapper[4958]: I1008 06:49:30.620742 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-n5wdt" podStartSLOduration=2.148329007 podStartE2EDuration="2.620695756s" podCreationTimestamp="2025-10-08 06:49:28 +0000 UTC" firstStartedPulling="2025-10-08 06:49:29.546304329 +0000 UTC m=+912.675996940" lastFinishedPulling="2025-10-08 06:49:30.018671048 +0000 UTC m=+913.148363689" observedRunningTime="2025-10-08 06:49:30.61749936 +0000 UTC m=+913.747192001" watchObservedRunningTime="2025-10-08 06:49:30.620695756 +0000 UTC m=+913.750388357" Oct 08 06:49:30 crc kubenswrapper[4958]: I1008 06:49:30.636557 4958 scope.go:117] "RemoveContainer" containerID="39e9dc2ca0c3eab1e44dc0f194b2104a1d6051de80d9f2053616a8b398588e5f" Oct 08 06:49:30 crc kubenswrapper[4958]: E1008 06:49:30.640604 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39e9dc2ca0c3eab1e44dc0f194b2104a1d6051de80d9f2053616a8b398588e5f\": container with 
ID starting with 39e9dc2ca0c3eab1e44dc0f194b2104a1d6051de80d9f2053616a8b398588e5f not found: ID does not exist" containerID="39e9dc2ca0c3eab1e44dc0f194b2104a1d6051de80d9f2053616a8b398588e5f"
Oct 08 06:49:30 crc kubenswrapper[4958]: I1008 06:49:30.640695 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39e9dc2ca0c3eab1e44dc0f194b2104a1d6051de80d9f2053616a8b398588e5f"} err="failed to get container status \"39e9dc2ca0c3eab1e44dc0f194b2104a1d6051de80d9f2053616a8b398588e5f\": rpc error: code = NotFound desc = could not find container \"39e9dc2ca0c3eab1e44dc0f194b2104a1d6051de80d9f2053616a8b398588e5f\": container with ID starting with 39e9dc2ca0c3eab1e44dc0f194b2104a1d6051de80d9f2053616a8b398588e5f not found: ID does not exist"
Oct 08 06:49:30 crc kubenswrapper[4958]: I1008 06:49:30.659357 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-b7m75"]
Oct 08 06:49:30 crc kubenswrapper[4958]: I1008 06:49:30.664374 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-b7m75"]
Oct 08 06:49:31 crc kubenswrapper[4958]: I1008 06:49:31.590335 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f757b1-f338-45d9-a1c7-82bc8663aa2e" path="/var/lib/kubelet/pods/56f757b1-f338-45d9-a1c7-82bc8663aa2e/volumes"
Oct 08 06:49:39 crc kubenswrapper[4958]: I1008 06:49:39.166871 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-n5wdt"
Oct 08 06:49:39 crc kubenswrapper[4958]: I1008 06:49:39.167708 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-n5wdt"
Oct 08 06:49:39 crc kubenswrapper[4958]: I1008 06:49:39.204526 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-n5wdt"
Oct 08 06:49:39 crc kubenswrapper[4958]: I1008 06:49:39.708889 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-n5wdt"
Oct 08 06:49:41 crc kubenswrapper[4958]: I1008 06:49:41.681534 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"]
Oct 08 06:49:41 crc kubenswrapper[4958]: E1008 06:49:41.682260 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f757b1-f338-45d9-a1c7-82bc8663aa2e" containerName="registry-server"
Oct 08 06:49:41 crc kubenswrapper[4958]: I1008 06:49:41.682282 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f757b1-f338-45d9-a1c7-82bc8663aa2e" containerName="registry-server"
Oct 08 06:49:41 crc kubenswrapper[4958]: I1008 06:49:41.682505 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f757b1-f338-45d9-a1c7-82bc8663aa2e" containerName="registry-server"
Oct 08 06:49:41 crc kubenswrapper[4958]: I1008 06:49:41.683852 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"
Oct 08 06:49:41 crc kubenswrapper[4958]: I1008 06:49:41.691018 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-27l89"
Oct 08 06:49:41 crc kubenswrapper[4958]: I1008 06:49:41.696215 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"]
Oct 08 06:49:41 crc kubenswrapper[4958]: I1008 06:49:41.722848 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-util\") pod \"36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn\" (UID: \"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035\") " pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"
Oct 08 06:49:41 crc kubenswrapper[4958]: I1008 06:49:41.722980 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc5s9\" (UniqueName: \"kubernetes.io/projected/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-kube-api-access-wc5s9\") pod \"36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn\" (UID: \"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035\") " pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"
Oct 08 06:49:41 crc kubenswrapper[4958]: I1008 06:49:41.723220 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-bundle\") pod \"36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn\" (UID: \"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035\") " pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"
Oct 08 06:49:41 crc kubenswrapper[4958]: I1008 06:49:41.824802 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc5s9\" (UniqueName: \"kubernetes.io/projected/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-kube-api-access-wc5s9\") pod \"36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn\" (UID: \"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035\") " pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"
Oct 08 06:49:41 crc kubenswrapper[4958]: I1008 06:49:41.824901 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-bundle\") pod \"36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn\" (UID: \"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035\") " pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"
Oct 08 06:49:41 crc kubenswrapper[4958]: I1008 06:49:41.824979 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-util\") pod \"36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn\" (UID: \"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035\") " pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"
Oct 08 06:49:41 crc kubenswrapper[4958]: I1008 06:49:41.825578 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-bundle\") pod \"36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn\" (UID: \"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035\") " pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"
Oct 08 06:49:41 crc kubenswrapper[4958]: I1008 06:49:41.825670 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-util\") pod \"36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn\" (UID: \"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035\") " pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"
Oct 08 06:49:41 crc kubenswrapper[4958]: I1008 06:49:41.850638 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc5s9\" (UniqueName: \"kubernetes.io/projected/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-kube-api-access-wc5s9\") pod \"36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn\" (UID: \"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035\") " pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"
Oct 08 06:49:42 crc kubenswrapper[4958]: I1008 06:49:42.015693 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"
Oct 08 06:49:42 crc kubenswrapper[4958]: I1008 06:49:42.483225 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"]
Oct 08 06:49:42 crc kubenswrapper[4958]: W1008 06:49:42.492797 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8e5b8dc_c42b_4e05_a9e3_ce4564b36035.slice/crio-cf64179797b5d083a498eb290d0ca5637f0c66124dc43bdff846b297395262fc WatchSource:0}: Error finding container cf64179797b5d083a498eb290d0ca5637f0c66124dc43bdff846b297395262fc: Status 404 returned error can't find the container with id cf64179797b5d083a498eb290d0ca5637f0c66124dc43bdff846b297395262fc
Oct 08 06:49:42 crc kubenswrapper[4958]: I1008 06:49:42.693052 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn" event={"ID":"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035","Type":"ContainerStarted","Data":"cf64179797b5d083a498eb290d0ca5637f0c66124dc43bdff846b297395262fc"}
Oct 08 06:49:43 crc kubenswrapper[4958]: I1008 06:49:43.703175 4958 generic.go:334] "Generic (PLEG): container finished" podID="c8e5b8dc-c42b-4e05-a9e3-ce4564b36035" containerID="c4992c5e4202594fef351531db0522a53b8c2aa498dd4af5478d611642847faa" exitCode=0
Oct 08 06:49:43 crc kubenswrapper[4958]: I1008 06:49:43.703284 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn" event={"ID":"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035","Type":"ContainerDied","Data":"c4992c5e4202594fef351531db0522a53b8c2aa498dd4af5478d611642847faa"}
Oct 08 06:49:44 crc kubenswrapper[4958]: I1008 06:49:44.732266 4958 generic.go:334] "Generic (PLEG): container finished" podID="c8e5b8dc-c42b-4e05-a9e3-ce4564b36035" containerID="475acd63ca44c04f637e5a01f5349a5b2f4dc939d1458e29df6c859ece304a1b" exitCode=0
Oct 08 06:49:44 crc kubenswrapper[4958]: I1008 06:49:44.732370 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn" event={"ID":"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035","Type":"ContainerDied","Data":"475acd63ca44c04f637e5a01f5349a5b2f4dc939d1458e29df6c859ece304a1b"}
Oct 08 06:49:45 crc kubenswrapper[4958]: I1008 06:49:45.744727 4958 generic.go:334] "Generic (PLEG): container finished" podID="c8e5b8dc-c42b-4e05-a9e3-ce4564b36035" containerID="f969ff572bf7037fd26ef755f193e5809ea08bd42e5a2abbfc4bddbe3e500d8e" exitCode=0
Oct 08 06:49:45 crc kubenswrapper[4958]: I1008 06:49:45.744787 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn" event={"ID":"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035","Type":"ContainerDied","Data":"f969ff572bf7037fd26ef755f193e5809ea08bd42e5a2abbfc4bddbe3e500d8e"}
Oct 08 06:49:47 crc kubenswrapper[4958]: I1008 06:49:47.072083 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"
Oct 08 06:49:47 crc kubenswrapper[4958]: I1008 06:49:47.217916 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-util\") pod \"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035\" (UID: \"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035\") "
Oct 08 06:49:47 crc kubenswrapper[4958]: I1008 06:49:47.218033 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-bundle\") pod \"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035\" (UID: \"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035\") "
Oct 08 06:49:47 crc kubenswrapper[4958]: I1008 06:49:47.218078 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc5s9\" (UniqueName: \"kubernetes.io/projected/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-kube-api-access-wc5s9\") pod \"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035\" (UID: \"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035\") "
Oct 08 06:49:47 crc kubenswrapper[4958]: I1008 06:49:47.219161 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-bundle" (OuterVolumeSpecName: "bundle") pod "c8e5b8dc-c42b-4e05-a9e3-ce4564b36035" (UID: "c8e5b8dc-c42b-4e05-a9e3-ce4564b36035"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 06:49:47 crc kubenswrapper[4958]: I1008 06:49:47.227158 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-kube-api-access-wc5s9" (OuterVolumeSpecName: "kube-api-access-wc5s9") pod "c8e5b8dc-c42b-4e05-a9e3-ce4564b36035" (UID: "c8e5b8dc-c42b-4e05-a9e3-ce4564b36035"). InnerVolumeSpecName "kube-api-access-wc5s9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 06:49:47 crc kubenswrapper[4958]: I1008 06:49:47.251073 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-util" (OuterVolumeSpecName: "util") pod "c8e5b8dc-c42b-4e05-a9e3-ce4564b36035" (UID: "c8e5b8dc-c42b-4e05-a9e3-ce4564b36035"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 06:49:47 crc kubenswrapper[4958]: I1008 06:49:47.320026 4958 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-util\") on node \"crc\" DevicePath \"\""
Oct 08 06:49:47 crc kubenswrapper[4958]: I1008 06:49:47.320078 4958 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 06:49:47 crc kubenswrapper[4958]: I1008 06:49:47.320099 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc5s9\" (UniqueName: \"kubernetes.io/projected/c8e5b8dc-c42b-4e05-a9e3-ce4564b36035-kube-api-access-wc5s9\") on node \"crc\" DevicePath \"\""
Oct 08 06:49:47 crc kubenswrapper[4958]: I1008 06:49:47.774147 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn" event={"ID":"c8e5b8dc-c42b-4e05-a9e3-ce4564b36035","Type":"ContainerDied","Data":"cf64179797b5d083a498eb290d0ca5637f0c66124dc43bdff846b297395262fc"}
Oct 08 06:49:47 crc kubenswrapper[4958]: I1008 06:49:47.774201 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf64179797b5d083a498eb290d0ca5637f0c66124dc43bdff846b297395262fc"
Oct 08 06:49:47 crc kubenswrapper[4958]: I1008 06:49:47.774246 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn"
Oct 08 06:49:55 crc kubenswrapper[4958]: I1008 06:49:55.584158 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-b6d857f89-4vq4b"]
Oct 08 06:49:55 crc kubenswrapper[4958]: E1008 06:49:55.585017 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e5b8dc-c42b-4e05-a9e3-ce4564b36035" containerName="util"
Oct 08 06:49:55 crc kubenswrapper[4958]: I1008 06:49:55.585034 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e5b8dc-c42b-4e05-a9e3-ce4564b36035" containerName="util"
Oct 08 06:49:55 crc kubenswrapper[4958]: E1008 06:49:55.585056 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e5b8dc-c42b-4e05-a9e3-ce4564b36035" containerName="pull"
Oct 08 06:49:55 crc kubenswrapper[4958]: I1008 06:49:55.585064 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e5b8dc-c42b-4e05-a9e3-ce4564b36035" containerName="pull"
Oct 08 06:49:55 crc kubenswrapper[4958]: E1008 06:49:55.585084 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e5b8dc-c42b-4e05-a9e3-ce4564b36035" containerName="extract"
Oct 08 06:49:55 crc kubenswrapper[4958]: I1008 06:49:55.585092 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e5b8dc-c42b-4e05-a9e3-ce4564b36035" containerName="extract"
Oct 08 06:49:55 crc kubenswrapper[4958]: I1008 06:49:55.585235 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e5b8dc-c42b-4e05-a9e3-ce4564b36035" containerName="extract"
Oct 08 06:49:55 crc kubenswrapper[4958]: I1008 06:49:55.586031 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-b6d857f89-4vq4b"
Oct 08 06:49:55 crc kubenswrapper[4958]: I1008 06:49:55.589230 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-dvbnm"
Oct 08 06:49:55 crc kubenswrapper[4958]: I1008 06:49:55.623228 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-b6d857f89-4vq4b"]
Oct 08 06:49:55 crc kubenswrapper[4958]: I1008 06:49:55.660983 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvlpd\" (UniqueName: \"kubernetes.io/projected/d19e23b3-612a-41d0-9bf1-d3dc773d692d-kube-api-access-bvlpd\") pod \"openstack-operator-controller-operator-b6d857f89-4vq4b\" (UID: \"d19e23b3-612a-41d0-9bf1-d3dc773d692d\") " pod="openstack-operators/openstack-operator-controller-operator-b6d857f89-4vq4b"
Oct 08 06:49:55 crc kubenswrapper[4958]: I1008 06:49:55.762415 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvlpd\" (UniqueName: \"kubernetes.io/projected/d19e23b3-612a-41d0-9bf1-d3dc773d692d-kube-api-access-bvlpd\") pod \"openstack-operator-controller-operator-b6d857f89-4vq4b\" (UID: \"d19e23b3-612a-41d0-9bf1-d3dc773d692d\") " pod="openstack-operators/openstack-operator-controller-operator-b6d857f89-4vq4b"
Oct 08 06:49:55 crc kubenswrapper[4958]: I1008 06:49:55.789845 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvlpd\" (UniqueName: \"kubernetes.io/projected/d19e23b3-612a-41d0-9bf1-d3dc773d692d-kube-api-access-bvlpd\") pod \"openstack-operator-controller-operator-b6d857f89-4vq4b\" (UID: \"d19e23b3-612a-41d0-9bf1-d3dc773d692d\") " pod="openstack-operators/openstack-operator-controller-operator-b6d857f89-4vq4b"
Oct 08 06:49:55 crc kubenswrapper[4958]: I1008 06:49:55.905568 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-b6d857f89-4vq4b"
Oct 08 06:49:56 crc kubenswrapper[4958]: I1008 06:49:56.114866 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-b6d857f89-4vq4b"]
Oct 08 06:49:56 crc kubenswrapper[4958]: I1008 06:49:56.840676 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-b6d857f89-4vq4b" event={"ID":"d19e23b3-612a-41d0-9bf1-d3dc773d692d","Type":"ContainerStarted","Data":"e72606982fb1d874f85375ccaf48d9030095a7f37d3a16a3148bc18b5ef84527"}
Oct 08 06:50:00 crc kubenswrapper[4958]: I1008 06:50:00.881094 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-b6d857f89-4vq4b" event={"ID":"d19e23b3-612a-41d0-9bf1-d3dc773d692d","Type":"ContainerStarted","Data":"dbceba5ed0972673564f0a113fa0c6a342a018cc9b79258b0d564fcd76cf6799"}
Oct 08 06:50:03 crc kubenswrapper[4958]: I1008 06:50:03.913816 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-b6d857f89-4vq4b" event={"ID":"d19e23b3-612a-41d0-9bf1-d3dc773d692d","Type":"ContainerStarted","Data":"60cc36ee35270c20e945bf703c235a8b5e138d68cb44fe175093009c128ffe48"}
Oct 08 06:50:03 crc kubenswrapper[4958]: I1008 06:50:03.914711 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-b6d857f89-4vq4b"
Oct 08 06:50:03 crc kubenswrapper[4958]: I1008 06:50:03.983362 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-b6d857f89-4vq4b" podStartSLOduration=2.095563399 podStartE2EDuration="8.983336154s" podCreationTimestamp="2025-10-08 06:49:55 +0000 UTC" firstStartedPulling="2025-10-08 06:49:56.126260918 +0000 UTC m=+939.255953509" lastFinishedPulling="2025-10-08 06:50:03.014033663 +0000 UTC m=+946.143726264" observedRunningTime="2025-10-08 06:50:03.968285567 +0000 UTC m=+947.097978248" watchObservedRunningTime="2025-10-08 06:50:03.983336154 +0000 UTC m=+947.113028795"
Oct 08 06:50:05 crc kubenswrapper[4958]: I1008 06:50:05.909131 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-b6d857f89-4vq4b"
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.013304 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5lwfp"]
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.018038 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.043194 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lwfp"]
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.043299 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vg9r\" (UniqueName: \"kubernetes.io/projected/5fad3625-6c81-4040-976f-a91d6fb91dde-kube-api-access-4vg9r\") pod \"redhat-marketplace-5lwfp\" (UID: \"5fad3625-6c81-4040-976f-a91d6fb91dde\") " pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.043484 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fad3625-6c81-4040-976f-a91d6fb91dde-utilities\") pod \"redhat-marketplace-5lwfp\" (UID: \"5fad3625-6c81-4040-976f-a91d6fb91dde\") " pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.043589 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fad3625-6c81-4040-976f-a91d6fb91dde-catalog-content\") pod \"redhat-marketplace-5lwfp\" (UID: \"5fad3625-6c81-4040-976f-a91d6fb91dde\") " pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.145929 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vg9r\" (UniqueName: \"kubernetes.io/projected/5fad3625-6c81-4040-976f-a91d6fb91dde-kube-api-access-4vg9r\") pod \"redhat-marketplace-5lwfp\" (UID: \"5fad3625-6c81-4040-976f-a91d6fb91dde\") " pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.146563 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fad3625-6c81-4040-976f-a91d6fb91dde-utilities\") pod \"redhat-marketplace-5lwfp\" (UID: \"5fad3625-6c81-4040-976f-a91d6fb91dde\") " pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.146621 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fad3625-6c81-4040-976f-a91d6fb91dde-catalog-content\") pod \"redhat-marketplace-5lwfp\" (UID: \"5fad3625-6c81-4040-976f-a91d6fb91dde\") " pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.147373 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fad3625-6c81-4040-976f-a91d6fb91dde-catalog-content\") pod \"redhat-marketplace-5lwfp\" (UID: \"5fad3625-6c81-4040-976f-a91d6fb91dde\") " pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.148158 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fad3625-6c81-4040-976f-a91d6fb91dde-utilities\") pod \"redhat-marketplace-5lwfp\" (UID: \"5fad3625-6c81-4040-976f-a91d6fb91dde\") " pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.169394 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vg9r\" (UniqueName: \"kubernetes.io/projected/5fad3625-6c81-4040-976f-a91d6fb91dde-kube-api-access-4vg9r\") pod \"redhat-marketplace-5lwfp\" (UID: \"5fad3625-6c81-4040-976f-a91d6fb91dde\") " pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.355554 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.569839 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lwfp"]
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.950478 4958 generic.go:334] "Generic (PLEG): container finished" podID="5fad3625-6c81-4040-976f-a91d6fb91dde" containerID="ce461622e36561cda98e2cb540c4fa43e5bbc346b1bfee0f624a18701f59d12c" exitCode=0
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.950530 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lwfp" event={"ID":"5fad3625-6c81-4040-976f-a91d6fb91dde","Type":"ContainerDied","Data":"ce461622e36561cda98e2cb540c4fa43e5bbc346b1bfee0f624a18701f59d12c"}
Oct 08 06:50:08 crc kubenswrapper[4958]: I1008 06:50:08.950581 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lwfp" event={"ID":"5fad3625-6c81-4040-976f-a91d6fb91dde","Type":"ContainerStarted","Data":"61f3bb9f89fc405dfbcd025447503448e1fbff9e52876049db42febbd57ab78d"}
Oct 08 06:50:09 crc kubenswrapper[4958]: I1008 06:50:09.963509 4958 generic.go:334] "Generic (PLEG): container finished" podID="5fad3625-6c81-4040-976f-a91d6fb91dde" containerID="e0475b076deec5f805aa5d96bbfa872478b6c5ffff17a9973c523fc787cf2638" exitCode=0
Oct 08 06:50:09 crc kubenswrapper[4958]: I1008 06:50:09.963593 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lwfp" event={"ID":"5fad3625-6c81-4040-976f-a91d6fb91dde","Type":"ContainerDied","Data":"e0475b076deec5f805aa5d96bbfa872478b6c5ffff17a9973c523fc787cf2638"}
Oct 08 06:50:10 crc kubenswrapper[4958]: I1008 06:50:10.975211 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lwfp" event={"ID":"5fad3625-6c81-4040-976f-a91d6fb91dde","Type":"ContainerStarted","Data":"de1186082fab60fb04f335874e29de30156f0040feabce8e72ac621740f4d71a"}
Oct 08 06:50:11 crc kubenswrapper[4958]: I1008 06:50:11.004018 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5lwfp" podStartSLOduration=2.537021773 podStartE2EDuration="4.003999085s" podCreationTimestamp="2025-10-08 06:50:07 +0000 UTC" firstStartedPulling="2025-10-08 06:50:08.952144887 +0000 UTC m=+952.081837498" lastFinishedPulling="2025-10-08 06:50:10.419122179 +0000 UTC m=+953.548814810" observedRunningTime="2025-10-08 06:50:10.999870833 +0000 UTC m=+954.129563474" watchObservedRunningTime="2025-10-08 06:50:11.003999085 +0000 UTC m=+954.133691686"
Oct 08 06:50:18 crc kubenswrapper[4958]: I1008 06:50:18.356597 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:18 crc kubenswrapper[4958]: I1008 06:50:18.357239 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:18 crc kubenswrapper[4958]: I1008 06:50:18.421647 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:19 crc kubenswrapper[4958]: I1008 06:50:19.107161 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:19 crc kubenswrapper[4958]: I1008 06:50:19.225047 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lwfp"]
Oct 08 06:50:21 crc kubenswrapper[4958]: I1008 06:50:21.048339 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5lwfp" podUID="5fad3625-6c81-4040-976f-a91d6fb91dde" containerName="registry-server" containerID="cri-o://de1186082fab60fb04f335874e29de30156f0040feabce8e72ac621740f4d71a" gracePeriod=2
Oct 08 06:50:21 crc kubenswrapper[4958]: I1008 06:50:21.483400 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:21 crc kubenswrapper[4958]: I1008 06:50:21.570233 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fad3625-6c81-4040-976f-a91d6fb91dde-utilities\") pod \"5fad3625-6c81-4040-976f-a91d6fb91dde\" (UID: \"5fad3625-6c81-4040-976f-a91d6fb91dde\") "
Oct 08 06:50:21 crc kubenswrapper[4958]: I1008 06:50:21.570298 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fad3625-6c81-4040-976f-a91d6fb91dde-catalog-content\") pod \"5fad3625-6c81-4040-976f-a91d6fb91dde\" (UID: \"5fad3625-6c81-4040-976f-a91d6fb91dde\") "
Oct 08 06:50:21 crc kubenswrapper[4958]: I1008 06:50:21.570348 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vg9r\" (UniqueName: \"kubernetes.io/projected/5fad3625-6c81-4040-976f-a91d6fb91dde-kube-api-access-4vg9r\") pod \"5fad3625-6c81-4040-976f-a91d6fb91dde\" (UID: \"5fad3625-6c81-4040-976f-a91d6fb91dde\") "
Oct 08 06:50:21 crc kubenswrapper[4958]: I1008 06:50:21.571558 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fad3625-6c81-4040-976f-a91d6fb91dde-utilities" (OuterVolumeSpecName: "utilities") pod "5fad3625-6c81-4040-976f-a91d6fb91dde" (UID: "5fad3625-6c81-4040-976f-a91d6fb91dde"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 06:50:21 crc kubenswrapper[4958]: I1008 06:50:21.588205 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fad3625-6c81-4040-976f-a91d6fb91dde-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fad3625-6c81-4040-976f-a91d6fb91dde" (UID: "5fad3625-6c81-4040-976f-a91d6fb91dde"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 06:50:21 crc kubenswrapper[4958]: I1008 06:50:21.588811 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fad3625-6c81-4040-976f-a91d6fb91dde-kube-api-access-4vg9r" (OuterVolumeSpecName: "kube-api-access-4vg9r") pod "5fad3625-6c81-4040-976f-a91d6fb91dde" (UID: "5fad3625-6c81-4040-976f-a91d6fb91dde"). InnerVolumeSpecName "kube-api-access-4vg9r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 06:50:21 crc kubenswrapper[4958]: I1008 06:50:21.672592 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fad3625-6c81-4040-976f-a91d6fb91dde-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 06:50:21 crc kubenswrapper[4958]: I1008 06:50:21.673200 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fad3625-6c81-4040-976f-a91d6fb91dde-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 06:50:21 crc kubenswrapper[4958]: I1008 06:50:21.673319 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vg9r\" (UniqueName: \"kubernetes.io/projected/5fad3625-6c81-4040-976f-a91d6fb91dde-kube-api-access-4vg9r\") on node \"crc\" DevicePath \"\""
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.058714 4958 generic.go:334] "Generic (PLEG): container finished" podID="5fad3625-6c81-4040-976f-a91d6fb91dde" containerID="de1186082fab60fb04f335874e29de30156f0040feabce8e72ac621740f4d71a" exitCode=0
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.058797 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lwfp" event={"ID":"5fad3625-6c81-4040-976f-a91d6fb91dde","Type":"ContainerDied","Data":"de1186082fab60fb04f335874e29de30156f0040feabce8e72ac621740f4d71a"}
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.058844 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lwfp" event={"ID":"5fad3625-6c81-4040-976f-a91d6fb91dde","Type":"ContainerDied","Data":"61f3bb9f89fc405dfbcd025447503448e1fbff9e52876049db42febbd57ab78d"}
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.058878 4958 scope.go:117] "RemoveContainer" containerID="de1186082fab60fb04f335874e29de30156f0040feabce8e72ac621740f4d71a"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.060053 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lwfp"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.088309 4958 scope.go:117] "RemoveContainer" containerID="e0475b076deec5f805aa5d96bbfa872478b6c5ffff17a9973c523fc787cf2638"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.121588 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lwfp"]
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.133109 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lwfp"]
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.142774 4958 scope.go:117] "RemoveContainer" containerID="ce461622e36561cda98e2cb540c4fa43e5bbc346b1bfee0f624a18701f59d12c"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.162863 4958 scope.go:117] "RemoveContainer" containerID="de1186082fab60fb04f335874e29de30156f0040feabce8e72ac621740f4d71a"
Oct 08 06:50:22 crc kubenswrapper[4958]: E1008 06:50:22.163475 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de1186082fab60fb04f335874e29de30156f0040feabce8e72ac621740f4d71a\": container with ID starting with de1186082fab60fb04f335874e29de30156f0040feabce8e72ac621740f4d71a not found: ID does not exist" containerID="de1186082fab60fb04f335874e29de30156f0040feabce8e72ac621740f4d71a"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.163509 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de1186082fab60fb04f335874e29de30156f0040feabce8e72ac621740f4d71a"} err="failed to get container status \"de1186082fab60fb04f335874e29de30156f0040feabce8e72ac621740f4d71a\": rpc error: code = NotFound desc = could not find container \"de1186082fab60fb04f335874e29de30156f0040feabce8e72ac621740f4d71a\": container with ID starting with de1186082fab60fb04f335874e29de30156f0040feabce8e72ac621740f4d71a not found: ID does not exist"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.163533 4958 scope.go:117] "RemoveContainer" containerID="e0475b076deec5f805aa5d96bbfa872478b6c5ffff17a9973c523fc787cf2638"
Oct 08 06:50:22 crc kubenswrapper[4958]: E1008 06:50:22.163879 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0475b076deec5f805aa5d96bbfa872478b6c5ffff17a9973c523fc787cf2638\": container with ID starting with e0475b076deec5f805aa5d96bbfa872478b6c5ffff17a9973c523fc787cf2638 not found: ID does not exist" containerID="e0475b076deec5f805aa5d96bbfa872478b6c5ffff17a9973c523fc787cf2638"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.163924 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0475b076deec5f805aa5d96bbfa872478b6c5ffff17a9973c523fc787cf2638"} err="failed to get container status \"e0475b076deec5f805aa5d96bbfa872478b6c5ffff17a9973c523fc787cf2638\": rpc error: code = NotFound desc = could not find container \"e0475b076deec5f805aa5d96bbfa872478b6c5ffff17a9973c523fc787cf2638\": container with ID starting with e0475b076deec5f805aa5d96bbfa872478b6c5ffff17a9973c523fc787cf2638 not found: ID does not exist"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.163986 4958 scope.go:117] "RemoveContainer" containerID="ce461622e36561cda98e2cb540c4fa43e5bbc346b1bfee0f624a18701f59d12c"
Oct 08 06:50:22 crc kubenswrapper[4958]: E1008 06:50:22.164627 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce461622e36561cda98e2cb540c4fa43e5bbc346b1bfee0f624a18701f59d12c\": container with ID starting with ce461622e36561cda98e2cb540c4fa43e5bbc346b1bfee0f624a18701f59d12c not found: ID does not exist" containerID="ce461622e36561cda98e2cb540c4fa43e5bbc346b1bfee0f624a18701f59d12c"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.164657 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce461622e36561cda98e2cb540c4fa43e5bbc346b1bfee0f624a18701f59d12c"} err="failed to get container status \"ce461622e36561cda98e2cb540c4fa43e5bbc346b1bfee0f624a18701f59d12c\": rpc error: code = NotFound desc = could not find container \"ce461622e36561cda98e2cb540c4fa43e5bbc346b1bfee0f624a18701f59d12c\": container with ID starting with ce461622e36561cda98e2cb540c4fa43e5bbc346b1bfee0f624a18701f59d12c not found: ID does not exist"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.471199 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f56ff694-6fjfx"]
Oct 08 06:50:22 crc kubenswrapper[4958]: E1008 06:50:22.471526 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fad3625-6c81-4040-976f-a91d6fb91dde" containerName="registry-server"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.471545 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fad3625-6c81-4040-976f-a91d6fb91dde" containerName="registry-server"
Oct 08 06:50:22 crc kubenswrapper[4958]: E1008 06:50:22.471576 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fad3625-6c81-4040-976f-a91d6fb91dde" containerName="extract-utilities"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.471584 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fad3625-6c81-4040-976f-a91d6fb91dde" containerName="extract-utilities"
Oct 08 06:50:22 crc kubenswrapper[4958]: E1008 06:50:22.471602 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fad3625-6c81-4040-976f-a91d6fb91dde" containerName="extract-content"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.471609 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fad3625-6c81-4040-976f-a91d6fb91dde" containerName="extract-content"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.471737 4958 memory_manager.go:354] "RemoveStaleState
removing state" podUID="5fad3625-6c81-4040-976f-a91d6fb91dde" containerName="registry-server" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.472551 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-64f56ff694-6fjfx" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.475479 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zq4jd" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.480660 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-84bd8f6848-zh97x"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.482303 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-zh97x" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.486840 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f56ff694-6fjfx"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.490506 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-bvb7z" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.500968 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-58d86cd59d-f8h2l"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.502137 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-f8h2l" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.504339 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-fhhbh" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.506631 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-84bd8f6848-zh97x"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.527379 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-fd648f65-7l7c8"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.528286 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-fd648f65-7l7c8" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.531212 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-j92gl" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.531445 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-58d86cd59d-f8h2l"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.538036 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-fd648f65-7l7c8"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.549326 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-7ccfc8cf49-wmvbd"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.550494 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-7ccfc8cf49-wmvbd" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.554329 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-qfwxf" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.566435 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b477879bc-l8tj8"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.567546 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b477879bc-l8tj8" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.572183 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-s8h25" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.581361 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-7ccfc8cf49-wmvbd"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.586096 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7vr9\" (UniqueName: \"kubernetes.io/projected/ac064bd6-8d20-4224-b54f-e074bff95072-kube-api-access-s7vr9\") pod \"designate-operator-controller-manager-58d86cd59d-f8h2l\" (UID: \"ac064bd6-8d20-4224-b54f-e074bff95072\") " pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-f8h2l" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.586131 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkl7k\" (UniqueName: \"kubernetes.io/projected/37de4c94-71ef-4563-915b-468370179903-kube-api-access-dkl7k\") pod \"cinder-operator-controller-manager-84bd8f6848-zh97x\" (UID: \"37de4c94-71ef-4563-915b-468370179903\") 
" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-zh97x" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.586242 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c65kj\" (UniqueName: \"kubernetes.io/projected/abbaa69c-2318-4087-a167-0bbe69928971-kube-api-access-c65kj\") pod \"barbican-operator-controller-manager-64f56ff694-6fjfx\" (UID: \"abbaa69c-2318-4087-a167-0bbe69928971\") " pod="openstack-operators/barbican-operator-controller-manager-64f56ff694-6fjfx" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.597587 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.598475 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.600603 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.600801 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8nrqp" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.608623 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.617871 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5467f8988c-wklfs"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.618881 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5467f8988c-wklfs" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.621275 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-mtwvf" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.628255 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5b84cc7657-v9mbt"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.629113 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5b84cc7657-v9mbt" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.653002 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b477879bc-l8tj8"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.666511 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-plbvm" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.688728 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5b84cc7657-v9mbt"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.689874 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89bhn\" (UniqueName: \"kubernetes.io/projected/28d81d52-ba79-4c62-95a7-f5a1e48b8dda-kube-api-access-89bhn\") pod \"infra-operator-controller-manager-84788b6bc5-6pggm\" (UID: \"28d81d52-ba79-4c62-95a7-f5a1e48b8dda\") " pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.693591 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c65kj\" (UniqueName: 
\"kubernetes.io/projected/abbaa69c-2318-4087-a167-0bbe69928971-kube-api-access-c65kj\") pod \"barbican-operator-controller-manager-64f56ff694-6fjfx\" (UID: \"abbaa69c-2318-4087-a167-0bbe69928971\") " pod="openstack-operators/barbican-operator-controller-manager-64f56ff694-6fjfx" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.693685 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7vr9\" (UniqueName: \"kubernetes.io/projected/ac064bd6-8d20-4224-b54f-e074bff95072-kube-api-access-s7vr9\") pod \"designate-operator-controller-manager-58d86cd59d-f8h2l\" (UID: \"ac064bd6-8d20-4224-b54f-e074bff95072\") " pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-f8h2l" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.693721 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm6rc\" (UniqueName: \"kubernetes.io/projected/8890b7c7-8aba-485e-85db-ee154714c358-kube-api-access-pm6rc\") pod \"heat-operator-controller-manager-7ccfc8cf49-wmvbd\" (UID: \"8890b7c7-8aba-485e-85db-ee154714c358\") " pod="openstack-operators/heat-operator-controller-manager-7ccfc8cf49-wmvbd" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.693757 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkl7k\" (UniqueName: \"kubernetes.io/projected/37de4c94-71ef-4563-915b-468370179903-kube-api-access-dkl7k\") pod \"cinder-operator-controller-manager-84bd8f6848-zh97x\" (UID: \"37de4c94-71ef-4563-915b-468370179903\") " pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-zh97x" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.693854 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v95w\" (UniqueName: \"kubernetes.io/projected/84708733-8897-4752-8533-5463ce01d265-kube-api-access-9v95w\") pod 
\"glance-operator-controller-manager-fd648f65-7l7c8\" (UID: \"84708733-8897-4752-8533-5463ce01d265\") " pod="openstack-operators/glance-operator-controller-manager-fd648f65-7l7c8" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.693924 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28d81d52-ba79-4c62-95a7-f5a1e48b8dda-cert\") pod \"infra-operator-controller-manager-84788b6bc5-6pggm\" (UID: \"28d81d52-ba79-4c62-95a7-f5a1e48b8dda\") " pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.693981 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpxtc\" (UniqueName: \"kubernetes.io/projected/d6b4df20-5cc4-49a7-b124-ac88e068f9a0-kube-api-access-gpxtc\") pod \"horizon-operator-controller-manager-5b477879bc-l8tj8\" (UID: \"d6b4df20-5cc4-49a7-b124-ac88e068f9a0\") " pod="openstack-operators/horizon-operator-controller-manager-5b477879bc-l8tj8" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.694044 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bchql\" (UniqueName: \"kubernetes.io/projected/351db139-bb78-4975-a6c1-ceb4904347f0-kube-api-access-bchql\") pod \"ironic-operator-controller-manager-5467f8988c-wklfs\" (UID: \"351db139-bb78-4975-a6c1-ceb4904347f0\") " pod="openstack-operators/ironic-operator-controller-manager-5467f8988c-wklfs" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.707585 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5467f8988c-wklfs"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.724867 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkl7k\" (UniqueName: 
\"kubernetes.io/projected/37de4c94-71ef-4563-915b-468370179903-kube-api-access-dkl7k\") pod \"cinder-operator-controller-manager-84bd8f6848-zh97x\" (UID: \"37de4c94-71ef-4563-915b-468370179903\") " pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-zh97x" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.730425 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7cb48dbc-qftvm"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.730531 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7vr9\" (UniqueName: \"kubernetes.io/projected/ac064bd6-8d20-4224-b54f-e074bff95072-kube-api-access-s7vr9\") pod \"designate-operator-controller-manager-58d86cd59d-f8h2l\" (UID: \"ac064bd6-8d20-4224-b54f-e074bff95072\") " pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-f8h2l" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.732093 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-qftvm" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.733776 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c65kj\" (UniqueName: \"kubernetes.io/projected/abbaa69c-2318-4087-a167-0bbe69928971-kube-api-access-c65kj\") pod \"barbican-operator-controller-manager-64f56ff694-6fjfx\" (UID: \"abbaa69c-2318-4087-a167-0bbe69928971\") " pod="openstack-operators/barbican-operator-controller-manager-64f56ff694-6fjfx" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.739441 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-zxtjx" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.742668 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-fhpnh"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.743640 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-fhpnh" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.746722 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-66cch" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.752102 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7cb48dbc-qftvm"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.757827 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-fhpnh"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.778494 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-69b956fbf6-zc5vg"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.779854 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-zc5vg" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.782263 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.783788 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-k2lmq" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.784111 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.785796 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mhg5c" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.788730 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-69b956fbf6-zc5vg"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.790750 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-64f56ff694-6fjfx" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.800343 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.803109 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v95w\" (UniqueName: \"kubernetes.io/projected/84708733-8897-4752-8533-5463ce01d265-kube-api-access-9v95w\") pod \"glance-operator-controller-manager-fd648f65-7l7c8\" (UID: \"84708733-8897-4752-8533-5463ce01d265\") " pod="openstack-operators/glance-operator-controller-manager-fd648f65-7l7c8" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.803141 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28d81d52-ba79-4c62-95a7-f5a1e48b8dda-cert\") pod \"infra-operator-controller-manager-84788b6bc5-6pggm\" (UID: \"28d81d52-ba79-4c62-95a7-f5a1e48b8dda\") " pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.803170 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpxtc\" (UniqueName: 
\"kubernetes.io/projected/d6b4df20-5cc4-49a7-b124-ac88e068f9a0-kube-api-access-gpxtc\") pod \"horizon-operator-controller-manager-5b477879bc-l8tj8\" (UID: \"d6b4df20-5cc4-49a7-b124-ac88e068f9a0\") " pod="openstack-operators/horizon-operator-controller-manager-5b477879bc-l8tj8" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.803195 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bchql\" (UniqueName: \"kubernetes.io/projected/351db139-bb78-4975-a6c1-ceb4904347f0-kube-api-access-bchql\") pod \"ironic-operator-controller-manager-5467f8988c-wklfs\" (UID: \"351db139-bb78-4975-a6c1-ceb4904347f0\") " pod="openstack-operators/ironic-operator-controller-manager-5467f8988c-wklfs" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.803228 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6ppq\" (UniqueName: \"kubernetes.io/projected/e503c39d-8eed-4db8-ad49-9a78f7c2bfa2-kube-api-access-c6ppq\") pod \"manila-operator-controller-manager-7cb48dbc-qftvm\" (UID: \"e503c39d-8eed-4db8-ad49-9a78f7c2bfa2\") " pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-qftvm" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.803263 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7j86\" (UniqueName: \"kubernetes.io/projected/60681278-f71a-4ec0-a572-e6c05783791c-kube-api-access-q7j86\") pod \"keystone-operator-controller-manager-5b84cc7657-v9mbt\" (UID: \"60681278-f71a-4ec0-a572-e6c05783791c\") " pod="openstack-operators/keystone-operator-controller-manager-5b84cc7657-v9mbt" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.803286 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89bhn\" (UniqueName: \"kubernetes.io/projected/28d81d52-ba79-4c62-95a7-f5a1e48b8dda-kube-api-access-89bhn\") pod 
\"infra-operator-controller-manager-84788b6bc5-6pggm\" (UID: \"28d81d52-ba79-4c62-95a7-f5a1e48b8dda\") " pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.803319 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm6rc\" (UniqueName: \"kubernetes.io/projected/8890b7c7-8aba-485e-85db-ee154714c358-kube-api-access-pm6rc\") pod \"heat-operator-controller-manager-7ccfc8cf49-wmvbd\" (UID: \"8890b7c7-8aba-485e-85db-ee154714c358\") " pod="openstack-operators/heat-operator-controller-manager-7ccfc8cf49-wmvbd" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.809016 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-zh97x" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.821229 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-f8h2l" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.824476 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28d81d52-ba79-4c62-95a7-f5a1e48b8dda-cert\") pod \"infra-operator-controller-manager-84788b6bc5-6pggm\" (UID: \"28d81d52-ba79-4c62-95a7-f5a1e48b8dda\") " pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.827438 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpxtc\" (UniqueName: \"kubernetes.io/projected/d6b4df20-5cc4-49a7-b124-ac88e068f9a0-kube-api-access-gpxtc\") pod \"horizon-operator-controller-manager-5b477879bc-l8tj8\" (UID: \"d6b4df20-5cc4-49a7-b124-ac88e068f9a0\") " pod="openstack-operators/horizon-operator-controller-manager-5b477879bc-l8tj8" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 
06:50:22.827496 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.828407 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v95w\" (UniqueName: \"kubernetes.io/projected/84708733-8897-4752-8533-5463ce01d265-kube-api-access-9v95w\") pod \"glance-operator-controller-manager-fd648f65-7l7c8\" (UID: \"84708733-8897-4752-8533-5463ce01d265\") " pod="openstack-operators/glance-operator-controller-manager-fd648f65-7l7c8" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.833781 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm6rc\" (UniqueName: \"kubernetes.io/projected/8890b7c7-8aba-485e-85db-ee154714c358-kube-api-access-pm6rc\") pod \"heat-operator-controller-manager-7ccfc8cf49-wmvbd\" (UID: \"8890b7c7-8aba-485e-85db-ee154714c358\") " pod="openstack-operators/heat-operator-controller-manager-7ccfc8cf49-wmvbd" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.834323 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bchql\" (UniqueName: \"kubernetes.io/projected/351db139-bb78-4975-a6c1-ceb4904347f0-kube-api-access-bchql\") pod \"ironic-operator-controller-manager-5467f8988c-wklfs\" (UID: \"351db139-bb78-4975-a6c1-ceb4904347f0\") " pod="openstack-operators/ironic-operator-controller-manager-5467f8988c-wklfs" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.836538 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54d485fd9-98gw2"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.837009 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.838371 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54d485fd9-98gw2" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.848178 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-dd962" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.848416 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89bhn\" (UniqueName: \"kubernetes.io/projected/28d81d52-ba79-4c62-95a7-f5a1e48b8dda-kube-api-access-89bhn\") pod \"infra-operator-controller-manager-84788b6bc5-6pggm\" (UID: \"28d81d52-ba79-4c62-95a7-f5a1e48b8dda\") " pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.848452 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-rxnfm" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.854513 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-fd648f65-7l7c8" Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.860610 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.863893 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54d485fd9-98gw2"] Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.872981 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-7ccfc8cf49-wmvbd"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.875578 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-66f6d6849b-j85xs"]
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.876707 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-j85xs"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.880229 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4zkh9"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.892695 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p"]
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.893231 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b477879bc-l8tj8"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.894004 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.897697 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-chk8t"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.897889 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.904166 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrjg\" (UniqueName: \"kubernetes.io/projected/b60c40ec-ea4e-445c-8561-859cd7cd94de-kube-api-access-pnrjg\") pod \"nova-operator-controller-manager-6c9b57c67-f58xh\" (UID: \"b60c40ec-ea4e-445c-8561-859cd7cd94de\") " pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.904202 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b94k\" (UniqueName: \"kubernetes.io/projected/5fa5a7fd-cee3-4bc6-8908-f7c403ed7f92-kube-api-access-2b94k\") pod \"octavia-operator-controller-manager-69f59f9d8-xbgdv\" (UID: \"5fa5a7fd-cee3-4bc6-8908-f7c403ed7f92\") " pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.904234 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s6f5\" (UniqueName: \"kubernetes.io/projected/1f44d497-1eb5-40cd-9026-30d623318705-kube-api-access-6s6f5\") pod \"neutron-operator-controller-manager-69b956fbf6-zc5vg\" (UID: \"1f44d497-1eb5-40cd-9026-30d623318705\") " pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-zc5vg"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.904267 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6ppq\" (UniqueName: \"kubernetes.io/projected/e503c39d-8eed-4db8-ad49-9a78f7c2bfa2-kube-api-access-c6ppq\") pod \"manila-operator-controller-manager-7cb48dbc-qftvm\" (UID: \"e503c39d-8eed-4db8-ad49-9a78f7c2bfa2\") " pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-qftvm"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.904313 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7j86\" (UniqueName: \"kubernetes.io/projected/60681278-f71a-4ec0-a572-e6c05783791c-kube-api-access-q7j86\") pod \"keystone-operator-controller-manager-5b84cc7657-v9mbt\" (UID: \"60681278-f71a-4ec0-a572-e6c05783791c\") " pod="openstack-operators/keystone-operator-controller-manager-5b84cc7657-v9mbt"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.904359 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kns95\" (UniqueName: \"kubernetes.io/projected/0443a624-6fd7-4b74-8e9c-7a1851459790-kube-api-access-kns95\") pod \"mariadb-operator-controller-manager-d6c9dc5bc-fhpnh\" (UID: \"0443a624-6fd7-4b74-8e9c-7a1851459790\") " pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-fhpnh"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.904391 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw4pf\" (UniqueName: \"kubernetes.io/projected/b71b7f35-8fba-4da1-83f9-4b7c08b15990-kube-api-access-cw4pf\") pod \"ovn-operator-controller-manager-54d485fd9-98gw2\" (UID: \"b71b7f35-8fba-4da1-83f9-4b7c08b15990\") " pod="openstack-operators/ovn-operator-controller-manager-54d485fd9-98gw2"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.909231 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-66f6d6849b-j85xs"]
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.912339 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t"]
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.913677 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.915867 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p"]
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.921130 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-qn7r5"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.923341 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.924187 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t"]
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.929204 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6ppq\" (UniqueName: \"kubernetes.io/projected/e503c39d-8eed-4db8-ad49-9a78f7c2bfa2-kube-api-access-c6ppq\") pod \"manila-operator-controller-manager-7cb48dbc-qftvm\" (UID: \"e503c39d-8eed-4db8-ad49-9a78f7c2bfa2\") " pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-qftvm"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.935184 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7j86\" (UniqueName: \"kubernetes.io/projected/60681278-f71a-4ec0-a572-e6c05783791c-kube-api-access-q7j86\") pod \"keystone-operator-controller-manager-5b84cc7657-v9mbt\" (UID: \"60681278-f71a-4ec0-a572-e6c05783791c\") " pod="openstack-operators/keystone-operator-controller-manager-5b84cc7657-v9mbt"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.939466 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v"]
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.941056 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.948592 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v"]
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.961712 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5467f8988c-wklfs"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.966627 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5b84cc7657-v9mbt"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.975005 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6bb6dcddc-lcqsc"]
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.975419 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-xszdt"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.976896 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-lcqsc"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.978053 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-8snxf"
Oct 08 06:50:22 crc kubenswrapper[4958]: I1008 06:50:22.988551 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6bb6dcddc-lcqsc"]
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.005551 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnrjg\" (UniqueName: \"kubernetes.io/projected/b60c40ec-ea4e-445c-8561-859cd7cd94de-kube-api-access-pnrjg\") pod \"nova-operator-controller-manager-6c9b57c67-f58xh\" (UID: \"b60c40ec-ea4e-445c-8561-859cd7cd94de\") " pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.005590 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b94k\" (UniqueName: \"kubernetes.io/projected/5fa5a7fd-cee3-4bc6-8908-f7c403ed7f92-kube-api-access-2b94k\") pod \"octavia-operator-controller-manager-69f59f9d8-xbgdv\" (UID: \"5fa5a7fd-cee3-4bc6-8908-f7c403ed7f92\") " pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.005613 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be960fcf-8121-4145-8548-a1a46dc9f8bb-cert\") pod \"openstack-baremetal-operator-controller-manager-6875c66686z426p\" (UID: \"be960fcf-8121-4145-8548-a1a46dc9f8bb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.005641 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s6f5\" (UniqueName: \"kubernetes.io/projected/1f44d497-1eb5-40cd-9026-30d623318705-kube-api-access-6s6f5\") pod \"neutron-operator-controller-manager-69b956fbf6-zc5vg\" (UID: \"1f44d497-1eb5-40cd-9026-30d623318705\") " pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-zc5vg"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.005669 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cg4m\" (UniqueName: \"kubernetes.io/projected/be960fcf-8121-4145-8548-a1a46dc9f8bb-kube-api-access-8cg4m\") pod \"openstack-baremetal-operator-controller-manager-6875c66686z426p\" (UID: \"be960fcf-8121-4145-8548-a1a46dc9f8bb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.005708 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttm2j\" (UniqueName: \"kubernetes.io/projected/88ec1ec0-d81b-4b01-a299-7980c8fbb961-kube-api-access-ttm2j\") pod \"telemetry-operator-controller-manager-f589c7597-p9q6v\" (UID: \"88ec1ec0-d81b-4b01-a299-7980c8fbb961\") " pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.005737 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vx6b\" (UniqueName: \"kubernetes.io/projected/48aee074-1139-426c-a04c-9f65ccb3ccde-kube-api-access-7vx6b\") pod \"placement-operator-controller-manager-66f6d6849b-j85xs\" (UID: \"48aee074-1139-426c-a04c-9f65ccb3ccde\") " pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-j85xs"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.005756 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kns95\" (UniqueName: \"kubernetes.io/projected/0443a624-6fd7-4b74-8e9c-7a1851459790-kube-api-access-kns95\") pod \"mariadb-operator-controller-manager-d6c9dc5bc-fhpnh\" (UID: \"0443a624-6fd7-4b74-8e9c-7a1851459790\") " pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-fhpnh"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.005788 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj8jt\" (UniqueName: \"kubernetes.io/projected/8140f9b7-f1fe-4151-a453-ff7990ee085b-kube-api-access-cj8jt\") pod \"swift-operator-controller-manager-76d5577b-pgq5t\" (UID: \"8140f9b7-f1fe-4151-a453-ff7990ee085b\") " pod="openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.005807 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw4pf\" (UniqueName: \"kubernetes.io/projected/b71b7f35-8fba-4da1-83f9-4b7c08b15990-kube-api-access-cw4pf\") pod \"ovn-operator-controller-manager-54d485fd9-98gw2\" (UID: \"b71b7f35-8fba-4da1-83f9-4b7c08b15990\") " pod="openstack-operators/ovn-operator-controller-manager-54d485fd9-98gw2"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.020781 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b94k\" (UniqueName: \"kubernetes.io/projected/5fa5a7fd-cee3-4bc6-8908-f7c403ed7f92-kube-api-access-2b94k\") pod \"octavia-operator-controller-manager-69f59f9d8-xbgdv\" (UID: \"5fa5a7fd-cee3-4bc6-8908-f7c403ed7f92\") " pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.024767 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s6f5\" (UniqueName: \"kubernetes.io/projected/1f44d497-1eb5-40cd-9026-30d623318705-kube-api-access-6s6f5\") pod \"neutron-operator-controller-manager-69b956fbf6-zc5vg\" (UID: \"1f44d497-1eb5-40cd-9026-30d623318705\") " pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-zc5vg"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.026796 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnrjg\" (UniqueName: \"kubernetes.io/projected/b60c40ec-ea4e-445c-8561-859cd7cd94de-kube-api-access-pnrjg\") pod \"nova-operator-controller-manager-6c9b57c67-f58xh\" (UID: \"b60c40ec-ea4e-445c-8561-859cd7cd94de\") " pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.033999 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw4pf\" (UniqueName: \"kubernetes.io/projected/b71b7f35-8fba-4da1-83f9-4b7c08b15990-kube-api-access-cw4pf\") pod \"ovn-operator-controller-manager-54d485fd9-98gw2\" (UID: \"b71b7f35-8fba-4da1-83f9-4b7c08b15990\") " pod="openstack-operators/ovn-operator-controller-manager-54d485fd9-98gw2"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.048405 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kns95\" (UniqueName: \"kubernetes.io/projected/0443a624-6fd7-4b74-8e9c-7a1851459790-kube-api-access-kns95\") pod \"mariadb-operator-controller-manager-d6c9dc5bc-fhpnh\" (UID: \"0443a624-6fd7-4b74-8e9c-7a1851459790\") " pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-fhpnh"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.069513 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn"]
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.072830 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.081417 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-qftvm"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.081507 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6wd9w"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.083112 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn"]
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.087702 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-fhpnh"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.102326 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-zc5vg"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.106855 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttm2j\" (UniqueName: \"kubernetes.io/projected/88ec1ec0-d81b-4b01-a299-7980c8fbb961-kube-api-access-ttm2j\") pod \"telemetry-operator-controller-manager-f589c7597-p9q6v\" (UID: \"88ec1ec0-d81b-4b01-a299-7980c8fbb961\") " pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.106922 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vx6b\" (UniqueName: \"kubernetes.io/projected/48aee074-1139-426c-a04c-9f65ccb3ccde-kube-api-access-7vx6b\") pod \"placement-operator-controller-manager-66f6d6849b-j85xs\" (UID: \"48aee074-1139-426c-a04c-9f65ccb3ccde\") " pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-j85xs"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.106993 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj8jt\" (UniqueName: \"kubernetes.io/projected/8140f9b7-f1fe-4151-a453-ff7990ee085b-kube-api-access-cj8jt\") pod \"swift-operator-controller-manager-76d5577b-pgq5t\" (UID: \"8140f9b7-f1fe-4151-a453-ff7990ee085b\") " pod="openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.107018 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkdb8\" (UniqueName: \"kubernetes.io/projected/fbdb6830-b324-47f5-8022-241051600f27-kube-api-access-vkdb8\") pod \"test-operator-controller-manager-6bb6dcddc-lcqsc\" (UID: \"fbdb6830-b324-47f5-8022-241051600f27\") " pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-lcqsc"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.107078 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be960fcf-8121-4145-8548-a1a46dc9f8bb-cert\") pod \"openstack-baremetal-operator-controller-manager-6875c66686z426p\" (UID: \"be960fcf-8121-4145-8548-a1a46dc9f8bb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.107111 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cg4m\" (UniqueName: \"kubernetes.io/projected/be960fcf-8121-4145-8548-a1a46dc9f8bb-kube-api-access-8cg4m\") pod \"openstack-baremetal-operator-controller-manager-6875c66686z426p\" (UID: \"be960fcf-8121-4145-8548-a1a46dc9f8bb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p"
Oct 08 06:50:23 crc kubenswrapper[4958]: E1008 06:50:23.107816 4958 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Oct 08 06:50:23 crc kubenswrapper[4958]: E1008 06:50:23.107862 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be960fcf-8121-4145-8548-a1a46dc9f8bb-cert podName:be960fcf-8121-4145-8548-a1a46dc9f8bb nodeName:}" failed. No retries permitted until 2025-10-08 06:50:23.607848876 +0000 UTC m=+966.737541477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/be960fcf-8121-4145-8548-a1a46dc9f8bb-cert") pod "openstack-baremetal-operator-controller-manager-6875c66686z426p" (UID: "be960fcf-8121-4145-8548-a1a46dc9f8bb") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.129203 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vx6b\" (UniqueName: \"kubernetes.io/projected/48aee074-1139-426c-a04c-9f65ccb3ccde-kube-api-access-7vx6b\") pod \"placement-operator-controller-manager-66f6d6849b-j85xs\" (UID: \"48aee074-1139-426c-a04c-9f65ccb3ccde\") " pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-j85xs"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.129260 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj8jt\" (UniqueName: \"kubernetes.io/projected/8140f9b7-f1fe-4151-a453-ff7990ee085b-kube-api-access-cj8jt\") pod \"swift-operator-controller-manager-76d5577b-pgq5t\" (UID: \"8140f9b7-f1fe-4151-a453-ff7990ee085b\") " pod="openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.131137 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cg4m\" (UniqueName: \"kubernetes.io/projected/be960fcf-8121-4145-8548-a1a46dc9f8bb-kube-api-access-8cg4m\") pod \"openstack-baremetal-operator-controller-manager-6875c66686z426p\" (UID: \"be960fcf-8121-4145-8548-a1a46dc9f8bb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.140594 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttm2j\" (UniqueName: \"kubernetes.io/projected/88ec1ec0-d81b-4b01-a299-7980c8fbb961-kube-api-access-ttm2j\") pod \"telemetry-operator-controller-manager-f589c7597-p9q6v\" (UID: \"88ec1ec0-d81b-4b01-a299-7980c8fbb961\") " pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.168982 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.182035 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.182272 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b"]
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.186401 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54d485fd9-98gw2"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.196825 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-j85xs"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.201421 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b"]
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.201518 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.208375 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-fjf9f"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.208574 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.209803 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhl5k\" (UniqueName: \"kubernetes.io/projected/0ce4d96e-f3ed-48b7-b871-e8113fb727a2-kube-api-access-zhl5k\") pod \"watcher-operator-controller-manager-5d98cc5575-phmxn\" (UID: \"0ce4d96e-f3ed-48b7-b871-e8113fb727a2\") " pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.209857 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkdb8\" (UniqueName: \"kubernetes.io/projected/fbdb6830-b324-47f5-8022-241051600f27-kube-api-access-vkdb8\") pod \"test-operator-controller-manager-6bb6dcddc-lcqsc\" (UID: \"fbdb6830-b324-47f5-8022-241051600f27\") " pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-lcqsc"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.245426 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkdb8\" (UniqueName: \"kubernetes.io/projected/fbdb6830-b324-47f5-8022-241051600f27-kube-api-access-vkdb8\") pod \"test-operator-controller-manager-6bb6dcddc-lcqsc\" (UID: \"fbdb6830-b324-47f5-8022-241051600f27\") " pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-lcqsc"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.258932 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8"]
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.259780 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.271986 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-kcw4l"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.288023 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8"]
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.300420 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.316502 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.323707 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc78m\" (UniqueName: \"kubernetes.io/projected/8407a2bd-3444-4331-840d-c9729358f57a-kube-api-access-sc78m\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8\" (UID: \"8407a2bd-3444-4331-840d-c9729358f57a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.323777 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06a6179d-6e3a-4817-8403-7b6db9d933c8-cert\") pod \"openstack-operator-controller-manager-6bfd56c677-mtx9b\" (UID: \"06a6179d-6e3a-4817-8403-7b6db9d933c8\") " pod="openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.323804 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhl5k\" (UniqueName: \"kubernetes.io/projected/0ce4d96e-f3ed-48b7-b871-e8113fb727a2-kube-api-access-zhl5k\") pod \"watcher-operator-controller-manager-5d98cc5575-phmxn\" (UID: \"0ce4d96e-f3ed-48b7-b871-e8113fb727a2\") " pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.323839 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlr7h\" (UniqueName: \"kubernetes.io/projected/06a6179d-6e3a-4817-8403-7b6db9d933c8-kube-api-access-wlr7h\") pod \"openstack-operator-controller-manager-6bfd56c677-mtx9b\" (UID: \"06a6179d-6e3a-4817-8403-7b6db9d933c8\") " pod="openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.329327 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f56ff694-6fjfx"]
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.342188 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-lcqsc"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.346137 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhl5k\" (UniqueName: \"kubernetes.io/projected/0ce4d96e-f3ed-48b7-b871-e8113fb727a2-kube-api-access-zhl5k\") pod \"watcher-operator-controller-manager-5d98cc5575-phmxn\" (UID: \"0ce4d96e-f3ed-48b7-b871-e8113fb727a2\") " pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn"
Oct 08 06:50:23 crc kubenswrapper[4958]: W1008 06:50:23.406030 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabbaa69c_2318_4087_a167_0bbe69928971.slice/crio-48317f88ef21f100d7496901d3b4b64644340c82f78f76c57b49baae0c7030d9 WatchSource:0}: Error finding container 48317f88ef21f100d7496901d3b4b64644340c82f78f76c57b49baae0c7030d9: Status 404 returned error can't find the container with id 48317f88ef21f100d7496901d3b4b64644340c82f78f76c57b49baae0c7030d9
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.425291 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlr7h\" (UniqueName: \"kubernetes.io/projected/06a6179d-6e3a-4817-8403-7b6db9d933c8-kube-api-access-wlr7h\") pod \"openstack-operator-controller-manager-6bfd56c677-mtx9b\" (UID: \"06a6179d-6e3a-4817-8403-7b6db9d933c8\") " pod="openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.425420 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc78m\" (UniqueName: \"kubernetes.io/projected/8407a2bd-3444-4331-840d-c9729358f57a-kube-api-access-sc78m\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8\" (UID: \"8407a2bd-3444-4331-840d-c9729358f57a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.425527 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06a6179d-6e3a-4817-8403-7b6db9d933c8-cert\") pod \"openstack-operator-controller-manager-6bfd56c677-mtx9b\" (UID: \"06a6179d-6e3a-4817-8403-7b6db9d933c8\") " pod="openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b"
Oct 08 06:50:23 crc kubenswrapper[4958]: E1008 06:50:23.425846 4958 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Oct 08 06:50:23 crc kubenswrapper[4958]: E1008 06:50:23.425925 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06a6179d-6e3a-4817-8403-7b6db9d933c8-cert podName:06a6179d-6e3a-4817-8403-7b6db9d933c8 nodeName:}" failed. No retries permitted until 2025-10-08 06:50:23.92590724 +0000 UTC m=+967.055599841 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06a6179d-6e3a-4817-8403-7b6db9d933c8-cert") pod "openstack-operator-controller-manager-6bfd56c677-mtx9b" (UID: "06a6179d-6e3a-4817-8403-7b6db9d933c8") : secret "webhook-server-cert" not found
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.456463 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.456480 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlr7h\" (UniqueName: \"kubernetes.io/projected/06a6179d-6e3a-4817-8403-7b6db9d933c8-kube-api-access-wlr7h\") pod \"openstack-operator-controller-manager-6bfd56c677-mtx9b\" (UID: \"06a6179d-6e3a-4817-8403-7b6db9d933c8\") " pod="openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.457515 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc78m\" (UniqueName: \"kubernetes.io/projected/8407a2bd-3444-4331-840d-c9729358f57a-kube-api-access-sc78m\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8\" (UID: \"8407a2bd-3444-4331-840d-c9729358f57a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.482831 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.533894 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-84bd8f6848-zh97x"]
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.611531 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fad3625-6c81-4040-976f-a91d6fb91dde" path="/var/lib/kubelet/pods/5fad3625-6c81-4040-976f-a91d6fb91dde/volumes"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.612301 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-58d86cd59d-f8h2l"]
Oct 08 06:50:23 crc kubenswrapper[4958]: W1008 06:50:23.622714 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac064bd6_8d20_4224_b54f_e074bff95072.slice/crio-e7eb4985c43d65724720c1c4d5af57dc34b5d510f987304ea80f44ee13b3e03a WatchSource:0}: Error finding container e7eb4985c43d65724720c1c4d5af57dc34b5d510f987304ea80f44ee13b3e03a: Status 404 returned error can't find the container with id e7eb4985c43d65724720c1c4d5af57dc34b5d510f987304ea80f44ee13b3e03a
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.631492 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be960fcf-8121-4145-8548-a1a46dc9f8bb-cert\") pod \"openstack-baremetal-operator-controller-manager-6875c66686z426p\" (UID: \"be960fcf-8121-4145-8548-a1a46dc9f8bb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.646537 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be960fcf-8121-4145-8548-a1a46dc9f8bb-cert\") pod \"openstack-baremetal-operator-controller-manager-6875c66686z426p\" (UID: \"be960fcf-8121-4145-8548-a1a46dc9f8bb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.738224 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-7ccfc8cf49-wmvbd"]
Oct 08 06:50:23 crc kubenswrapper[4958]: W1008 06:50:23.743281 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8890b7c7_8aba_485e_85db_ee154714c358.slice/crio-190049de6558d12bc6150c579035e61be857223d82c22b03f061fa4b094bd227 WatchSource:0}: Error finding container 190049de6558d12bc6150c579035e61be857223d82c22b03f061fa4b094bd227: Status 404 returned error can't find the container with id 190049de6558d12bc6150c579035e61be857223d82c22b03f061fa4b094bd227
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.811826 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.937753 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06a6179d-6e3a-4817-8403-7b6db9d933c8-cert\") pod \"openstack-operator-controller-manager-6bfd56c677-mtx9b\" (UID: \"06a6179d-6e3a-4817-8403-7b6db9d933c8\") " pod="openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b"
Oct 08 06:50:23 crc kubenswrapper[4958]: I1008 06:50:23.944503 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06a6179d-6e3a-4817-8403-7b6db9d933c8-cert\") pod \"openstack-operator-controller-manager-6bfd56c677-mtx9b\" (UID: \"06a6179d-6e3a-4817-8403-7b6db9d933c8\") " pod="openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b"
Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.056532 4958 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b" Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.088871 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f56ff694-6fjfx" event={"ID":"abbaa69c-2318-4087-a167-0bbe69928971","Type":"ContainerStarted","Data":"48317f88ef21f100d7496901d3b4b64644340c82f78f76c57b49baae0c7030d9"} Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.090915 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-zh97x" event={"ID":"37de4c94-71ef-4563-915b-468370179903","Type":"ContainerStarted","Data":"55035b93f20e29323db87df131edf67e5fa2e6aced7ee28c40d0b3da49952830"} Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.092103 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-f8h2l" event={"ID":"ac064bd6-8d20-4224-b54f-e074bff95072","Type":"ContainerStarted","Data":"e7eb4985c43d65724720c1c4d5af57dc34b5d510f987304ea80f44ee13b3e03a"} Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.092958 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-7ccfc8cf49-wmvbd" event={"ID":"8890b7c7-8aba-485e-85db-ee154714c358","Type":"ContainerStarted","Data":"190049de6558d12bc6150c579035e61be857223d82c22b03f061fa4b094bd227"} Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.195597 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm"] Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.228396 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-fd648f65-7l7c8"] Oct 08 06:50:24 crc kubenswrapper[4958]: W1008 06:50:24.230554 4958 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28d81d52_ba79_4c62_95a7_f5a1e48b8dda.slice/crio-b71a9e42af3cb0a17f462e8a55e4bab085b48f09ec62d5422b5cf44a57c1df4e WatchSource:0}: Error finding container b71a9e42af3cb0a17f462e8a55e4bab085b48f09ec62d5422b5cf44a57c1df4e: Status 404 returned error can't find the container with id b71a9e42af3cb0a17f462e8a55e4bab085b48f09ec62d5422b5cf44a57c1df4e Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.233519 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b477879bc-l8tj8"] Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.243878 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7cb48dbc-qftvm"] Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.257534 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5467f8988c-wklfs"] Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.264612 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-69b956fbf6-zc5vg"] Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.269796 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-fhpnh"] Oct 08 06:50:24 crc kubenswrapper[4958]: W1008 06:50:24.282560 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84708733_8897_4752_8533_5463ce01d265.slice/crio-767a6eea793be4a6b22b257be7aaa7a15e0589a946c4d4578945af842c97ed9d WatchSource:0}: Error finding container 767a6eea793be4a6b22b257be7aaa7a15e0589a946c4d4578945af842c97ed9d: Status 404 returned error can't find the container with id 767a6eea793be4a6b22b257be7aaa7a15e0589a946c4d4578945af842c97ed9d Oct 08 06:50:24 crc 
kubenswrapper[4958]: W1008 06:50:24.289791 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod351db139_bb78_4975_a6c1_ceb4904347f0.slice/crio-09eea5b1b19213286580cd8ae1c279e0ff331b2c84a20d1144ecee9bc0e4027b WatchSource:0}: Error finding container 09eea5b1b19213286580cd8ae1c279e0ff331b2c84a20d1144ecee9bc0e4027b: Status 404 returned error can't find the container with id 09eea5b1b19213286580cd8ae1c279e0ff331b2c84a20d1144ecee9bc0e4027b Oct 08 06:50:24 crc kubenswrapper[4958]: W1008 06:50:24.294745 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0443a624_6fd7_4b74_8e9c_7a1851459790.slice/crio-eab6b8afca572b9d2c917d6b30c199f13b2ab0c0420510b253110b6a163202a8 WatchSource:0}: Error finding container eab6b8afca572b9d2c917d6b30c199f13b2ab0c0420510b253110b6a163202a8: Status 404 returned error can't find the container with id eab6b8afca572b9d2c917d6b30c199f13b2ab0c0420510b253110b6a163202a8 Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.362387 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5b84cc7657-v9mbt"] Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.382970 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54d485fd9-98gw2"] Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.401594 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-66f6d6849b-j85xs"] Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.409009 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh"] Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.413326 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v"] Oct 08 06:50:24 crc kubenswrapper[4958]: W1008 06:50:24.418100 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60681278_f71a_4ec0_a572_e6c05783791c.slice/crio-2bed0f66830cd21a373f7b5cfa752cb7090347c8c9c091e3dd1274dd98dfe882 WatchSource:0}: Error finding container 2bed0f66830cd21a373f7b5cfa752cb7090347c8c9c091e3dd1274dd98dfe882: Status 404 returned error can't find the container with id 2bed0f66830cd21a373f7b5cfa752cb7090347c8c9c091e3dd1274dd98dfe882 Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.418700 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6bb6dcddc-lcqsc"] Oct 08 06:50:24 crc kubenswrapper[4958]: W1008 06:50:24.422751 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48aee074_1139_426c_a04c_9f65ccb3ccde.slice/crio-497c7a2502866b456c0475380b436a67dbd77476aa330a8593a9cf55408670cd WatchSource:0}: Error finding container 497c7a2502866b456c0475380b436a67dbd77476aa330a8593a9cf55408670cd: Status 404 returned error can't find the container with id 497c7a2502866b456c0475380b436a67dbd77476aa330a8593a9cf55408670cd Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.425592 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv"] Oct 08 06:50:24 crc kubenswrapper[4958]: E1008 06:50:24.433292 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:637bb7b9ac308bc1e323391a3593b824f688090a856c83385814c17a571b1eed,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cj8jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
swift-operator-controller-manager-76d5577b-pgq5t_openstack-operators(8140f9b7-f1fe-4151-a453-ff7990ee085b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 06:50:24 crc kubenswrapper[4958]: E1008 06:50:24.433301 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:bf55026ba10b80e1e24733078bd204cef8766d21a305fd000707a1e3b30ff52e,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ttm2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-f589c7597-p9q6v_openstack-operators(88ec1ec0-d81b-4b01-a299-7980c8fbb961): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.436388 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t"] Oct 08 06:50:24 crc kubenswrapper[4958]: W1008 06:50:24.452789 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8407a2bd_3444_4331_840d_c9729358f57a.slice/crio-5f2a8b1b028a760d8c9f5b188a1d9a7f0c259fdc673b015d3b74e5e0575d5a43 WatchSource:0}: Error finding container 5f2a8b1b028a760d8c9f5b188a1d9a7f0c259fdc673b015d3b74e5e0575d5a43: Status 404 returned error can't find the container with id 5f2a8b1b028a760d8c9f5b188a1d9a7f0c259fdc673b015d3b74e5e0575d5a43 Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.456216 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn"] Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.474196 4958 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8"] Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.480021 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p"] Oct 08 06:50:24 crc kubenswrapper[4958]: E1008 06:50:24.488667 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pnrjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6c9b57c67-f58xh_openstack-operators(b60c40ec-ea4e-445c-8561-859cd7cd94de): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 06:50:24 crc kubenswrapper[4958]: E1008 06:50:24.488799 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:64f57b2b59dea2bd9fae91490c5bec2687131884a049e6579819d9f951b877c6,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhl5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5d98cc5575-phmxn_openstack-operators(0ce4d96e-f3ed-48b7-b871-e8113fb727a2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 06:50:24 crc kubenswrapper[4958]: E1008 06:50:24.490153 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:da5c3078d80878d66c616e6f8a0bb909f95d971cde2c612f96fded064113e182,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2b94k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
octavia-operator-controller-manager-69f59f9d8-xbgdv_openstack-operators(5fa5a7fd-cee3-4bc6-8908-f7c403ed7f92): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 06:50:24 crc kubenswrapper[4958]: E1008 06:50:24.490185 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sc78m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8_openstack-operators(8407a2bd-3444-4331-840d-c9729358f57a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 06:50:24 crc kubenswrapper[4958]: E1008 06:50:24.492115 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8" podUID="8407a2bd-3444-4331-840d-c9729358f57a" Oct 08 06:50:24 crc kubenswrapper[4958]: W1008 06:50:24.503100 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe960fcf_8121_4145_8548_a1a46dc9f8bb.slice/crio-2969b529ddef284cef69f01b0d314e0f5bc0c526d7c9bd0040051b6021a53e8d WatchSource:0}: Error finding container 2969b529ddef284cef69f01b0d314e0f5bc0c526d7c9bd0040051b6021a53e8d: Status 404 returned error can't find the container with id 2969b529ddef284cef69f01b0d314e0f5bc0c526d7c9bd0040051b6021a53e8d Oct 08 06:50:24 crc kubenswrapper[4958]: E1008 06:50:24.517963 
4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:03b4f3db4b373515f7e4095984b97197c05a14f87b2a0a525eb5d7be1d7bda66,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner@sha256:6722a752fb7cbffbae811f6ad6567120fbd4ebbe8c38a83ec2df02850a3276bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api@sha256:2115452234aedb505ed4efc6cd9b9a4ce3b9809aa7d0128d8fbeeee84dad1a69,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator@sha256:50597a8eaa6c4383f357574dcab8358b698729797b4156d932985a08ab86b7cd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener@sha256:cb4997d62c7b2534233a676cb92e19cf85dda07e2fb9fa642c28aab30489f69a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier@sha256:1ccbf3f6cf24c9ee91bed71467491e22b8cb4b95bce90250f4174fae936b0fa1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24@sha256:e91d58021b54c46883595ff66be65882de54abdb3be2ca53c4162
b20d18b5f48,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:cbe345acb37e57986ecf6685d28c72d0e639bdb493a18e9d3ba947d6c3a16384,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener@sha256:e7dcc3bf23d5e0393ac173e3c43d4ae85f4613a4fd16b3c147dc32ae491d49bf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:2a1a8b582c6e4cc31081bd8b0887acf45e31c1d14596c4e361d27d08fef0debf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:86daeb9c834bfcedb533086dff59a6b5b6e832b94ce2a9116337f8736bb80032,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:5d4fdf424fad33a3650163e9e7423f92e97de3305508c2b7c6435822e0313189,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:6d28de018f6e1672e775a75735e3bc16b63da41acd8fb5196ee0b06856c07133,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter@sha256:7211a617ec657701ca819aa0ba28e1d5750f5bf2c1391b755cc4a48cc360b0fa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:c5fc9b72fc593bcf3b569c7ed24a256448eb1afab1504e668a3822e978be1306,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_A
PI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:85c75d60e1bd2f8a9ea0a2bb21a8df64c0a6f7b504cc1a05a355981d4b90e92f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:88b99249f15470f359fb554f7f3a56974b743f4655e3f0c982c0260f75a67697,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:e861d66785047d39eb68d9bac23e3f57ac84d9bd95593502d9b3b913b99fd1a4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:b95f09bf3d259f9eacf3b63931977483f5c3c332f49b95ee8a69d8e3fb71d082,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api@sha256:6fc7801c0d18d41b9f11484b1cdb342de9cebd93072ec2205dbe40945715184f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9@sha256:d4d824b80cbed683543d9e8c7045ac97e080774f45a5067ccbca26404e067821,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central@sha256:182ec75938d8d3fb7d8f916373368add24062fec90489aa57776a81d0b36ea20,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns@sha256:9507ba5ab74cbae902e2dc07f89c7b3b5b76d8079e444365fe0eee6000fd7aaa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer@sha256:17db080dcc4099f8a20aa0f238b6bca5c104672ae46743adeab9d1637725ecaa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ope
nstack-unbound@sha256:fd55cf3d73bfdc518419c9ba0b0cbef275140ae2d3bd0342a7310f81d57c2d78,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker@sha256:d164a9bd383f50df69fc22e7422f4650cd5076c90ed19278fc0f04e54345a63d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr@sha256:6beffe7d0bd75f9d1f495aeb7ab2334a2414af2c581d4833363df8441ed01018,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron@sha256:2308c7b6c3d0aabbadfc9a06d84d67d2243f27fe8eed740ee96b1ce910203f62,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent@sha256:9cf0ca292340f1f978603955ef682effbf24316d6e2376b1c89906d84c3f06d0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent@sha256:58f678016d7f6c8fe579abe886fd138ef853642faa6766ca60639feac12d82ac,ValueFrom
:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent@sha256:46f92909153aaf03a585374b77d103c536509747e3270558d9a533295c46a7c5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent@sha256:7fe367f51638c5c302fd3f8e66a31b09cb3b11519a7f72ef142b6c6fe8b91694,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:9ebf424d4107275a2e3f21f7a18ef257ff2f97c1298109ac7c802a5a4f4794f2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api@sha256:4fcbe0d9a3c845708ecc32102ad4abbcbd947d87e5cf91f186de75b5d84ec681,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn@sha256:58a4e9a4dea86635c93ce37a2bb3c60ece62b3d656f6ee6a8845347cbb3e90fd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine@sha256:6f2b843bc9f4ceb1ee873972d69e6bae6e1dbd378b486995bc3697d8bcff6339,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon@sha256:03b4bb79b71d5ca7792d19c4c0ee08a5e5a407ad844c087305c42dd909ee7490,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached@sha256:773daada6402d9cad
089cdc809d6c0335456d057ac1a25441ab5d82add2f70f4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis@sha256:7323406a63fb3fdbb3eea4da0f7e8ed89c94c9bd0ad5ecd6c18fa4a4c2c550c4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:7ae82068011e2d2e5ddc88c943fd32ff4a11902793e7a1df729811b2e27122a0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:0c762c15d9d98d39cc9dc3d1f9a70f9188fef58d4e2f3b0c69c896cab8da5e48,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:febf65561eeef5b36b70d0d65ee83f6451e43ec97bfab4d826e14215da6ff19b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:b8aadfc3d547c5ef1e27fcb573d4760cf8c2f2271eefe1793c35a0d46b640837,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:ecc91fd5079ee6d0c6ae1b11e97da790e33864d0e1930e574f959da2bddfa59a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent@sha256:2e981e93f99c929a3f04e5e41c8f645d44d390a9aeee3c5193cce7ec2edcbf3a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone@sha256:1e5714637b6e1a24c2858fe6d9bbb3f00bc61d69ad74a657b1c23682bf4cb2b7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value
:quay.io/podified-antelope-centos9/openstack-manila-api@sha256:35b8dcf27dc3b67f3840fa0e693ff312f74f7e22c634dff206a5c4d0133c716c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler@sha256:e109e4863e05e803dbfe04917756fd52231c560c65353170a2000be6cc2bb53d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share@sha256:6df0bebd9318ce11624413249e7e9781311638f276f8877668d3b382fe90e62f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:56b75d97f4a48c8cf58b3a7c18c43618efb308bf0188124f6301142e61299b0c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils@sha256:a51ed62767206067aa501142dbf01f20b3d65325d30faf1b4d6424d5b17dfba5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api@sha256:592e3cd32d3cc97a69093ad905b449aa374ffbb1b2644b738bb6c1434476d1f6,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:9596452e283febbe08204d0ef0fd1992af3395d0969f7ac76663ed7c8be5b4d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:d61005a10bef1b37762a8a41e6755c1169241e36cc5f92886bca6f4f6b9c381a,ValueFrom:n
il,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:e6a4335bcbeed3cd3e73ac879f754e314761e4a417a67539ca88e96a79346328,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:97d88fc53421b699fc91983313d7beec4a0f177089e95bdf5ba15c3f521db9a9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager@sha256:5365e5c9c3ad2ede1b6945255b2cc6b009d642c39babdf25e0655282cfa646fe,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:5b55795d774e0ea160ff8a7fd491ed41cf2d93c7d821694abb3a879eaffcefeb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog@sha256:26e955c46a6063eafcfeb79430bf3d9268dbe95687c00e63a624b3ec5a846f5a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker@sha256:58939baa18ab09e2b24996c5f3665ae52274b781f661ea06a67c991e9a832d5a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:b8bff6857fec93c3c1521f1a8c23de21bcb86fc0f960972e81f6c3f95d4185be,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather@sha256:943eee724277e252795909137538a553ef5284c8103ad01b9be7b0138c66d14d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:qu
ay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:d97b08fd421065c8c33a523973822ac468500cbe853069aa9214393fbda7a908,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:289dea3beea1cd4405895fc42e44372b35e4a941e31c59e102c333471a3ca9b7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:9b19894fa67a81bf8ba4159b55b49f38877c670aeb97e2021c341cef2a9294e4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:ea164961ad30453ad0301c6b73364e1f1024f689634c88dd98265f9c7048e31d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:6f9f2ea45f0271f6da8eb05a5f74cf5ce6769479346f5c2f407ee6f31a9c7ff3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:59448516174fc3bab679b9a8dd62cb9a9d16b5734aadbeb98e960e3b7c79bd22,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:adcdeb8ecd601fb03c3b0901d5b5111af2ca48f7dd443e22224db6daaf08f5d0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account@sha256:2bf32d9b95899d7637dfe19d07cf1ecc9a06593984faff57a3c0dce060012edb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container@sha256:7a452cd18b64d522e8a1e25bdcea543e9fe5f5b76e1c5e044c
2b5334e06a326b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object@sha256:6a46aa13aa359b8e782a22d67db42db02bbf2bb7e35df4b684ac1daeda38cde3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:f6824854bea6b2acbb00c34639799b4744818d4adbdd40e37dc5088f9ae18d58,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all@sha256:a66d2fdc21f25c690f02e643d2666dbe7df43a64cd55086ec33d6755e6d809b9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api@sha256:8201aca97f4850c9fb9dab8f737b751943a79193e007bfb491592534dcdb5d3c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier@sha256:051f9e9cff0ac42bf53401a1ccc9c36b518ff189bc0ab89ffa9b5b318cad15d2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine@sha256:490c5afea446656fa5f404e46f69fb443dda2c3ea2ecb56798c0768f3d799993,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8cg4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-6875c66686z426p_openstack-operators(be960fcf-8121-4145-8548-a1a46dc9f8bb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 08 06:50:24 crc kubenswrapper[4958]: I1008 06:50:24.674194 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b"] Oct 08 06:50:24 crc kubenswrapper[4958]: E1008 06:50:24.751753 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t" podUID="8140f9b7-f1fe-4151-a453-ff7990ee085b" Oct 08 06:50:24 crc kubenswrapper[4958]: E1008 06:50:24.763396 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v" podUID="88ec1ec0-d81b-4b01-a299-7980c8fbb961" Oct 08 06:50:24 crc kubenswrapper[4958]: E1008 06:50:24.803881 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh" podUID="b60c40ec-ea4e-445c-8561-859cd7cd94de" Oct 08 06:50:24 crc kubenswrapper[4958]: E1008 06:50:24.814551 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn" podUID="0ce4d96e-f3ed-48b7-b871-e8113fb727a2" Oct 08 06:50:24 crc kubenswrapper[4958]: E1008 06:50:24.816034 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv" podUID="5fa5a7fd-cee3-4bc6-8908-f7c403ed7f92" Oct 08 06:50:24 crc kubenswrapper[4958]: E1008 06:50:24.842585 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p" podUID="be960fcf-8121-4145-8548-a1a46dc9f8bb" Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.106122 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5467f8988c-wklfs" event={"ID":"351db139-bb78-4975-a6c1-ceb4904347f0","Type":"ContainerStarted","Data":"09eea5b1b19213286580cd8ae1c279e0ff331b2c84a20d1144ecee9bc0e4027b"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.109743 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v" event={"ID":"88ec1ec0-d81b-4b01-a299-7980c8fbb961","Type":"ContainerStarted","Data":"fd22aa26cee4a55164b31c1e5d2c4f621df300b23f7730d7342f8b87a493d998"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.110141 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v" event={"ID":"88ec1ec0-d81b-4b01-a299-7980c8fbb961","Type":"ContainerStarted","Data":"da9eb2787174e570b2c70599b4b909a955aa47a759d8abfd594df7162d56a003"} Oct 08 06:50:25 crc kubenswrapper[4958]: E1008 06:50:25.113180 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:bf55026ba10b80e1e24733078bd204cef8766d21a305fd000707a1e3b30ff52e\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v" podUID="88ec1ec0-d81b-4b01-a299-7980c8fbb961" Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.115196 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn" 
event={"ID":"0ce4d96e-f3ed-48b7-b871-e8113fb727a2","Type":"ContainerStarted","Data":"8c8cef791fffa9706805f1b66e07285af186202e764acfae97134be018978661"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.115225 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn" event={"ID":"0ce4d96e-f3ed-48b7-b871-e8113fb727a2","Type":"ContainerStarted","Data":"43302f7edd9974ccc5b0583c1e55e63815619fb36008afeab09ee3f40c7c2850"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.116574 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-j85xs" event={"ID":"48aee074-1139-426c-a04c-9f65ccb3ccde","Type":"ContainerStarted","Data":"497c7a2502866b456c0475380b436a67dbd77476aa330a8593a9cf55408670cd"} Oct 08 06:50:25 crc kubenswrapper[4958]: E1008 06:50:25.117427 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:64f57b2b59dea2bd9fae91490c5bec2687131884a049e6579819d9f951b877c6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn" podUID="0ce4d96e-f3ed-48b7-b871-e8113fb727a2" Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.119892 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-fd648f65-7l7c8" event={"ID":"84708733-8897-4752-8533-5463ce01d265","Type":"ContainerStarted","Data":"767a6eea793be4a6b22b257be7aaa7a15e0589a946c4d4578945af842c97ed9d"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.121483 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh" 
event={"ID":"b60c40ec-ea4e-445c-8561-859cd7cd94de","Type":"ContainerStarted","Data":"39b6416fb9e258fa988ef06496f05c8c4bd0a7740c64ee4f7e89993d0874accf"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.121521 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh" event={"ID":"b60c40ec-ea4e-445c-8561-859cd7cd94de","Type":"ContainerStarted","Data":"02d4699b57e9fcb8a48db2fedd4b294b43480d01f3c7cab2ea26a8ae4e300927"} Oct 08 06:50:25 crc kubenswrapper[4958]: E1008 06:50:25.124631 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh" podUID="b60c40ec-ea4e-445c-8561-859cd7cd94de" Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.125064 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-lcqsc" event={"ID":"fbdb6830-b324-47f5-8022-241051600f27","Type":"ContainerStarted","Data":"b67d9f3cfa6487e3ca9c46b63b0e92ed8c7cb891a77621e2ff5d115274f9f6ed"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.127593 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8" event={"ID":"8407a2bd-3444-4331-840d-c9729358f57a","Type":"ContainerStarted","Data":"5f2a8b1b028a760d8c9f5b188a1d9a7f0c259fdc673b015d3b74e5e0575d5a43"} Oct 08 06:50:25 crc kubenswrapper[4958]: E1008 06:50:25.129355 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8" podUID="8407a2bd-3444-4331-840d-c9729358f57a" Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.129989 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t" event={"ID":"8140f9b7-f1fe-4151-a453-ff7990ee085b","Type":"ContainerStarted","Data":"f515131fd4ae7310236d0cef065162414a603549ee010a779a280ef0032fc12e"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.130016 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t" event={"ID":"8140f9b7-f1fe-4151-a453-ff7990ee085b","Type":"ContainerStarted","Data":"c488c84e8e251ad366c7949583a2ad31434317fd05655172007b5faebd1da6a7"} Oct 08 06:50:25 crc kubenswrapper[4958]: E1008 06:50:25.145406 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:637bb7b9ac308bc1e323391a3593b824f688090a856c83385814c17a571b1eed\\\"\"" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t" podUID="8140f9b7-f1fe-4151-a453-ff7990ee085b" Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.158566 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5b84cc7657-v9mbt" event={"ID":"60681278-f71a-4ec0-a572-e6c05783791c","Type":"ContainerStarted","Data":"2bed0f66830cd21a373f7b5cfa752cb7090347c8c9c091e3dd1274dd98dfe882"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.184259 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p" 
event={"ID":"be960fcf-8121-4145-8548-a1a46dc9f8bb","Type":"ContainerStarted","Data":"4b37335d5f89e9008dcc65ce0b209bed9b6b7af391ed30e9579239eef327b5a5"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.184310 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p" event={"ID":"be960fcf-8121-4145-8548-a1a46dc9f8bb","Type":"ContainerStarted","Data":"2969b529ddef284cef69f01b0d314e0f5bc0c526d7c9bd0040051b6021a53e8d"} Oct 08 06:50:25 crc kubenswrapper[4958]: E1008 06:50:25.186029 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p" podUID="be960fcf-8121-4145-8548-a1a46dc9f8bb" Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.187089 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b" event={"ID":"06a6179d-6e3a-4817-8403-7b6db9d933c8","Type":"ContainerStarted","Data":"7a743398e42fed267199e6c09c728e5bf4907a6c525f5e026d3eb1f56d7f4663"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.187145 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b" event={"ID":"06a6179d-6e3a-4817-8403-7b6db9d933c8","Type":"ContainerStarted","Data":"3d44ed04569630c980c4c94f9eaa53c05e8378b54c0d94d16d499f33783dc99e"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.187160 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b" 
event={"ID":"06a6179d-6e3a-4817-8403-7b6db9d933c8","Type":"ContainerStarted","Data":"3659fdbfdcbeca3d01f05784a314cb20bc2056ec03db149abff22eebd129e750"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.187375 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b" Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.204842 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm" event={"ID":"28d81d52-ba79-4c62-95a7-f5a1e48b8dda","Type":"ContainerStarted","Data":"b71a9e42af3cb0a17f462e8a55e4bab085b48f09ec62d5422b5cf44a57c1df4e"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.212038 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54d485fd9-98gw2" event={"ID":"b71b7f35-8fba-4da1-83f9-4b7c08b15990","Type":"ContainerStarted","Data":"ad7add80dbc9af35b2e417136bfeb39b23032c968b643bb791108ff0f9429d80"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.220608 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-qftvm" event={"ID":"e503c39d-8eed-4db8-ad49-9a78f7c2bfa2","Type":"ContainerStarted","Data":"3fa9c794d8782624e0b58f70c28737984dcdec9f2a956aa1d0815ebfa1bd39d0"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.233454 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-fhpnh" event={"ID":"0443a624-6fd7-4b74-8e9c-7a1851459790","Type":"ContainerStarted","Data":"eab6b8afca572b9d2c917d6b30c199f13b2ab0c0420510b253110b6a163202a8"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.247207 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b477879bc-l8tj8" 
event={"ID":"d6b4df20-5cc4-49a7-b124-ac88e068f9a0","Type":"ContainerStarted","Data":"01d1403b80be98f368f1c40bb6d12950eaed386793fc9575f41265e611e1a2bf"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.257241 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-zc5vg" event={"ID":"1f44d497-1eb5-40cd-9026-30d623318705","Type":"ContainerStarted","Data":"994afd5381cfcb76a317d7bf818f6a876fe65164578bb86238c8034bb9dc7bbd"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.272664 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv" event={"ID":"5fa5a7fd-cee3-4bc6-8908-f7c403ed7f92","Type":"ContainerStarted","Data":"9cb74a8a8c6db83fde79af6d9907b5ef0297060f9594a2d89bf5eb55a406b1e6"} Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.272703 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv" event={"ID":"5fa5a7fd-cee3-4bc6-8908-f7c403ed7f92","Type":"ContainerStarted","Data":"bec6a7ff864dd92e37d09a784248f1adb6900210951dddcc21a391333098aad4"} Oct 08 06:50:25 crc kubenswrapper[4958]: E1008 06:50:25.277629 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:da5c3078d80878d66c616e6f8a0bb909f95d971cde2c612f96fded064113e182\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv" podUID="5fa5a7fd-cee3-4bc6-8908-f7c403ed7f92" Oct 08 06:50:25 crc kubenswrapper[4958]: I1008 06:50:25.297893 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b" podStartSLOduration=2.297876513 podStartE2EDuration="2.297876513s" podCreationTimestamp="2025-10-08 06:50:23 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:50:25.294548653 +0000 UTC m=+968.424241254" watchObservedRunningTime="2025-10-08 06:50:25.297876513 +0000 UTC m=+968.427569114" Oct 08 06:50:26 crc kubenswrapper[4958]: E1008 06:50:26.281888 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:64f57b2b59dea2bd9fae91490c5bec2687131884a049e6579819d9f951b877c6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn" podUID="0ce4d96e-f3ed-48b7-b871-e8113fb727a2" Oct 08 06:50:26 crc kubenswrapper[4958]: E1008 06:50:26.282503 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:637bb7b9ac308bc1e323391a3593b824f688090a856c83385814c17a571b1eed\\\"\"" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t" podUID="8140f9b7-f1fe-4151-a453-ff7990ee085b" Oct 08 06:50:26 crc kubenswrapper[4958]: E1008 06:50:26.282549 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8" podUID="8407a2bd-3444-4331-840d-c9729358f57a" Oct 08 06:50:26 crc kubenswrapper[4958]: E1008 06:50:26.282999 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p" podUID="be960fcf-8121-4145-8548-a1a46dc9f8bb" Oct 08 06:50:26 crc kubenswrapper[4958]: E1008 06:50:26.283246 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh" podUID="b60c40ec-ea4e-445c-8561-859cd7cd94de" Oct 08 06:50:26 crc kubenswrapper[4958]: E1008 06:50:26.284272 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:da5c3078d80878d66c616e6f8a0bb909f95d971cde2c612f96fded064113e182\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv" podUID="5fa5a7fd-cee3-4bc6-8908-f7c403ed7f92" Oct 08 06:50:26 crc kubenswrapper[4958]: E1008 06:50:26.287598 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:bf55026ba10b80e1e24733078bd204cef8766d21a305fd000707a1e3b30ff52e\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v" podUID="88ec1ec0-d81b-4b01-a299-7980c8fbb961" Oct 08 06:50:34 crc kubenswrapper[4958]: I1008 06:50:34.071254 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6bfd56c677-mtx9b" Oct 08 06:50:34 crc kubenswrapper[4958]: I1008 06:50:34.427193 4958 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5467f8988c-wklfs" event={"ID":"351db139-bb78-4975-a6c1-ceb4904347f0","Type":"ContainerStarted","Data":"dbc8811bd2538cf7f776e7801d059b6ccc1d4ec450b5dfe7e2f4225c8b0546fc"} Oct 08 06:50:34 crc kubenswrapper[4958]: I1008 06:50:34.441388 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f56ff694-6fjfx" event={"ID":"abbaa69c-2318-4087-a167-0bbe69928971","Type":"ContainerStarted","Data":"a234e827123503d9f9d1b34d718c60a0c62aeedf037732f149d03bc30b92d415"} Oct 08 06:50:34 crc kubenswrapper[4958]: I1008 06:50:34.458521 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b477879bc-l8tj8" event={"ID":"d6b4df20-5cc4-49a7-b124-ac88e068f9a0","Type":"ContainerStarted","Data":"ffa8d17b83cf221cc2d9714dc3bf5704ff16ba21e0ca3321485045336a4358cc"} Oct 08 06:50:34 crc kubenswrapper[4958]: I1008 06:50:34.472114 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-zc5vg" event={"ID":"1f44d497-1eb5-40cd-9026-30d623318705","Type":"ContainerStarted","Data":"0da956d90d1bdc76453d0db912e0cded68f06c9707edc11e534f4249ec9b8ab9"} Oct 08 06:50:34 crc kubenswrapper[4958]: I1008 06:50:34.502545 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-f8h2l" event={"ID":"ac064bd6-8d20-4224-b54f-e074bff95072","Type":"ContainerStarted","Data":"02eb77f688c3355871a342689636ddbb29f1d08d33ea60a18853215f184860a3"} Oct 08 06:50:34 crc kubenswrapper[4958]: I1008 06:50:34.502590 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-f8h2l" 
event={"ID":"ac064bd6-8d20-4224-b54f-e074bff95072","Type":"ContainerStarted","Data":"517f0f89521e36fda1ad70cf14e447e1019c346c7f598e125da65399b3f1622d"} Oct 08 06:50:34 crc kubenswrapper[4958]: I1008 06:50:34.503561 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-f8h2l" Oct 08 06:50:34 crc kubenswrapper[4958]: I1008 06:50:34.517450 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-lcqsc" event={"ID":"fbdb6830-b324-47f5-8022-241051600f27","Type":"ContainerStarted","Data":"ff0082963c48c4fbc176f4059201c2c7ae76bc352e6336d33ab49edb81eea518"} Oct 08 06:50:34 crc kubenswrapper[4958]: I1008 06:50:34.544125 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-qftvm" event={"ID":"e503c39d-8eed-4db8-ad49-9a78f7c2bfa2","Type":"ContainerStarted","Data":"2590492b8caa5471df6e22bd74f98292005c6bcc6b532613090b0590078bd9cb"} Oct 08 06:50:34 crc kubenswrapper[4958]: I1008 06:50:34.549030 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-f8h2l" podStartSLOduration=2.694681094 podStartE2EDuration="12.549014732s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:23.624458639 +0000 UTC m=+966.754151240" lastFinishedPulling="2025-10-08 06:50:33.478792237 +0000 UTC m=+976.608484878" observedRunningTime="2025-10-08 06:50:34.537158672 +0000 UTC m=+977.666851273" watchObservedRunningTime="2025-10-08 06:50:34.549014732 +0000 UTC m=+977.678707333" Oct 08 06:50:34 crc kubenswrapper[4958]: I1008 06:50:34.563922 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-fhpnh" 
event={"ID":"0443a624-6fd7-4b74-8e9c-7a1851459790","Type":"ContainerStarted","Data":"821052bf13faa4f5986397c3dceec2625e75063e6cdd9929c1bd302211ec85ce"} Oct 08 06:50:34 crc kubenswrapper[4958]: I1008 06:50:34.596432 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-zh97x" event={"ID":"37de4c94-71ef-4563-915b-468370179903","Type":"ContainerStarted","Data":"6e4afea074a8d1e6d19aa46afaef40c3e4d94ef1216e596630385d7127b22a64"} Oct 08 06:50:34 crc kubenswrapper[4958]: I1008 06:50:34.620642 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5b84cc7657-v9mbt" event={"ID":"60681278-f71a-4ec0-a572-e6c05783791c","Type":"ContainerStarted","Data":"c7aff0fc95e7268a5f8f9535b843e00c26b55a1c52fca1d4550a0ae3aa3c7609"} Oct 08 06:50:34 crc kubenswrapper[4958]: I1008 06:50:34.625477 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54d485fd9-98gw2" event={"ID":"b71b7f35-8fba-4da1-83f9-4b7c08b15990","Type":"ContainerStarted","Data":"13a585dee4e7462686d1d41e36ed63c899e3fefedb0f17eb360f282b875ddc9a"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.666172 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5b84cc7657-v9mbt" event={"ID":"60681278-f71a-4ec0-a572-e6c05783791c","Type":"ContainerStarted","Data":"3d04768dacfdc95825bb5167d737b8b04e9b82fd1d1346a295080308f2f7085c"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.666955 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-5b84cc7657-v9mbt" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.672056 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-fd648f65-7l7c8" 
event={"ID":"84708733-8897-4752-8533-5463ce01d265","Type":"ContainerStarted","Data":"bed9e93cc84ecb01d9848a026ca79ceb51083cff952df714aada51c29bd0c69e"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.672093 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-fd648f65-7l7c8" event={"ID":"84708733-8897-4752-8533-5463ce01d265","Type":"ContainerStarted","Data":"788ee41413e4ed17e1dc5bc7d5cfa717bb709429494cc0492e27124704bbf1de"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.672121 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-fd648f65-7l7c8" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.674227 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-lcqsc" event={"ID":"fbdb6830-b324-47f5-8022-241051600f27","Type":"ContainerStarted","Data":"21e0925c3402f526b059c5c78710b2e4f5791131efc2f70e5f8e5eebcfacc444"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.674295 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-lcqsc" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.675893 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5467f8988c-wklfs" event={"ID":"351db139-bb78-4975-a6c1-ceb4904347f0","Type":"ContainerStarted","Data":"aac0b88dead2e158c25f6bb99c3f0a707730d99157ba36840288f31709d8d8c0"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.675982 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5467f8988c-wklfs" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.677149 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-64f56ff694-6fjfx" event={"ID":"abbaa69c-2318-4087-a167-0bbe69928971","Type":"ContainerStarted","Data":"c0129a00363d8959aae7824e208046feb6158a23bf09a9f4b72ad274e56efb92"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.677535 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-64f56ff694-6fjfx" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.678694 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-zh97x" event={"ID":"37de4c94-71ef-4563-915b-468370179903","Type":"ContainerStarted","Data":"3553e3133ca1d9e08e9ae2ac8091a0b0f7c874e3a047aa4569be19317f26da03"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.679069 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-zh97x" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.681202 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-qftvm" event={"ID":"e503c39d-8eed-4db8-ad49-9a78f7c2bfa2","Type":"ContainerStarted","Data":"93e6ae5a3a58b0d594e81d6e2b75a48b70531d138a500a7b1813049444fb0c2d"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.681324 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-qftvm" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.682472 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-7ccfc8cf49-wmvbd" event={"ID":"8890b7c7-8aba-485e-85db-ee154714c358","Type":"ContainerStarted","Data":"be9041ce54a2ba63728ccc5bbdc8a71254c0064257689099bc88d3ecf739fb03"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.682493 4958 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-7ccfc8cf49-wmvbd" event={"ID":"8890b7c7-8aba-485e-85db-ee154714c358","Type":"ContainerStarted","Data":"5da3bfdf91a73e657baac27eead3cf0dceb29a5e0b62361ddf76f32072760bf9"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.682847 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-7ccfc8cf49-wmvbd" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.685589 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b477879bc-l8tj8" event={"ID":"d6b4df20-5cc4-49a7-b124-ac88e068f9a0","Type":"ContainerStarted","Data":"0b39006d879098c49288ac113fc9cede95b29fe41556ed46943e058c973fb877"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.685637 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-zc5vg" event={"ID":"1f44d497-1eb5-40cd-9026-30d623318705","Type":"ContainerStarted","Data":"c770ae33d18071b84460bbe520022d98b38b30f1fac3b0882fbfc2bde5b3ecbc"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.685656 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b477879bc-l8tj8" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.685667 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-zc5vg" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.686505 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-fhpnh" event={"ID":"0443a624-6fd7-4b74-8e9c-7a1851459790","Type":"ContainerStarted","Data":"f5a15d4be0ed9c5a21c73571bdf7f3abc532df1a9ab3257149ea97a62dd040f0"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.686901 4958 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-fhpnh" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.688104 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm" event={"ID":"28d81d52-ba79-4c62-95a7-f5a1e48b8dda","Type":"ContainerStarted","Data":"c293d90a3f4557feb2de36b837cc9aaf6184727083f1fe47c190336ab30b0e1c"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.688128 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm" event={"ID":"28d81d52-ba79-4c62-95a7-f5a1e48b8dda","Type":"ContainerStarted","Data":"bee49e70610780a203284254c5ae671ee78f12101f640f985379d3bd3e7f702f"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.688514 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.690453 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54d485fd9-98gw2" event={"ID":"b71b7f35-8fba-4da1-83f9-4b7c08b15990","Type":"ContainerStarted","Data":"1093d8990d8cd64c91e5d165178104f1167dbe63a0f284cab3bed5bf4e8d9427"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.690610 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-54d485fd9-98gw2" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.692200 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-j85xs" event={"ID":"48aee074-1139-426c-a04c-9f65ccb3ccde","Type":"ContainerStarted","Data":"db17a9881d35e111e0a8eaf40292a46cbb27e1e8845b87b73a0b1ce79fdfe762"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 
06:50:35.692882 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-j85xs" event={"ID":"48aee074-1139-426c-a04c-9f65ccb3ccde","Type":"ContainerStarted","Data":"bc0269bce6c9b5f58fceccf3e77f751a7e644e716fc4fa11655134f16ab6a48d"} Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.706244 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-5b84cc7657-v9mbt" podStartSLOduration=4.652963965 podStartE2EDuration="13.706228124s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.422904118 +0000 UTC m=+967.552596719" lastFinishedPulling="2025-10-08 06:50:33.476168237 +0000 UTC m=+976.605860878" observedRunningTime="2025-10-08 06:50:35.696762589 +0000 UTC m=+978.826455200" watchObservedRunningTime="2025-10-08 06:50:35.706228124 +0000 UTC m=+978.835920725" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.727709 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-64f56ff694-6fjfx" podStartSLOduration=3.666336858 podStartE2EDuration="13.727693774s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:23.416539087 +0000 UTC m=+966.546231688" lastFinishedPulling="2025-10-08 06:50:33.477895963 +0000 UTC m=+976.607588604" observedRunningTime="2025-10-08 06:50:35.721479976 +0000 UTC m=+978.851172577" watchObservedRunningTime="2025-10-08 06:50:35.727693774 +0000 UTC m=+978.857386375" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.739684 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-fhpnh" podStartSLOduration=4.53970572 podStartE2EDuration="13.739668047s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 
06:50:24.299610451 +0000 UTC m=+967.429303052" lastFinishedPulling="2025-10-08 06:50:33.499572768 +0000 UTC m=+976.629265379" observedRunningTime="2025-10-08 06:50:35.739530033 +0000 UTC m=+978.869222654" watchObservedRunningTime="2025-10-08 06:50:35.739668047 +0000 UTC m=+978.869360648" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.758635 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-54d485fd9-98gw2" podStartSLOduration=4.605894466 podStartE2EDuration="13.758619328s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.38035017 +0000 UTC m=+967.510042771" lastFinishedPulling="2025-10-08 06:50:33.533075022 +0000 UTC m=+976.662767633" observedRunningTime="2025-10-08 06:50:35.755559626 +0000 UTC m=+978.885252227" watchObservedRunningTime="2025-10-08 06:50:35.758619328 +0000 UTC m=+978.888311929" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.799376 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5467f8988c-wklfs" podStartSLOduration=4.620199072 podStartE2EDuration="13.799358668s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.299612471 +0000 UTC m=+967.429305072" lastFinishedPulling="2025-10-08 06:50:33.478772017 +0000 UTC m=+976.608464668" observedRunningTime="2025-10-08 06:50:35.78608573 +0000 UTC m=+978.915778331" watchObservedRunningTime="2025-10-08 06:50:35.799358668 +0000 UTC m=+978.929051269" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.800885 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-7ccfc8cf49-wmvbd" podStartSLOduration=3.959386517 podStartE2EDuration="13.800881089s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:23.744433277 +0000 UTC 
m=+966.874125868" lastFinishedPulling="2025-10-08 06:50:33.585927809 +0000 UTC m=+976.715620440" observedRunningTime="2025-10-08 06:50:35.797725754 +0000 UTC m=+978.927418355" watchObservedRunningTime="2025-10-08 06:50:35.800881089 +0000 UTC m=+978.930573690" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.825861 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-zh97x" podStartSLOduration=3.850354014 podStartE2EDuration="13.825847103s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:23.557732008 +0000 UTC m=+966.687424609" lastFinishedPulling="2025-10-08 06:50:33.533225067 +0000 UTC m=+976.662917698" observedRunningTime="2025-10-08 06:50:35.821750652 +0000 UTC m=+978.951443253" watchObservedRunningTime="2025-10-08 06:50:35.825847103 +0000 UTC m=+978.955539704" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.841040 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-zc5vg" podStartSLOduration=4.603583343 podStartE2EDuration="13.841029302s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.294699648 +0000 UTC m=+967.424392249" lastFinishedPulling="2025-10-08 06:50:33.532145567 +0000 UTC m=+976.661838208" observedRunningTime="2025-10-08 06:50:35.838896695 +0000 UTC m=+978.968589296" watchObservedRunningTime="2025-10-08 06:50:35.841029302 +0000 UTC m=+978.970721903" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.855576 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-lcqsc" podStartSLOduration=4.782264416 podStartE2EDuration="13.855559965s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.431919572 +0000 UTC m=+967.561612173" 
lastFinishedPulling="2025-10-08 06:50:33.505215091 +0000 UTC m=+976.634907722" observedRunningTime="2025-10-08 06:50:35.853788117 +0000 UTC m=+978.983480718" watchObservedRunningTime="2025-10-08 06:50:35.855559965 +0000 UTC m=+978.985252566" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.870811 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm" podStartSLOduration=4.575235888 podStartE2EDuration="13.870794636s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.235245353 +0000 UTC m=+967.364937954" lastFinishedPulling="2025-10-08 06:50:33.530804081 +0000 UTC m=+976.660496702" observedRunningTime="2025-10-08 06:50:35.865460442 +0000 UTC m=+978.995153043" watchObservedRunningTime="2025-10-08 06:50:35.870794636 +0000 UTC m=+979.000487237" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.885822 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-fd648f65-7l7c8" podStartSLOduration=4.595777343 podStartE2EDuration="13.885807291s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.285842689 +0000 UTC m=+967.415535290" lastFinishedPulling="2025-10-08 06:50:33.575872607 +0000 UTC m=+976.705565238" observedRunningTime="2025-10-08 06:50:35.880893318 +0000 UTC m=+979.010585919" watchObservedRunningTime="2025-10-08 06:50:35.885807291 +0000 UTC m=+979.015499892" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.899412 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-qftvm" podStartSLOduration=4.659255506 podStartE2EDuration="13.899392388s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.288531042 +0000 UTC m=+967.418223643" lastFinishedPulling="2025-10-08 
06:50:33.528667914 +0000 UTC m=+976.658360525" observedRunningTime="2025-10-08 06:50:35.894092635 +0000 UTC m=+979.023785236" watchObservedRunningTime="2025-10-08 06:50:35.899392388 +0000 UTC m=+979.029084989" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.910626 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-j85xs" podStartSLOduration=4.771137386 podStartE2EDuration="13.910612071s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.436592348 +0000 UTC m=+967.566284949" lastFinishedPulling="2025-10-08 06:50:33.576067023 +0000 UTC m=+976.705759634" observedRunningTime="2025-10-08 06:50:35.90873617 +0000 UTC m=+979.038428761" watchObservedRunningTime="2025-10-08 06:50:35.910612071 +0000 UTC m=+979.040304672" Oct 08 06:50:35 crc kubenswrapper[4958]: I1008 06:50:35.925362 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b477879bc-l8tj8" podStartSLOduration=4.682606666 podStartE2EDuration="13.925345558s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.289405165 +0000 UTC m=+967.419097766" lastFinishedPulling="2025-10-08 06:50:33.532144037 +0000 UTC m=+976.661836658" observedRunningTime="2025-10-08 06:50:35.92209073 +0000 UTC m=+979.051783321" watchObservedRunningTime="2025-10-08 06:50:35.925345558 +0000 UTC m=+979.055038159" Oct 08 06:50:36 crc kubenswrapper[4958]: I1008 06:50:36.702816 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-j85xs" Oct 08 06:50:39 crc kubenswrapper[4958]: I1008 06:50:39.728892 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv" 
event={"ID":"5fa5a7fd-cee3-4bc6-8908-f7c403ed7f92","Type":"ContainerStarted","Data":"3436eabebd63df939cd9ac7872cf4af4a16e2e3f91b93d3ed093c86efd612f1b"} Oct 08 06:50:39 crc kubenswrapper[4958]: I1008 06:50:39.729602 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv" Oct 08 06:50:39 crc kubenswrapper[4958]: I1008 06:50:39.750093 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv" podStartSLOduration=3.002632465 podStartE2EDuration="17.750075855s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.490067121 +0000 UTC m=+967.619759722" lastFinishedPulling="2025-10-08 06:50:39.237510471 +0000 UTC m=+982.367203112" observedRunningTime="2025-10-08 06:50:39.748570804 +0000 UTC m=+982.878263505" watchObservedRunningTime="2025-10-08 06:50:39.750075855 +0000 UTC m=+982.879768456" Oct 08 06:50:41 crc kubenswrapper[4958]: I1008 06:50:41.753179 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v" event={"ID":"88ec1ec0-d81b-4b01-a299-7980c8fbb961","Type":"ContainerStarted","Data":"625d2fb6e18e2466475b533d003cf4d3078f5c91ce550dbf48c6953a7d27c706"} Oct 08 06:50:41 crc kubenswrapper[4958]: I1008 06:50:41.753606 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v" Oct 08 06:50:41 crc kubenswrapper[4958]: I1008 06:50:41.757164 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t" event={"ID":"8140f9b7-f1fe-4151-a453-ff7990ee085b","Type":"ContainerStarted","Data":"e43d80bb0482c5624171122044f6b32fb82c8e300b11d9661ddcc58b2fd500f8"} Oct 08 06:50:41 crc kubenswrapper[4958]: I1008 06:50:41.757366 4958 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t" Oct 08 06:50:41 crc kubenswrapper[4958]: I1008 06:50:41.771593 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v" podStartSLOduration=2.90171524 podStartE2EDuration="19.771572432s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.433129304 +0000 UTC m=+967.562821905" lastFinishedPulling="2025-10-08 06:50:41.302986496 +0000 UTC m=+984.432679097" observedRunningTime="2025-10-08 06:50:41.767326728 +0000 UTC m=+984.897019329" watchObservedRunningTime="2025-10-08 06:50:41.771572432 +0000 UTC m=+984.901265043" Oct 08 06:50:41 crc kubenswrapper[4958]: I1008 06:50:41.783400 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t" podStartSLOduration=2.918855803 podStartE2EDuration="19.783380691s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.433092963 +0000 UTC m=+967.562785554" lastFinishedPulling="2025-10-08 06:50:41.297617841 +0000 UTC m=+984.427310442" observedRunningTime="2025-10-08 06:50:41.783104284 +0000 UTC m=+984.912796895" watchObservedRunningTime="2025-10-08 06:50:41.783380691 +0000 UTC m=+984.913073292" Oct 08 06:50:42 crc kubenswrapper[4958]: I1008 06:50:42.774190 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh" event={"ID":"b60c40ec-ea4e-445c-8561-859cd7cd94de","Type":"ContainerStarted","Data":"3ecbba8b37c194ba5e1f65dcf90986f37c94032d14f17c6da57105b81ac49e46"} Oct 08 06:50:42 crc kubenswrapper[4958]: I1008 06:50:42.775765 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh" Oct 08 06:50:42 crc kubenswrapper[4958]: I1008 06:50:42.793286 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh" podStartSLOduration=3.961968907 podStartE2EDuration="20.793252847s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.48853306 +0000 UTC m=+967.618225661" lastFinishedPulling="2025-10-08 06:50:41.319817 +0000 UTC m=+984.449509601" observedRunningTime="2025-10-08 06:50:42.791985273 +0000 UTC m=+985.921677874" watchObservedRunningTime="2025-10-08 06:50:42.793252847 +0000 UTC m=+985.922945508" Oct 08 06:50:42 crc kubenswrapper[4958]: I1008 06:50:42.795116 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-64f56ff694-6fjfx" Oct 08 06:50:42 crc kubenswrapper[4958]: I1008 06:50:42.812555 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-zh97x" Oct 08 06:50:42 crc kubenswrapper[4958]: I1008 06:50:42.829411 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-f8h2l" Oct 08 06:50:42 crc kubenswrapper[4958]: I1008 06:50:42.860685 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-fd648f65-7l7c8" Oct 08 06:50:42 crc kubenswrapper[4958]: I1008 06:50:42.885137 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-7ccfc8cf49-wmvbd" Oct 08 06:50:42 crc kubenswrapper[4958]: I1008 06:50:42.906260 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/horizon-operator-controller-manager-5b477879bc-l8tj8" Oct 08 06:50:42 crc kubenswrapper[4958]: I1008 06:50:42.941549 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-6pggm" Oct 08 06:50:42 crc kubenswrapper[4958]: I1008 06:50:42.969501 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5467f8988c-wklfs" Oct 08 06:50:42 crc kubenswrapper[4958]: I1008 06:50:42.989848 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-5b84cc7657-v9mbt" Oct 08 06:50:43 crc kubenswrapper[4958]: I1008 06:50:43.086897 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-qftvm" Oct 08 06:50:43 crc kubenswrapper[4958]: I1008 06:50:43.096308 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-fhpnh" Oct 08 06:50:43 crc kubenswrapper[4958]: I1008 06:50:43.112314 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-zc5vg" Oct 08 06:50:43 crc kubenswrapper[4958]: I1008 06:50:43.190705 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-54d485fd9-98gw2" Oct 08 06:50:43 crc kubenswrapper[4958]: I1008 06:50:43.204023 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-j85xs" Oct 08 06:50:43 crc kubenswrapper[4958]: I1008 06:50:43.346632 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-lcqsc" Oct 08 06:50:43 crc kubenswrapper[4958]: I1008 06:50:43.782249 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8" event={"ID":"8407a2bd-3444-4331-840d-c9729358f57a","Type":"ContainerStarted","Data":"aed2ac657983d590f2da87c9c8fb41839b651f2fa12ca8de3fe7ba813cccfc39"} Oct 08 06:50:43 crc kubenswrapper[4958]: I1008 06:50:43.783840 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p" event={"ID":"be960fcf-8121-4145-8548-a1a46dc9f8bb","Type":"ContainerStarted","Data":"02910656aa816822125cb3535bbe863d1f3d5d188e7c6471cb284dc40357d7e3"} Oct 08 06:50:43 crc kubenswrapper[4958]: I1008 06:50:43.784266 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p" Oct 08 06:50:43 crc kubenswrapper[4958]: I1008 06:50:43.785378 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn" event={"ID":"0ce4d96e-f3ed-48b7-b871-e8113fb727a2","Type":"ContainerStarted","Data":"fd03ade981e65e5619443d65f6ce08b43db02715e685d570e8b36ffc791885ba"} Oct 08 06:50:43 crc kubenswrapper[4958]: I1008 06:50:43.805787 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8" podStartSLOduration=1.998350208 podStartE2EDuration="20.805772173s" podCreationTimestamp="2025-10-08 06:50:23 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.490085751 +0000 UTC m=+967.619778352" lastFinishedPulling="2025-10-08 06:50:43.297507706 +0000 UTC m=+986.427200317" observedRunningTime="2025-10-08 06:50:43.801921329 +0000 UTC m=+986.931613940" watchObservedRunningTime="2025-10-08 06:50:43.805772173 +0000 UTC 
m=+986.935464794" Oct 08 06:50:43 crc kubenswrapper[4958]: I1008 06:50:43.841099 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p" podStartSLOduration=3.070692491 podStartE2EDuration="21.841084476s" podCreationTimestamp="2025-10-08 06:50:22 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.515852817 +0000 UTC m=+967.645545418" lastFinishedPulling="2025-10-08 06:50:43.286244802 +0000 UTC m=+986.415937403" observedRunningTime="2025-10-08 06:50:43.836159853 +0000 UTC m=+986.965852454" watchObservedRunningTime="2025-10-08 06:50:43.841084476 +0000 UTC m=+986.970777077" Oct 08 06:50:53 crc kubenswrapper[4958]: I1008 06:50:53.173620 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-f58xh" Oct 08 06:50:53 crc kubenswrapper[4958]: I1008 06:50:53.185427 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-xbgdv" Oct 08 06:50:53 crc kubenswrapper[4958]: I1008 06:50:53.197136 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn" podStartSLOduration=11.403664399 podStartE2EDuration="30.197104777s" podCreationTimestamp="2025-10-08 06:50:23 +0000 UTC" firstStartedPulling="2025-10-08 06:50:24.488742285 +0000 UTC m=+967.618434886" lastFinishedPulling="2025-10-08 06:50:43.282182643 +0000 UTC m=+986.411875264" observedRunningTime="2025-10-08 06:50:43.860338626 +0000 UTC m=+986.990031227" watchObservedRunningTime="2025-10-08 06:50:53.197104777 +0000 UTC m=+996.326797418" Oct 08 06:50:53 crc kubenswrapper[4958]: I1008 06:50:53.305118 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pgq5t" Oct 08 06:50:53 
crc kubenswrapper[4958]: I1008 06:50:53.321893 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-p9q6v" Oct 08 06:50:53 crc kubenswrapper[4958]: I1008 06:50:53.456784 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn" Oct 08 06:50:53 crc kubenswrapper[4958]: I1008 06:50:53.459597 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-phmxn" Oct 08 06:50:53 crc kubenswrapper[4958]: I1008 06:50:53.819056 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6875c66686z426p" Oct 08 06:51:06 crc kubenswrapper[4958]: I1008 06:51:06.844601 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:51:06 crc kubenswrapper[4958]: I1008 06:51:06.845245 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.839791 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-wwpgx"] Oct 08 06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.841715 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-wwpgx" Oct 08 06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.845750 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-c6kqg" Oct 08 06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.846003 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 08 06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.846319 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 08 06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.846539 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 08 06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.846714 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-wwpgx"] Oct 08 06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.879372 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-zxfj5"] Oct 08 06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.881313 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" Oct 08 06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.883055 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 08 06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.897567 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-zxfj5"] Oct 08 06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.942689 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16369a68-894c-4126-a90a-b41e2e66a590-dns-svc\") pod \"dnsmasq-dns-758b79db4c-zxfj5\" (UID: \"16369a68-894c-4126-a90a-b41e2e66a590\") " pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" Oct 08 06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.942752 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16369a68-894c-4126-a90a-b41e2e66a590-config\") pod \"dnsmasq-dns-758b79db4c-zxfj5\" (UID: \"16369a68-894c-4126-a90a-b41e2e66a590\") " pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" Oct 08 06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.942809 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4k7q\" (UniqueName: \"kubernetes.io/projected/16369a68-894c-4126-a90a-b41e2e66a590-kube-api-access-f4k7q\") pod \"dnsmasq-dns-758b79db4c-zxfj5\" (UID: \"16369a68-894c-4126-a90a-b41e2e66a590\") " pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" Oct 08 06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.942836 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24788c52-cff2-4a7e-9b4f-fab66b1ddaa2-config\") pod \"dnsmasq-dns-7bfcb9d745-wwpgx\" (UID: \"24788c52-cff2-4a7e-9b4f-fab66b1ddaa2\") " pod="openstack/dnsmasq-dns-7bfcb9d745-wwpgx" Oct 08 
06:51:09 crc kubenswrapper[4958]: I1008 06:51:09.942894 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8md7g\" (UniqueName: \"kubernetes.io/projected/24788c52-cff2-4a7e-9b4f-fab66b1ddaa2-kube-api-access-8md7g\") pod \"dnsmasq-dns-7bfcb9d745-wwpgx\" (UID: \"24788c52-cff2-4a7e-9b4f-fab66b1ddaa2\") " pod="openstack/dnsmasq-dns-7bfcb9d745-wwpgx" Oct 08 06:51:10 crc kubenswrapper[4958]: I1008 06:51:10.043845 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4k7q\" (UniqueName: \"kubernetes.io/projected/16369a68-894c-4126-a90a-b41e2e66a590-kube-api-access-f4k7q\") pod \"dnsmasq-dns-758b79db4c-zxfj5\" (UID: \"16369a68-894c-4126-a90a-b41e2e66a590\") " pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" Oct 08 06:51:10 crc kubenswrapper[4958]: I1008 06:51:10.043897 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24788c52-cff2-4a7e-9b4f-fab66b1ddaa2-config\") pod \"dnsmasq-dns-7bfcb9d745-wwpgx\" (UID: \"24788c52-cff2-4a7e-9b4f-fab66b1ddaa2\") " pod="openstack/dnsmasq-dns-7bfcb9d745-wwpgx" Oct 08 06:51:10 crc kubenswrapper[4958]: I1008 06:51:10.044777 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24788c52-cff2-4a7e-9b4f-fab66b1ddaa2-config\") pod \"dnsmasq-dns-7bfcb9d745-wwpgx\" (UID: \"24788c52-cff2-4a7e-9b4f-fab66b1ddaa2\") " pod="openstack/dnsmasq-dns-7bfcb9d745-wwpgx" Oct 08 06:51:10 crc kubenswrapper[4958]: I1008 06:51:10.045651 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8md7g\" (UniqueName: \"kubernetes.io/projected/24788c52-cff2-4a7e-9b4f-fab66b1ddaa2-kube-api-access-8md7g\") pod \"dnsmasq-dns-7bfcb9d745-wwpgx\" (UID: \"24788c52-cff2-4a7e-9b4f-fab66b1ddaa2\") " pod="openstack/dnsmasq-dns-7bfcb9d745-wwpgx" Oct 08 06:51:10 crc kubenswrapper[4958]: 
I1008 06:51:10.045686 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16369a68-894c-4126-a90a-b41e2e66a590-dns-svc\") pod \"dnsmasq-dns-758b79db4c-zxfj5\" (UID: \"16369a68-894c-4126-a90a-b41e2e66a590\") " pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" Oct 08 06:51:10 crc kubenswrapper[4958]: I1008 06:51:10.045716 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16369a68-894c-4126-a90a-b41e2e66a590-config\") pod \"dnsmasq-dns-758b79db4c-zxfj5\" (UID: \"16369a68-894c-4126-a90a-b41e2e66a590\") " pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" Oct 08 06:51:10 crc kubenswrapper[4958]: I1008 06:51:10.046333 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16369a68-894c-4126-a90a-b41e2e66a590-config\") pod \"dnsmasq-dns-758b79db4c-zxfj5\" (UID: \"16369a68-894c-4126-a90a-b41e2e66a590\") " pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" Oct 08 06:51:10 crc kubenswrapper[4958]: I1008 06:51:10.046424 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16369a68-894c-4126-a90a-b41e2e66a590-dns-svc\") pod \"dnsmasq-dns-758b79db4c-zxfj5\" (UID: \"16369a68-894c-4126-a90a-b41e2e66a590\") " pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" Oct 08 06:51:10 crc kubenswrapper[4958]: I1008 06:51:10.066543 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4k7q\" (UniqueName: \"kubernetes.io/projected/16369a68-894c-4126-a90a-b41e2e66a590-kube-api-access-f4k7q\") pod \"dnsmasq-dns-758b79db4c-zxfj5\" (UID: \"16369a68-894c-4126-a90a-b41e2e66a590\") " pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" Oct 08 06:51:10 crc kubenswrapper[4958]: I1008 06:51:10.069936 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8md7g\" 
(UniqueName: \"kubernetes.io/projected/24788c52-cff2-4a7e-9b4f-fab66b1ddaa2-kube-api-access-8md7g\") pod \"dnsmasq-dns-7bfcb9d745-wwpgx\" (UID: \"24788c52-cff2-4a7e-9b4f-fab66b1ddaa2\") " pod="openstack/dnsmasq-dns-7bfcb9d745-wwpgx" Oct 08 06:51:10 crc kubenswrapper[4958]: I1008 06:51:10.160510 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-wwpgx" Oct 08 06:51:10 crc kubenswrapper[4958]: I1008 06:51:10.193864 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" Oct 08 06:51:10 crc kubenswrapper[4958]: I1008 06:51:10.429797 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-wwpgx"] Oct 08 06:51:10 crc kubenswrapper[4958]: I1008 06:51:10.487634 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-zxfj5"] Oct 08 06:51:10 crc kubenswrapper[4958]: W1008 06:51:10.492412 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16369a68_894c_4126_a90a_b41e2e66a590.slice/crio-c0c716c14d5d86b15a829472468e16ba543ef6bab032b9ed2bc2ad5d817650ce WatchSource:0}: Error finding container c0c716c14d5d86b15a829472468e16ba543ef6bab032b9ed2bc2ad5d817650ce: Status 404 returned error can't find the container with id c0c716c14d5d86b15a829472468e16ba543ef6bab032b9ed2bc2ad5d817650ce Oct 08 06:51:11 crc kubenswrapper[4958]: I1008 06:51:11.126174 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfcb9d745-wwpgx" event={"ID":"24788c52-cff2-4a7e-9b4f-fab66b1ddaa2","Type":"ContainerStarted","Data":"67fab018229103c432fe6c75790d0739b01b7545f13f72cca8ac81171da2e198"} Oct 08 06:51:11 crc kubenswrapper[4958]: I1008 06:51:11.147973 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" 
event={"ID":"16369a68-894c-4126-a90a-b41e2e66a590","Type":"ContainerStarted","Data":"c0c716c14d5d86b15a829472468e16ba543ef6bab032b9ed2bc2ad5d817650ce"} Oct 08 06:51:11 crc kubenswrapper[4958]: I1008 06:51:11.732267 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-zxfj5"] Oct 08 06:51:11 crc kubenswrapper[4958]: I1008 06:51:11.764054 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8575fc99d7-rt64c"] Oct 08 06:51:11 crc kubenswrapper[4958]: I1008 06:51:11.766106 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" Oct 08 06:51:11 crc kubenswrapper[4958]: I1008 06:51:11.779200 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8575fc99d7-rt64c"] Oct 08 06:51:11 crc kubenswrapper[4958]: I1008 06:51:11.877794 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-dns-svc\") pod \"dnsmasq-dns-8575fc99d7-rt64c\" (UID: \"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73\") " pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" Oct 08 06:51:11 crc kubenswrapper[4958]: I1008 06:51:11.877875 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzfrh\" (UniqueName: \"kubernetes.io/projected/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-kube-api-access-kzfrh\") pod \"dnsmasq-dns-8575fc99d7-rt64c\" (UID: \"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73\") " pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" Oct 08 06:51:11 crc kubenswrapper[4958]: I1008 06:51:11.877894 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-config\") pod \"dnsmasq-dns-8575fc99d7-rt64c\" (UID: \"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73\") " 
pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" Oct 08 06:51:11 crc kubenswrapper[4958]: I1008 06:51:11.979727 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-dns-svc\") pod \"dnsmasq-dns-8575fc99d7-rt64c\" (UID: \"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73\") " pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" Oct 08 06:51:11 crc kubenswrapper[4958]: I1008 06:51:11.979812 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzfrh\" (UniqueName: \"kubernetes.io/projected/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-kube-api-access-kzfrh\") pod \"dnsmasq-dns-8575fc99d7-rt64c\" (UID: \"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73\") " pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" Oct 08 06:51:11 crc kubenswrapper[4958]: I1008 06:51:11.979831 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-config\") pod \"dnsmasq-dns-8575fc99d7-rt64c\" (UID: \"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73\") " pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" Oct 08 06:51:11 crc kubenswrapper[4958]: I1008 06:51:11.980864 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-dns-svc\") pod \"dnsmasq-dns-8575fc99d7-rt64c\" (UID: \"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73\") " pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" Oct 08 06:51:11 crc kubenswrapper[4958]: I1008 06:51:11.981276 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-config\") pod \"dnsmasq-dns-8575fc99d7-rt64c\" (UID: \"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73\") " pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" Oct 08 06:51:11 crc kubenswrapper[4958]: I1008 06:51:11.997028 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzfrh\" (UniqueName: \"kubernetes.io/projected/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-kube-api-access-kzfrh\") pod \"dnsmasq-dns-8575fc99d7-rt64c\" (UID: \"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73\") " pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.095041 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.357763 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8575fc99d7-rt64c"] Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.593902 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-wwpgx"] Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.626814 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77597f887-jzfqg"] Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.628145 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77597f887-jzfqg" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.638305 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77597f887-jzfqg"] Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.790739 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-config\") pod \"dnsmasq-dns-77597f887-jzfqg\" (UID: \"cbc98b67-c003-4e9e-ad52-f4a5b66106fd\") " pod="openstack/dnsmasq-dns-77597f887-jzfqg" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.790865 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9r9f\" (UniqueName: \"kubernetes.io/projected/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-kube-api-access-j9r9f\") pod \"dnsmasq-dns-77597f887-jzfqg\" (UID: \"cbc98b67-c003-4e9e-ad52-f4a5b66106fd\") " pod="openstack/dnsmasq-dns-77597f887-jzfqg" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.790911 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-dns-svc\") pod \"dnsmasq-dns-77597f887-jzfqg\" (UID: \"cbc98b67-c003-4e9e-ad52-f4a5b66106fd\") " pod="openstack/dnsmasq-dns-77597f887-jzfqg" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.892158 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-config\") pod \"dnsmasq-dns-77597f887-jzfqg\" (UID: \"cbc98b67-c003-4e9e-ad52-f4a5b66106fd\") " pod="openstack/dnsmasq-dns-77597f887-jzfqg" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.892284 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9r9f\" (UniqueName: 
\"kubernetes.io/projected/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-kube-api-access-j9r9f\") pod \"dnsmasq-dns-77597f887-jzfqg\" (UID: \"cbc98b67-c003-4e9e-ad52-f4a5b66106fd\") " pod="openstack/dnsmasq-dns-77597f887-jzfqg" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.892351 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-dns-svc\") pod \"dnsmasq-dns-77597f887-jzfqg\" (UID: \"cbc98b67-c003-4e9e-ad52-f4a5b66106fd\") " pod="openstack/dnsmasq-dns-77597f887-jzfqg" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.893638 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-dns-svc\") pod \"dnsmasq-dns-77597f887-jzfqg\" (UID: \"cbc98b67-c003-4e9e-ad52-f4a5b66106fd\") " pod="openstack/dnsmasq-dns-77597f887-jzfqg" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.893833 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-config\") pod \"dnsmasq-dns-77597f887-jzfqg\" (UID: \"cbc98b67-c003-4e9e-ad52-f4a5b66106fd\") " pod="openstack/dnsmasq-dns-77597f887-jzfqg" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.894089 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.896458 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.898801 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.899072 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-q9mjx" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.899174 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.899293 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.899384 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.899511 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.899624 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.903340 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.925797 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9r9f\" (UniqueName: \"kubernetes.io/projected/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-kube-api-access-j9r9f\") pod \"dnsmasq-dns-77597f887-jzfqg\" (UID: \"cbc98b67-c003-4e9e-ad52-f4a5b66106fd\") " pod="openstack/dnsmasq-dns-77597f887-jzfqg" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.957097 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77597f887-jzfqg" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.993811 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.993854 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/442b1534-27bc-4d6d-be46-1ea5689c290f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.993876 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.993892 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8jlm\" (UniqueName: \"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-kube-api-access-m8jlm\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.993910 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-server-conf\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.993928 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/442b1534-27bc-4d6d-be46-1ea5689c290f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.993983 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.994027 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.994049 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.994065 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:12 crc kubenswrapper[4958]: I1008 06:51:12.994086 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.100580 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.100651 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.100672 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/442b1534-27bc-4d6d-be46-1ea5689c290f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.100690 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc 
kubenswrapper[4958]: I1008 06:51:13.100705 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8jlm\" (UniqueName: \"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-kube-api-access-m8jlm\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.100720 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.100738 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/442b1534-27bc-4d6d-be46-1ea5689c290f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.100753 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.100795 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.100819 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.100836 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.101253 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.101469 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.102279 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.104007 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.106271 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.106796 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.106991 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.107268 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.114198 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/442b1534-27bc-4d6d-be46-1ea5689c290f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.119471 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/442b1534-27bc-4d6d-be46-1ea5689c290f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.122323 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8jlm\" (UniqueName: \"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-kube-api-access-m8jlm\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.144616 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.184198 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" event={"ID":"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73","Type":"ContainerStarted","Data":"7dbd3df062b83be66c3ae4b66733e95f1f02c586f363c007b40bbd2ec219c44e"} Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.258720 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.444052 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77597f887-jzfqg"] Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.750867 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.752720 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.763096 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.763269 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.763394 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.763493 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.763592 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.763696 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.763798 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6sn8m" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.804396 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.838616 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 06:51:13 crc kubenswrapper[4958]: W1008 06:51:13.869220 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod442b1534_27bc_4d6d_be46_1ea5689c290f.slice/crio-288a82c7843344062ba7b3663ffaa6d5a1cd272a549c93d1bcd79ec4cf839b5c WatchSource:0}: Error finding container 288a82c7843344062ba7b3663ffaa6d5a1cd272a549c93d1bcd79ec4cf839b5c: Status 404 returned error can't find the container with id 288a82c7843344062ba7b3663ffaa6d5a1cd272a549c93d1bcd79ec4cf839b5c Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.913885 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.914030 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.914094 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.914111 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/8f931d71-9f8f-4755-a793-ca326e423199-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.914137 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.914227 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8f931d71-9f8f-4755-a793-ca326e423199-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.914262 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.914361 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.914389 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.914438 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlr5g\" (UniqueName: \"kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-kube-api-access-xlr5g\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:13 crc kubenswrapper[4958]: I1008 06:51:13.914460 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.016586 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.017039 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlr5g\" (UniqueName: \"kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-kube-api-access-xlr5g\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.017065 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.017102 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.017147 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.017178 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.017194 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8f931d71-9f8f-4755-a793-ca326e423199-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.017214 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.017238 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8f931d71-9f8f-4755-a793-ca326e423199-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.017270 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.017290 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.018104 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.018170 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.018588 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.018846 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.020211 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.020243 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.024297 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8f931d71-9f8f-4755-a793-ca326e423199-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.028592 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8f931d71-9f8f-4755-a793-ca326e423199-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.033499 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.034509 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlr5g\" (UniqueName: \"kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-kube-api-access-xlr5g\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.038802 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.050815 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.084461 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.201495 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-jzfqg" event={"ID":"cbc98b67-c003-4e9e-ad52-f4a5b66106fd","Type":"ContainerStarted","Data":"41f2246261b392655aff755d6192172e16cee2086ad82466e06eefbd48afc837"} Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.207530 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"442b1534-27bc-4d6d-be46-1ea5689c290f","Type":"ContainerStarted","Data":"288a82c7843344062ba7b3663ffaa6d5a1cd272a549c93d1bcd79ec4cf839b5c"} Oct 08 06:51:14 crc kubenswrapper[4958]: I1008 06:51:14.676873 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.533149 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.541479 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.546275 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.546852 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.547110 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.552018 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-j8ffx" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.552295 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.552559 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.567894 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.630739 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.632168 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.635132 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-ncgkh" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.635569 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.635878 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.635879 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.651169 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.665851 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-secrets\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.665931 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.666018 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5afba053-ce3d-4e27-a16f-35dff8f0407c-config-data-generated\") pod 
\"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.666106 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-kolla-config\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.666345 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7sfv\" (UniqueName: \"kubernetes.io/projected/5afba053-ce3d-4e27-a16f-35dff8f0407c-kube-api-access-j7sfv\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.666366 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.666388 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.666444 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.666465 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.767716 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768127 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-secrets\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768187 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768218 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " 
pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768257 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768299 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5afba053-ce3d-4e27-a16f-35dff8f0407c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768348 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768381 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-kolla-config\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768431 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7sfv\" (UniqueName: \"kubernetes.io/projected/5afba053-ce3d-4e27-a16f-35dff8f0407c-kube-api-access-j7sfv\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: 
I1008 06:51:16.768450 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768491 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768514 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768560 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768583 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768604 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768643 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-config-data-default\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768664 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hccq\" (UniqueName: \"kubernetes.io/projected/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-kube-api-access-6hccq\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.768683 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.769420 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.769696 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-kolla-config\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.769899 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.769938 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5afba053-ce3d-4e27-a16f-35dff8f0407c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.770907 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-config-data-default\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.774591 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-secrets\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.774865 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " 
pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.792058 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7sfv\" (UniqueName: \"kubernetes.io/projected/5afba053-ce3d-4e27-a16f-35dff8f0407c-kube-api-access-j7sfv\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.800441 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.805693 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.868794 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.869681 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.869737 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.869761 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.869779 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.869801 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hccq\" (UniqueName: \"kubernetes.io/projected/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-kube-api-access-6hccq\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc 
kubenswrapper[4958]: I1008 06:51:16.869826 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.869850 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.869877 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.869911 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.870922 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.871478 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.871636 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.871965 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.872131 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.874573 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.875400 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.884130 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.886356 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hccq\" (UniqueName: \"kubernetes.io/projected/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-kube-api-access-6hccq\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.894230 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:16 crc kubenswrapper[4958]: I1008 06:51:16.964385 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.732448 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.733433 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.740811 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.756017 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.766286 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.770597 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-z9k8x" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.784684 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/312426b0-8fb6-48ad-ba99-79b87cfcac38-memcached-tls-certs\") pod \"memcached-0\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " pod="openstack/memcached-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.784723 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54s4p\" (UniqueName: \"kubernetes.io/projected/312426b0-8fb6-48ad-ba99-79b87cfcac38-kube-api-access-54s4p\") pod \"memcached-0\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " pod="openstack/memcached-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.784886 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/312426b0-8fb6-48ad-ba99-79b87cfcac38-kolla-config\") pod \"memcached-0\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " pod="openstack/memcached-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.784915 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/312426b0-8fb6-48ad-ba99-79b87cfcac38-config-data\") pod \"memcached-0\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " pod="openstack/memcached-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.784978 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312426b0-8fb6-48ad-ba99-79b87cfcac38-combined-ca-bundle\") pod \"memcached-0\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " pod="openstack/memcached-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.886384 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/312426b0-8fb6-48ad-ba99-79b87cfcac38-memcached-tls-certs\") pod \"memcached-0\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " pod="openstack/memcached-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.886458 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54s4p\" (UniqueName: \"kubernetes.io/projected/312426b0-8fb6-48ad-ba99-79b87cfcac38-kube-api-access-54s4p\") pod \"memcached-0\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " pod="openstack/memcached-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.886538 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/312426b0-8fb6-48ad-ba99-79b87cfcac38-kolla-config\") pod \"memcached-0\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " pod="openstack/memcached-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.886567 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/312426b0-8fb6-48ad-ba99-79b87cfcac38-config-data\") pod \"memcached-0\" (UID: 
\"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " pod="openstack/memcached-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.886597 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312426b0-8fb6-48ad-ba99-79b87cfcac38-combined-ca-bundle\") pod \"memcached-0\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " pod="openstack/memcached-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.887405 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/312426b0-8fb6-48ad-ba99-79b87cfcac38-kolla-config\") pod \"memcached-0\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " pod="openstack/memcached-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.887570 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/312426b0-8fb6-48ad-ba99-79b87cfcac38-config-data\") pod \"memcached-0\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " pod="openstack/memcached-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.889580 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/312426b0-8fb6-48ad-ba99-79b87cfcac38-memcached-tls-certs\") pod \"memcached-0\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " pod="openstack/memcached-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.911854 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312426b0-8fb6-48ad-ba99-79b87cfcac38-combined-ca-bundle\") pod \"memcached-0\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " pod="openstack/memcached-0" Oct 08 06:51:17 crc kubenswrapper[4958]: I1008 06:51:17.916670 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54s4p\" (UniqueName: 
\"kubernetes.io/projected/312426b0-8fb6-48ad-ba99-79b87cfcac38-kube-api-access-54s4p\") pod \"memcached-0\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " pod="openstack/memcached-0" Oct 08 06:51:18 crc kubenswrapper[4958]: I1008 06:51:18.085958 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 08 06:51:18 crc kubenswrapper[4958]: I1008 06:51:18.819402 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 06:51:19 crc kubenswrapper[4958]: I1008 06:51:19.256997 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f931d71-9f8f-4755-a793-ca326e423199","Type":"ContainerStarted","Data":"639c33de2cdb91ba5be69e928dd48a70c371510310babcd2bf0a7881554fc0c0"} Oct 08 06:51:19 crc kubenswrapper[4958]: I1008 06:51:19.655873 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 06:51:19 crc kubenswrapper[4958]: I1008 06:51:19.656873 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 06:51:19 crc kubenswrapper[4958]: I1008 06:51:19.661661 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-g5f77" Oct 08 06:51:19 crc kubenswrapper[4958]: I1008 06:51:19.672747 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 06:51:19 crc kubenswrapper[4958]: I1008 06:51:19.714922 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk922\" (UniqueName: \"kubernetes.io/projected/73dcc7e5-2f70-403a-abe2-2c67260864eb-kube-api-access-zk922\") pod \"kube-state-metrics-0\" (UID: \"73dcc7e5-2f70-403a-abe2-2c67260864eb\") " pod="openstack/kube-state-metrics-0" Oct 08 06:51:19 crc kubenswrapper[4958]: I1008 06:51:19.816135 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk922\" (UniqueName: \"kubernetes.io/projected/73dcc7e5-2f70-403a-abe2-2c67260864eb-kube-api-access-zk922\") pod \"kube-state-metrics-0\" (UID: \"73dcc7e5-2f70-403a-abe2-2c67260864eb\") " pod="openstack/kube-state-metrics-0" Oct 08 06:51:19 crc kubenswrapper[4958]: I1008 06:51:19.831675 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk922\" (UniqueName: \"kubernetes.io/projected/73dcc7e5-2f70-403a-abe2-2c67260864eb-kube-api-access-zk922\") pod \"kube-state-metrics-0\" (UID: \"73dcc7e5-2f70-403a-abe2-2c67260864eb\") " pod="openstack/kube-state-metrics-0" Oct 08 06:51:19 crc kubenswrapper[4958]: I1008 06:51:19.986622 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.116288 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-spvp9"] Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.117724 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.119217 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-fkjcf" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.119421 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.125397 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.139040 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-spvp9"] Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.146929 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-p4rkp"] Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.148536 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.161942 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-p4rkp"] Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.177298 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-log\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.177347 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-log-ovn\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.177397 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-run\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.177484 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27dbn\" (UniqueName: \"kubernetes.io/projected/bffcae1a-1024-41fe-95f7-c20090e1a4fa-kube-api-access-27dbn\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.177517 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6rxz\" (UniqueName: 
\"kubernetes.io/projected/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-kube-api-access-v6rxz\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.177576 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-etc-ovs\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.177593 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bffcae1a-1024-41fe-95f7-c20090e1a4fa-scripts\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.177651 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-run\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.177667 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-scripts\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.177754 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-combined-ca-bundle\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.177768 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-lib\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.177782 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-ovn-controller-tls-certs\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.177838 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-run-ovn\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.272753 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.274472 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.278056 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.278691 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-run-ovn\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.278725 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-log\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.278752 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-log-ovn\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.278782 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-run\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.278803 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27dbn\" (UniqueName: \"kubernetes.io/projected/bffcae1a-1024-41fe-95f7-c20090e1a4fa-kube-api-access-27dbn\") pod \"ovn-controller-ovs-p4rkp\" 
(UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.278828 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6rxz\" (UniqueName: \"kubernetes.io/projected/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-kube-api-access-v6rxz\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.278841 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-etc-ovs\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.278854 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bffcae1a-1024-41fe-95f7-c20090e1a4fa-scripts\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.278880 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-run\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.278894 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-scripts\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.278923 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-combined-ca-bundle\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.278937 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-lib\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.278966 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-ovn-controller-tls-certs\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.279661 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-run-ovn\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.279711 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-run\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.279795 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-log\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.279868 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-etc-ovs\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.279876 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-log-ovn\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.279913 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-run\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.282180 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bffcae1a-1024-41fe-95f7-c20090e1a4fa-scripts\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.282429 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-lib\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 
06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.289104 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-ovn-controller-tls-certs\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.292633 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-scripts\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.299513 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-combined-ca-bundle\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.299712 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.300340 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.301252 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-bs89f" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.301535 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.310905 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27dbn\" (UniqueName: 
\"kubernetes.io/projected/bffcae1a-1024-41fe-95f7-c20090e1a4fa-kube-api-access-27dbn\") pod \"ovn-controller-ovs-p4rkp\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.316122 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.344071 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6rxz\" (UniqueName: \"kubernetes.io/projected/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-kube-api-access-v6rxz\") pod \"ovn-controller-spvp9\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.379693 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.379744 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6269a952-e10d-442f-8d9f-135e16244e83-config\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.379769 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6269a952-e10d-442f-8d9f-135e16244e83-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.379823 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6269a952-e10d-442f-8d9f-135e16244e83-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.379850 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.379868 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.379887 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zpp7\" (UniqueName: \"kubernetes.io/projected/6269a952-e10d-442f-8d9f-135e16244e83-kube-api-access-5zpp7\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.379909 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.437855 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-spvp9" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.464708 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.480972 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.481056 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.481123 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6269a952-e10d-442f-8d9f-135e16244e83-config\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.481155 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6269a952-e10d-442f-8d9f-135e16244e83-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.481232 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6269a952-e10d-442f-8d9f-135e16244e83-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.481265 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.481290 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.481321 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zpp7\" (UniqueName: \"kubernetes.io/projected/6269a952-e10d-442f-8d9f-135e16244e83-kube-api-access-5zpp7\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.481887 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6269a952-e10d-442f-8d9f-135e16244e83-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.482296 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.482917 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6269a952-e10d-442f-8d9f-135e16244e83-config\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.484096 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.485077 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6269a952-e10d-442f-8d9f-135e16244e83-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.485719 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.491555 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.500402 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zpp7\" (UniqueName: \"kubernetes.io/projected/6269a952-e10d-442f-8d9f-135e16244e83-kube-api-access-5zpp7\") pod 
\"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.514612 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:24 crc kubenswrapper[4958]: I1008 06:51:24.677286 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:26 crc kubenswrapper[4958]: I1008 06:51:26.795221 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 06:51:26 crc kubenswrapper[4958]: I1008 06:51:26.797849 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:26 crc kubenswrapper[4958]: I1008 06:51:26.803474 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 08 06:51:26 crc kubenswrapper[4958]: I1008 06:51:26.804208 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 08 06:51:26 crc kubenswrapper[4958]: I1008 06:51:26.804540 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 08 06:51:26 crc kubenswrapper[4958]: I1008 06:51:26.804877 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mqz5m" Oct 08 06:51:26 crc kubenswrapper[4958]: I1008 06:51:26.806560 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 06:51:26 crc kubenswrapper[4958]: I1008 06:51:26.923263 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f285e309-c3e6-42ce-9f95-8302079cfd71-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:26 crc kubenswrapper[4958]: I1008 06:51:26.923332 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f285e309-c3e6-42ce-9f95-8302079cfd71-config\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:26 crc kubenswrapper[4958]: I1008 06:51:26.923368 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:26 crc kubenswrapper[4958]: I1008 06:51:26.923624 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:26 crc kubenswrapper[4958]: I1008 06:51:26.923791 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:26 crc kubenswrapper[4958]: I1008 06:51:26.923904 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbzkb\" (UniqueName: \"kubernetes.io/projected/f285e309-c3e6-42ce-9f95-8302079cfd71-kube-api-access-vbzkb\") 
pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:26 crc kubenswrapper[4958]: I1008 06:51:26.924017 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:26 crc kubenswrapper[4958]: I1008 06:51:26.924219 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f285e309-c3e6-42ce-9f95-8302079cfd71-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.026383 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.026528 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f285e309-c3e6-42ce-9f95-8302079cfd71-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.026588 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f285e309-c3e6-42ce-9f95-8302079cfd71-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: 
I1008 06:51:27.026627 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f285e309-c3e6-42ce-9f95-8302079cfd71-config\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.026659 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.026741 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.026822 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.026879 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbzkb\" (UniqueName: \"kubernetes.io/projected/f285e309-c3e6-42ce-9f95-8302079cfd71-kube-api-access-vbzkb\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.027902 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f285e309-c3e6-42ce-9f95-8302079cfd71-config\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.028250 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f285e309-c3e6-42ce-9f95-8302079cfd71-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.029538 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.030528 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f285e309-c3e6-42ce-9f95-8302079cfd71-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.033813 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.034866 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 
06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.049326 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.051766 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbzkb\" (UniqueName: \"kubernetes.io/projected/f285e309-c3e6-42ce-9f95-8302079cfd71-kube-api-access-vbzkb\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.066776 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:27 crc kubenswrapper[4958]: I1008 06:51:27.135553 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:29 crc kubenswrapper[4958]: E1008 06:51:29.262046 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df" Oct 08 06:51:29 crc kubenswrapper[4958]: E1008 06:51:29.262541 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4k7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,Readines
sProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-758b79db4c-zxfj5_openstack(16369a68-894c-4126-a90a-b41e2e66a590): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 06:51:29 crc kubenswrapper[4958]: E1008 06:51:29.264123 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" podUID="16369a68-894c-4126-a90a-b41e2e66a590" Oct 08 06:51:29 crc kubenswrapper[4958]: E1008 06:51:29.272157 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df" Oct 08 06:51:29 crc kubenswrapper[4958]: E1008 06:51:29.272327 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces 
--listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kzfrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8575fc99d7-rt64c_openstack(8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Oct 08 06:51:29 crc kubenswrapper[4958]: E1008 06:51:29.273517 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" podUID="8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73" Oct 08 06:51:29 crc kubenswrapper[4958]: E1008 06:51:29.287572 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df" Oct 08 06:51:29 crc kubenswrapper[4958]: E1008 06:51:29.287712 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8md7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7bfcb9d745-wwpgx_openstack(24788c52-cff2-4a7e-9b4f-fab66b1ddaa2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 06:51:29 crc kubenswrapper[4958]: E1008 06:51:29.289832 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-7bfcb9d745-wwpgx" podUID="24788c52-cff2-4a7e-9b4f-fab66b1ddaa2" Oct 08 06:51:29 crc kubenswrapper[4958]: E1008 06:51:29.309852 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df" Oct 08 06:51:29 crc kubenswrapper[4958]: E1008 06:51:29.310046 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9r9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},
},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-77597f887-jzfqg_openstack(cbc98b67-c003-4e9e-ad52-f4a5b66106fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 06:51:29 crc kubenswrapper[4958]: E1008 06:51:29.312023 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-77597f887-jzfqg" podUID="cbc98b67-c003-4e9e-ad52-f4a5b66106fd" Oct 08 06:51:29 crc kubenswrapper[4958]: E1008 06:51:29.385895 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df\\\"\"" pod="openstack/dnsmasq-dns-77597f887-jzfqg" podUID="cbc98b67-c003-4e9e-ad52-f4a5b66106fd" Oct 08 06:51:29 crc kubenswrapper[4958]: E1008 06:51:29.403995 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df\\\"\"" pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" podUID="8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73" Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.761987 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.787504 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-wwpgx" Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.876089 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8md7g\" (UniqueName: \"kubernetes.io/projected/24788c52-cff2-4a7e-9b4f-fab66b1ddaa2-kube-api-access-8md7g\") pod \"24788c52-cff2-4a7e-9b4f-fab66b1ddaa2\" (UID: \"24788c52-cff2-4a7e-9b4f-fab66b1ddaa2\") " Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.876310 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16369a68-894c-4126-a90a-b41e2e66a590-config\") pod \"16369a68-894c-4126-a90a-b41e2e66a590\" (UID: \"16369a68-894c-4126-a90a-b41e2e66a590\") " Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.876442 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4k7q\" (UniqueName: \"kubernetes.io/projected/16369a68-894c-4126-a90a-b41e2e66a590-kube-api-access-f4k7q\") pod \"16369a68-894c-4126-a90a-b41e2e66a590\" (UID: \"16369a68-894c-4126-a90a-b41e2e66a590\") " Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.876480 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24788c52-cff2-4a7e-9b4f-fab66b1ddaa2-config\") pod \"24788c52-cff2-4a7e-9b4f-fab66b1ddaa2\" (UID: 
\"24788c52-cff2-4a7e-9b4f-fab66b1ddaa2\") " Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.876531 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16369a68-894c-4126-a90a-b41e2e66a590-dns-svc\") pod \"16369a68-894c-4126-a90a-b41e2e66a590\" (UID: \"16369a68-894c-4126-a90a-b41e2e66a590\") " Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.877827 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16369a68-894c-4126-a90a-b41e2e66a590-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16369a68-894c-4126-a90a-b41e2e66a590" (UID: "16369a68-894c-4126-a90a-b41e2e66a590"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.878767 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24788c52-cff2-4a7e-9b4f-fab66b1ddaa2-config" (OuterVolumeSpecName: "config") pod "24788c52-cff2-4a7e-9b4f-fab66b1ddaa2" (UID: "24788c52-cff2-4a7e-9b4f-fab66b1ddaa2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.878787 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16369a68-894c-4126-a90a-b41e2e66a590-config" (OuterVolumeSpecName: "config") pod "16369a68-894c-4126-a90a-b41e2e66a590" (UID: "16369a68-894c-4126-a90a-b41e2e66a590"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.882726 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16369a68-894c-4126-a90a-b41e2e66a590-kube-api-access-f4k7q" (OuterVolumeSpecName: "kube-api-access-f4k7q") pod "16369a68-894c-4126-a90a-b41e2e66a590" (UID: "16369a68-894c-4126-a90a-b41e2e66a590"). InnerVolumeSpecName "kube-api-access-f4k7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.887758 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24788c52-cff2-4a7e-9b4f-fab66b1ddaa2-kube-api-access-8md7g" (OuterVolumeSpecName: "kube-api-access-8md7g") pod "24788c52-cff2-4a7e-9b4f-fab66b1ddaa2" (UID: "24788c52-cff2-4a7e-9b4f-fab66b1ddaa2"). InnerVolumeSpecName "kube-api-access-8md7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.944502 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.959458 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.978421 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16369a68-894c-4126-a90a-b41e2e66a590-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.978471 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8md7g\" (UniqueName: \"kubernetes.io/projected/24788c52-cff2-4a7e-9b4f-fab66b1ddaa2-kube-api-access-8md7g\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.978483 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/16369a68-894c-4126-a90a-b41e2e66a590-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.978494 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4k7q\" (UniqueName: \"kubernetes.io/projected/16369a68-894c-4126-a90a-b41e2e66a590-kube-api-access-f4k7q\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:29 crc kubenswrapper[4958]: I1008 06:51:29.978504 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24788c52-cff2-4a7e-9b4f-fab66b1ddaa2-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.183611 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-spvp9"] Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.194039 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.202111 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 08 06:51:30 crc kubenswrapper[4958]: W1008 06:51:30.243696 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1091f62d_2fa6_4b93_87ce_8c0fbcc23987.slice/crio-4976781f0e0a125509b0d9ce8468e8baf512290701a9a3006a90c2b1b6b2891d WatchSource:0}: Error finding container 4976781f0e0a125509b0d9ce8468e8baf512290701a9a3006a90c2b1b6b2891d: Status 404 returned error can't find the container with id 4976781f0e0a125509b0d9ce8468e8baf512290701a9a3006a90c2b1b6b2891d Oct 08 06:51:30 crc kubenswrapper[4958]: W1008 06:51:30.252978 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5c6fa9b_cd66_4762_a0b9_3a5fd8c026b7.slice/crio-3cd6402ff4ac1f3d36d58b1c18b30dfe0aacd9a465588b158e2c5e3ce04e53f6 WatchSource:0}: Error finding 
container 3cd6402ff4ac1f3d36d58b1c18b30dfe0aacd9a465588b158e2c5e3ce04e53f6: Status 404 returned error can't find the container with id 3cd6402ff4ac1f3d36d58b1c18b30dfe0aacd9a465588b158e2c5e3ce04e53f6 Oct 08 06:51:30 crc kubenswrapper[4958]: W1008 06:51:30.257243 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5afba053_ce3d_4e27_a16f_35dff8f0407c.slice/crio-6710c933264129bf15adb818961d3e206b937d73de08c6f9f1b42e47593b73bc WatchSource:0}: Error finding container 6710c933264129bf15adb818961d3e206b937d73de08c6f9f1b42e47593b73bc: Status 404 returned error can't find the container with id 6710c933264129bf15adb818961d3e206b937d73de08c6f9f1b42e47593b73bc Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.323060 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 06:51:30 crc kubenswrapper[4958]: W1008 06:51:30.335070 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf285e309_c3e6_42ce_9f95_8302079cfd71.slice/crio-2c8266d18ea0ed418fc8ba9806be17c4d5db0ed9a5d5e700f44fbd75eb24dc61 WatchSource:0}: Error finding container 2c8266d18ea0ed418fc8ba9806be17c4d5db0ed9a5d5e700f44fbd75eb24dc61: Status 404 returned error can't find the container with id 2c8266d18ea0ed418fc8ba9806be17c4d5db0ed9a5d5e700f44fbd75eb24dc61 Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.398424 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-wwpgx" Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.398469 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfcb9d745-wwpgx" event={"ID":"24788c52-cff2-4a7e-9b4f-fab66b1ddaa2","Type":"ContainerDied","Data":"67fab018229103c432fe6c75790d0739b01b7545f13f72cca8ac81171da2e198"} Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.400347 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" event={"ID":"16369a68-894c-4126-a90a-b41e2e66a590","Type":"ContainerDied","Data":"c0c716c14d5d86b15a829472468e16ba543ef6bab032b9ed2bc2ad5d817650ce"} Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.400454 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-zxfj5" Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.417929 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-spvp9" event={"ID":"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7","Type":"ContainerStarted","Data":"3cd6402ff4ac1f3d36d58b1c18b30dfe0aacd9a465588b158e2c5e3ce04e53f6"} Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.419736 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"73dcc7e5-2f70-403a-abe2-2c67260864eb","Type":"ContainerStarted","Data":"49f6535634718ec7220f6d832efe30f9b0dfe3e685ea662ca725a467af190cf7"} Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.421812 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1091f62d-2fa6-4b93-87ce-8c0fbcc23987","Type":"ContainerStarted","Data":"4976781f0e0a125509b0d9ce8468e8baf512290701a9a3006a90c2b1b6b2891d"} Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.421917 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-p4rkp"] Oct 08 06:51:30 crc 
kubenswrapper[4958]: I1008 06:51:30.423409 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"312426b0-8fb6-48ad-ba99-79b87cfcac38","Type":"ContainerStarted","Data":"245d3bf5624e88af094989424304b1fc981976ce6ea56d3e8ca03473ad6bfea9"} Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.424750 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f285e309-c3e6-42ce-9f95-8302079cfd71","Type":"ContainerStarted","Data":"2c8266d18ea0ed418fc8ba9806be17c4d5db0ed9a5d5e700f44fbd75eb24dc61"} Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.426098 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5afba053-ce3d-4e27-a16f-35dff8f0407c","Type":"ContainerStarted","Data":"6710c933264129bf15adb818961d3e206b937d73de08c6f9f1b42e47593b73bc"} Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.794449 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-wwpgx"] Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.804655 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-wwpgx"] Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.830622 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-zxfj5"] Oct 08 06:51:30 crc kubenswrapper[4958]: I1008 06:51:30.837773 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-zxfj5"] Oct 08 06:51:31 crc kubenswrapper[4958]: I1008 06:51:31.124205 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 06:51:31 crc kubenswrapper[4958]: I1008 06:51:31.438688 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f931d71-9f8f-4755-a793-ca326e423199","Type":"ContainerStarted","Data":"0e1fc1f9cf7a4831ad3481c4874067196938f233b081693bc7630a8e65c2a45c"} 
Oct 08 06:51:31 crc kubenswrapper[4958]: I1008 06:51:31.442753 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"442b1534-27bc-4d6d-be46-1ea5689c290f","Type":"ContainerStarted","Data":"5d16e6aee96410a47a392e46dc80e5832f3e7a1cdb911e888ca8bd8bfb542971"} Oct 08 06:51:31 crc kubenswrapper[4958]: I1008 06:51:31.594699 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16369a68-894c-4126-a90a-b41e2e66a590" path="/var/lib/kubelet/pods/16369a68-894c-4126-a90a-b41e2e66a590/volumes" Oct 08 06:51:31 crc kubenswrapper[4958]: I1008 06:51:31.596099 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24788c52-cff2-4a7e-9b4f-fab66b1ddaa2" path="/var/lib/kubelet/pods/24788c52-cff2-4a7e-9b4f-fab66b1ddaa2/volumes" Oct 08 06:51:33 crc kubenswrapper[4958]: I1008 06:51:33.463905 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6269a952-e10d-442f-8d9f-135e16244e83","Type":"ContainerStarted","Data":"82989b317ef552d6740b405555b3ca3cd671d9b3456c6e550fdd36203bf74f89"} Oct 08 06:51:33 crc kubenswrapper[4958]: I1008 06:51:33.466364 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p4rkp" event={"ID":"bffcae1a-1024-41fe-95f7-c20090e1a4fa","Type":"ContainerStarted","Data":"49cbc21bca73a06a594e876d0d902080856b147afb42af74b4b07bcdfa970ccc"} Oct 08 06:51:36 crc kubenswrapper[4958]: I1008 06:51:36.844719 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:51:36 crc kubenswrapper[4958]: I1008 06:51:36.845145 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" 
podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:51:40 crc kubenswrapper[4958]: I1008 06:51:40.527704 4958 generic.go:334] "Generic (PLEG): container finished" podID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerID="e3090dfe3fa78ec4885b89db3e9af4e5534c13e11dd608cc268457d893b1a806" exitCode=0 Oct 08 06:51:40 crc kubenswrapper[4958]: I1008 06:51:40.528313 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p4rkp" event={"ID":"bffcae1a-1024-41fe-95f7-c20090e1a4fa","Type":"ContainerDied","Data":"e3090dfe3fa78ec4885b89db3e9af4e5534c13e11dd608cc268457d893b1a806"} Oct 08 06:51:40 crc kubenswrapper[4958]: I1008 06:51:40.540148 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f285e309-c3e6-42ce-9f95-8302079cfd71","Type":"ContainerStarted","Data":"50a78d2d43555c7f649d9e67e4454561ca2477641c1beb7bd8e5c77a400198a9"} Oct 08 06:51:40 crc kubenswrapper[4958]: I1008 06:51:40.549510 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5afba053-ce3d-4e27-a16f-35dff8f0407c","Type":"ContainerStarted","Data":"1c41440da3e30c82cb23899d2723316096293deeac5e32e02745e049d2233fed"} Oct 08 06:51:40 crc kubenswrapper[4958]: I1008 06:51:40.560430 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-spvp9" event={"ID":"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7","Type":"ContainerStarted","Data":"b2fa4137f0b797f8d0358b35176b6e4fe70f8f968a53ff5dc07e48e43e721d8a"} Oct 08 06:51:40 crc kubenswrapper[4958]: I1008 06:51:40.560727 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-spvp9" Oct 08 06:51:40 crc kubenswrapper[4958]: I1008 06:51:40.573443 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"73dcc7e5-2f70-403a-abe2-2c67260864eb","Type":"ContainerStarted","Data":"e9c729c01cef628808e83d5607e3201dd2ae2ef6f5d07eb978900ceb9802752e"} Oct 08 06:51:40 crc kubenswrapper[4958]: I1008 06:51:40.574324 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 08 06:51:40 crc kubenswrapper[4958]: I1008 06:51:40.581506 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1091f62d-2fa6-4b93-87ce-8c0fbcc23987","Type":"ContainerStarted","Data":"0f9dbe6fbcf00bbedd4e86c961d6a046db71fce83d895c53d0612bdd24041e8b"} Oct 08 06:51:40 crc kubenswrapper[4958]: I1008 06:51:40.589526 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"312426b0-8fb6-48ad-ba99-79b87cfcac38","Type":"ContainerStarted","Data":"77318adadf422409354e9c4908a8759e6e1b4aa095b2b301bb9919e15d6f3a59"} Oct 08 06:51:40 crc kubenswrapper[4958]: I1008 06:51:40.590137 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 08 06:51:40 crc kubenswrapper[4958]: I1008 06:51:40.592444 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6269a952-e10d-442f-8d9f-135e16244e83","Type":"ContainerStarted","Data":"fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3"} Oct 08 06:51:40 crc kubenswrapper[4958]: I1008 06:51:40.640768 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=11.914213336 podStartE2EDuration="21.640749815s" podCreationTimestamp="2025-10-08 06:51:19 +0000 UTC" firstStartedPulling="2025-10-08 06:51:29.951261496 +0000 UTC m=+1033.080954107" lastFinishedPulling="2025-10-08 06:51:39.677797965 +0000 UTC m=+1042.807490586" observedRunningTime="2025-10-08 06:51:40.63947636 +0000 UTC m=+1043.769168981" 
watchObservedRunningTime="2025-10-08 06:51:40.640749815 +0000 UTC m=+1043.770442416" Oct 08 06:51:40 crc kubenswrapper[4958]: I1008 06:51:40.669394 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-spvp9" podStartSLOduration=7.7825624300000005 podStartE2EDuration="16.669375867s" podCreationTimestamp="2025-10-08 06:51:24 +0000 UTC" firstStartedPulling="2025-10-08 06:51:30.261763555 +0000 UTC m=+1033.391456166" lastFinishedPulling="2025-10-08 06:51:39.148577002 +0000 UTC m=+1042.278269603" observedRunningTime="2025-10-08 06:51:40.661816543 +0000 UTC m=+1043.791509144" watchObservedRunningTime="2025-10-08 06:51:40.669375867 +0000 UTC m=+1043.799068468" Oct 08 06:51:40 crc kubenswrapper[4958]: I1008 06:51:40.705822 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.041567091 podStartE2EDuration="23.70580584s" podCreationTimestamp="2025-10-08 06:51:17 +0000 UTC" firstStartedPulling="2025-10-08 06:51:29.953194948 +0000 UTC m=+1033.082887549" lastFinishedPulling="2025-10-08 06:51:38.617433697 +0000 UTC m=+1041.747126298" observedRunningTime="2025-10-08 06:51:40.699014847 +0000 UTC m=+1043.828707448" watchObservedRunningTime="2025-10-08 06:51:40.70580584 +0000 UTC m=+1043.835498441" Oct 08 06:51:41 crc kubenswrapper[4958]: I1008 06:51:41.603463 4958 generic.go:334] "Generic (PLEG): container finished" podID="8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73" containerID="b3571135ce73aad6bffab3e31245ba3ea03f5a31be902b37cbb6ab939405f8f1" exitCode=0 Oct 08 06:51:41 crc kubenswrapper[4958]: I1008 06:51:41.603568 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" event={"ID":"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73","Type":"ContainerDied","Data":"b3571135ce73aad6bffab3e31245ba3ea03f5a31be902b37cbb6ab939405f8f1"} Oct 08 06:51:41 crc kubenswrapper[4958]: I1008 06:51:41.606427 4958 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ovn-controller-ovs-p4rkp" event={"ID":"bffcae1a-1024-41fe-95f7-c20090e1a4fa","Type":"ContainerStarted","Data":"b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea"} Oct 08 06:51:41 crc kubenswrapper[4958]: I1008 06:51:41.606451 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p4rkp" event={"ID":"bffcae1a-1024-41fe-95f7-c20090e1a4fa","Type":"ContainerStarted","Data":"227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b"} Oct 08 06:51:41 crc kubenswrapper[4958]: I1008 06:51:41.643707 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-p4rkp" podStartSLOduration=12.02281929 podStartE2EDuration="17.643685852s" podCreationTimestamp="2025-10-08 06:51:24 +0000 UTC" firstStartedPulling="2025-10-08 06:51:33.441998157 +0000 UTC m=+1036.571690798" lastFinishedPulling="2025-10-08 06:51:39.062864749 +0000 UTC m=+1042.192557360" observedRunningTime="2025-10-08 06:51:41.641520254 +0000 UTC m=+1044.771212855" watchObservedRunningTime="2025-10-08 06:51:41.643685852 +0000 UTC m=+1044.773378463" Oct 08 06:51:42 crc kubenswrapper[4958]: I1008 06:51:42.616536 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:42 crc kubenswrapper[4958]: I1008 06:51:42.616625 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:51:43 crc kubenswrapper[4958]: I1008 06:51:43.632329 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f285e309-c3e6-42ce-9f95-8302079cfd71","Type":"ContainerStarted","Data":"1a5b26a966d67867d27cb570fdf7f9fa9f4464f8d556c23369640c4497114693"} Oct 08 06:51:43 crc kubenswrapper[4958]: I1008 06:51:43.636207 4958 generic.go:334] "Generic (PLEG): container finished" podID="5afba053-ce3d-4e27-a16f-35dff8f0407c" 
containerID="1c41440da3e30c82cb23899d2723316096293deeac5e32e02745e049d2233fed" exitCode=0 Oct 08 06:51:43 crc kubenswrapper[4958]: I1008 06:51:43.636305 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5afba053-ce3d-4e27-a16f-35dff8f0407c","Type":"ContainerDied","Data":"1c41440da3e30c82cb23899d2723316096293deeac5e32e02745e049d2233fed"} Oct 08 06:51:43 crc kubenswrapper[4958]: I1008 06:51:43.638729 4958 generic.go:334] "Generic (PLEG): container finished" podID="1091f62d-2fa6-4b93-87ce-8c0fbcc23987" containerID="0f9dbe6fbcf00bbedd4e86c961d6a046db71fce83d895c53d0612bdd24041e8b" exitCode=0 Oct 08 06:51:43 crc kubenswrapper[4958]: I1008 06:51:43.638819 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1091f62d-2fa6-4b93-87ce-8c0fbcc23987","Type":"ContainerDied","Data":"0f9dbe6fbcf00bbedd4e86c961d6a046db71fce83d895c53d0612bdd24041e8b"} Oct 08 06:51:43 crc kubenswrapper[4958]: I1008 06:51:43.642802 4958 generic.go:334] "Generic (PLEG): container finished" podID="cbc98b67-c003-4e9e-ad52-f4a5b66106fd" containerID="e0d6584d08531af5add3d2a0328376824c486cb1a4d41b81db00bf695bd179b3" exitCode=0 Oct 08 06:51:43 crc kubenswrapper[4958]: I1008 06:51:43.643013 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-jzfqg" event={"ID":"cbc98b67-c003-4e9e-ad52-f4a5b66106fd","Type":"ContainerDied","Data":"e0d6584d08531af5add3d2a0328376824c486cb1a4d41b81db00bf695bd179b3"} Oct 08 06:51:43 crc kubenswrapper[4958]: I1008 06:51:43.646362 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" event={"ID":"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73","Type":"ContainerStarted","Data":"6042030641dd242e771dc6633c2e14794d6f61c386e530267b2484e7892e6bc9"} Oct 08 06:51:43 crc kubenswrapper[4958]: I1008 06:51:43.646909 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" Oct 08 06:51:43 crc kubenswrapper[4958]: I1008 06:51:43.651089 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6269a952-e10d-442f-8d9f-135e16244e83","Type":"ContainerStarted","Data":"81396f72e8d2e4a1c6da156a645037fa2138f319f380e9ba53045e95eba2e4bd"} Oct 08 06:51:43 crc kubenswrapper[4958]: I1008 06:51:43.669738 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.007882072 podStartE2EDuration="18.669711103s" podCreationTimestamp="2025-10-08 06:51:25 +0000 UTC" firstStartedPulling="2025-10-08 06:51:30.337704305 +0000 UTC m=+1033.467396916" lastFinishedPulling="2025-10-08 06:51:42.999533336 +0000 UTC m=+1046.129225947" observedRunningTime="2025-10-08 06:51:43.659673072 +0000 UTC m=+1046.789365683" watchObservedRunningTime="2025-10-08 06:51:43.669711103 +0000 UTC m=+1046.799403734" Oct 08 06:51:43 crc kubenswrapper[4958]: I1008 06:51:43.713638 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" podStartSLOduration=4.043074518 podStartE2EDuration="32.713609068s" podCreationTimestamp="2025-10-08 06:51:11 +0000 UTC" firstStartedPulling="2025-10-08 06:51:12.367435094 +0000 UTC m=+1015.497127695" lastFinishedPulling="2025-10-08 06:51:41.037969644 +0000 UTC m=+1044.167662245" observedRunningTime="2025-10-08 06:51:43.694520563 +0000 UTC m=+1046.824213164" watchObservedRunningTime="2025-10-08 06:51:43.713609068 +0000 UTC m=+1046.843301679" Oct 08 06:51:43 crc kubenswrapper[4958]: I1008 06:51:43.795139 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.265155094 podStartE2EDuration="20.795117808s" podCreationTimestamp="2025-10-08 06:51:23 +0000 UTC" firstStartedPulling="2025-10-08 06:51:33.445498972 +0000 UTC m=+1036.575191613" lastFinishedPulling="2025-10-08 
06:51:42.975461716 +0000 UTC m=+1046.105154327" observedRunningTime="2025-10-08 06:51:43.79000148 +0000 UTC m=+1046.919694091" watchObservedRunningTime="2025-10-08 06:51:43.795117808 +0000 UTC m=+1046.924810429" Oct 08 06:51:44 crc kubenswrapper[4958]: I1008 06:51:44.665077 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1091f62d-2fa6-4b93-87ce-8c0fbcc23987","Type":"ContainerStarted","Data":"cc0ce848d587944d5646e6c4d3f0968fdc06fb139bd8f7158d1244e8f7e79bb0"} Oct 08 06:51:44 crc kubenswrapper[4958]: I1008 06:51:44.669679 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-jzfqg" event={"ID":"cbc98b67-c003-4e9e-ad52-f4a5b66106fd","Type":"ContainerStarted","Data":"8a71e6180503e43f0e483e9b8f1880812de2a1bcd3d43a31177a487e87b96415"} Oct 08 06:51:44 crc kubenswrapper[4958]: I1008 06:51:44.670474 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77597f887-jzfqg" Oct 08 06:51:44 crc kubenswrapper[4958]: I1008 06:51:44.673552 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5afba053-ce3d-4e27-a16f-35dff8f0407c","Type":"ContainerStarted","Data":"6a003b7724fe923f88526d82b0401c0a90a92d8153c23ef79a24ae3f6bebf328"} Oct 08 06:51:44 crc kubenswrapper[4958]: I1008 06:51:44.678341 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:44 crc kubenswrapper[4958]: I1008 06:51:44.703631 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=20.891753843 podStartE2EDuration="29.703602956s" podCreationTimestamp="2025-10-08 06:51:15 +0000 UTC" firstStartedPulling="2025-10-08 06:51:30.251040396 +0000 UTC m=+1033.380733027" lastFinishedPulling="2025-10-08 06:51:39.062889509 +0000 UTC m=+1042.192582140" observedRunningTime="2025-10-08 06:51:44.695987711 +0000 
UTC m=+1047.825680342" watchObservedRunningTime="2025-10-08 06:51:44.703602956 +0000 UTC m=+1047.833295587" Oct 08 06:51:44 crc kubenswrapper[4958]: I1008 06:51:44.735409 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=20.422118867000002 podStartE2EDuration="29.735378124s" podCreationTimestamp="2025-10-08 06:51:15 +0000 UTC" firstStartedPulling="2025-10-08 06:51:30.262916306 +0000 UTC m=+1033.392608917" lastFinishedPulling="2025-10-08 06:51:39.576175533 +0000 UTC m=+1042.705868174" observedRunningTime="2025-10-08 06:51:44.731270513 +0000 UTC m=+1047.860963174" watchObservedRunningTime="2025-10-08 06:51:44.735378124 +0000 UTC m=+1047.865070765" Oct 08 06:51:44 crc kubenswrapper[4958]: I1008 06:51:44.761438 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77597f887-jzfqg" podStartSLOduration=-9223372004.093369 podStartE2EDuration="32.761406636s" podCreationTimestamp="2025-10-08 06:51:12 +0000 UTC" firstStartedPulling="2025-10-08 06:51:13.464159324 +0000 UTC m=+1016.593851935" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:51:44.750359238 +0000 UTC m=+1047.880051879" watchObservedRunningTime="2025-10-08 06:51:44.761406636 +0000 UTC m=+1047.891099267" Oct 08 06:51:45 crc kubenswrapper[4958]: I1008 06:51:45.136821 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:45 crc kubenswrapper[4958]: I1008 06:51:45.213168 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:45 crc kubenswrapper[4958]: I1008 06:51:45.678776 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:45 crc kubenswrapper[4958]: I1008 06:51:45.682739 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:45 crc kubenswrapper[4958]: I1008 06:51:45.793327 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 08 06:51:45 crc kubenswrapper[4958]: I1008 06:51:45.794790 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.067710 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8575fc99d7-rt64c"] Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.067968 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" podUID="8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73" containerName="dnsmasq-dns" containerID="cri-o://6042030641dd242e771dc6633c2e14794d6f61c386e530267b2484e7892e6bc9" gracePeriod=10 Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.097526 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-f6fs4"] Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.098722 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.101323 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.116306 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-f6fs4"] Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.133196 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-tvw22"] Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.137937 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.142560 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.169706 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tvw22"] Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.206125 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-config\") pod \"dnsmasq-dns-545fb8c44f-f6fs4\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.206173 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-ovsdbserver-sb\") pod \"dnsmasq-dns-545fb8c44f-f6fs4\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.206272 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf5sd\" (UniqueName: \"kubernetes.io/projected/e9e3ead6-373e-402f-9180-bd64edf7dcb5-kube-api-access-hf5sd\") pod \"dnsmasq-dns-545fb8c44f-f6fs4\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.206339 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-dns-svc\") pod \"dnsmasq-dns-545fb8c44f-f6fs4\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " 
pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.307608 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-config\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.307665 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-ovn-rundir\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.307684 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x7qp\" (UniqueName: \"kubernetes.io/projected/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-kube-api-access-2x7qp\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.307702 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-ovs-rundir\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.307732 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-dns-svc\") pod \"dnsmasq-dns-545fb8c44f-f6fs4\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " 
pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.307766 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-config\") pod \"dnsmasq-dns-545fb8c44f-f6fs4\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.307786 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-ovsdbserver-sb\") pod \"dnsmasq-dns-545fb8c44f-f6fs4\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.307804 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-combined-ca-bundle\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.307858 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf5sd\" (UniqueName: \"kubernetes.io/projected/e9e3ead6-373e-402f-9180-bd64edf7dcb5-kube-api-access-hf5sd\") pod \"dnsmasq-dns-545fb8c44f-f6fs4\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.307882 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") 
" pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.308840 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-dns-svc\") pod \"dnsmasq-dns-545fb8c44f-f6fs4\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.308986 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-ovsdbserver-sb\") pod \"dnsmasq-dns-545fb8c44f-f6fs4\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.309142 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-config\") pod \"dnsmasq-dns-545fb8c44f-f6fs4\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.322620 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77597f887-jzfqg"] Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.333022 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf5sd\" (UniqueName: \"kubernetes.io/projected/e9e3ead6-373e-402f-9180-bd64edf7dcb5-kube-api-access-hf5sd\") pod \"dnsmasq-dns-545fb8c44f-f6fs4\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.394741 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-68znj"] Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.396270 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.401216 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.408939 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.409018 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-config\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.409057 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-ovn-rundir\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.409073 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x7qp\" (UniqueName: \"kubernetes.io/projected/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-kube-api-access-2x7qp\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.409089 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-ovs-rundir\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.409131 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-combined-ca-bundle\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.409618 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-ovn-rundir\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.410246 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-config\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.410488 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-ovs-rundir\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.415886 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-combined-ca-bundle\") pod 
\"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.427711 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x7qp\" (UniqueName: \"kubernetes.io/projected/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-kube-api-access-2x7qp\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.433462 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tvw22\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.472507 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.494215 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-68znj"] Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.510242 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-dns-svc\") pod \"dnsmasq-dns-dc9d58d7-68znj\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") " pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.510327 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-ovsdbserver-nb\") pod \"dnsmasq-dns-dc9d58d7-68znj\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") " pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.510375 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-ovsdbserver-sb\") pod \"dnsmasq-dns-dc9d58d7-68znj\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") " pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.510411 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdhh8\" (UniqueName: \"kubernetes.io/projected/45057cd4-4145-481a-ae2d-ebf2499db002-kube-api-access-jdhh8\") pod \"dnsmasq-dns-dc9d58d7-68znj\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") " pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.510474 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-config\") pod \"dnsmasq-dns-dc9d58d7-68znj\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") " pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.515202 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.616745 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdhh8\" (UniqueName: \"kubernetes.io/projected/45057cd4-4145-481a-ae2d-ebf2499db002-kube-api-access-jdhh8\") pod \"dnsmasq-dns-dc9d58d7-68znj\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") " pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.617174 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-config\") pod \"dnsmasq-dns-dc9d58d7-68znj\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") " pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.617931 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-config\") pod \"dnsmasq-dns-dc9d58d7-68znj\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") " pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.617210 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-dns-svc\") pod \"dnsmasq-dns-dc9d58d7-68znj\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") " pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.618019 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-ovsdbserver-nb\") pod \"dnsmasq-dns-dc9d58d7-68znj\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") " pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.618507 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-dns-svc\") pod \"dnsmasq-dns-dc9d58d7-68znj\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") " pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.618570 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-ovsdbserver-sb\") pod \"dnsmasq-dns-dc9d58d7-68znj\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") " pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.619045 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-ovsdbserver-nb\") pod \"dnsmasq-dns-dc9d58d7-68znj\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") " pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.619201 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-ovsdbserver-sb\") pod \"dnsmasq-dns-dc9d58d7-68znj\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") " pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.643880 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdhh8\" (UniqueName: 
\"kubernetes.io/projected/45057cd4-4145-481a-ae2d-ebf2499db002-kube-api-access-jdhh8\") pod \"dnsmasq-dns-dc9d58d7-68znj\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") " pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.708805 4958 generic.go:334] "Generic (PLEG): container finished" podID="8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73" containerID="6042030641dd242e771dc6633c2e14794d6f61c386e530267b2484e7892e6bc9" exitCode=0 Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.709013 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77597f887-jzfqg" podUID="cbc98b67-c003-4e9e-ad52-f4a5b66106fd" containerName="dnsmasq-dns" containerID="cri-o://8a71e6180503e43f0e483e9b8f1880812de2a1bcd3d43a31177a487e87b96415" gracePeriod=10 Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.709312 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" event={"ID":"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73","Type":"ContainerDied","Data":"6042030641dd242e771dc6633c2e14794d6f61c386e530267b2484e7892e6bc9"} Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.736426 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.801348 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.802028 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.824422 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-dns-svc\") pod \"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73\" (UID: \"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73\") " Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.824575 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzfrh\" (UniqueName: \"kubernetes.io/projected/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-kube-api-access-kzfrh\") pod \"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73\" (UID: \"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73\") " Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.824612 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-config\") pod \"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73\" (UID: \"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73\") " Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.835206 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-kube-api-access-kzfrh" (OuterVolumeSpecName: "kube-api-access-kzfrh") pod "8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73" (UID: "8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73"). InnerVolumeSpecName "kube-api-access-kzfrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.868778 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73" (UID: "8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.870979 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.871031 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.881548 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tvw22"] Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.896732 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-config" (OuterVolumeSpecName: "config") pod "8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73" (UID: "8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.926299 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.926340 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzfrh\" (UniqueName: \"kubernetes.io/projected/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-kube-api-access-kzfrh\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.926355 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.964264 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 08 06:51:46 crc kubenswrapper[4958]: E1008 06:51:46.965702 4958 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73" containerName="dnsmasq-dns" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.965724 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73" containerName="dnsmasq-dns" Oct 08 06:51:46 crc kubenswrapper[4958]: E1008 06:51:46.965760 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73" containerName="init" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.965767 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73" containerName="init" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.965939 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73" containerName="dnsmasq-dns" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.967626 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.967648 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.967771 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.970121 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.970284 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.970384 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-ptsmx" Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.970665 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 08 06:51:46 crc kubenswrapper[4958]: I1008 06:51:46.970831 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.001374 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-f6fs4"] Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.028671 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe1279cb-5369-4347-9fc9-d598103536a9-scripts\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.029489 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.029720 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/fe1279cb-5369-4347-9fc9-d598103536a9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.029748 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.030064 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1279cb-5369-4347-9fc9-d598103536a9-config\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.030257 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.030285 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh92x\" (UniqueName: \"kubernetes.io/projected/fe1279cb-5369-4347-9fc9-d598103536a9-kube-api-access-zh92x\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.131260 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.131302 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh92x\" (UniqueName: \"kubernetes.io/projected/fe1279cb-5369-4347-9fc9-d598103536a9-kube-api-access-zh92x\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.131351 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe1279cb-5369-4347-9fc9-d598103536a9-scripts\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.131370 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.131391 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fe1279cb-5369-4347-9fc9-d598103536a9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.131408 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.131425 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1279cb-5369-4347-9fc9-d598103536a9-config\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.132517 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1279cb-5369-4347-9fc9-d598103536a9-config\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.133187 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fe1279cb-5369-4347-9fc9-d598103536a9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.133666 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe1279cb-5369-4347-9fc9-d598103536a9-scripts\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.136314 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.136502 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.140580 
4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.149504 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh92x\" (UniqueName: \"kubernetes.io/projected/fe1279cb-5369-4347-9fc9-d598103536a9-kube-api-access-zh92x\") pod \"ovn-northd-0\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.149725 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77597f887-jzfqg" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.232965 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-dns-svc\") pod \"cbc98b67-c003-4e9e-ad52-f4a5b66106fd\" (UID: \"cbc98b67-c003-4e9e-ad52-f4a5b66106fd\") " Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.233316 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-config\") pod \"cbc98b67-c003-4e9e-ad52-f4a5b66106fd\" (UID: \"cbc98b67-c003-4e9e-ad52-f4a5b66106fd\") " Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.233395 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9r9f\" (UniqueName: \"kubernetes.io/projected/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-kube-api-access-j9r9f\") pod \"cbc98b67-c003-4e9e-ad52-f4a5b66106fd\" (UID: \"cbc98b67-c003-4e9e-ad52-f4a5b66106fd\") " Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.238620 4958 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-kube-api-access-j9r9f" (OuterVolumeSpecName: "kube-api-access-j9r9f") pod "cbc98b67-c003-4e9e-ad52-f4a5b66106fd" (UID: "cbc98b67-c003-4e9e-ad52-f4a5b66106fd"). InnerVolumeSpecName "kube-api-access-j9r9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.267072 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cbc98b67-c003-4e9e-ad52-f4a5b66106fd" (UID: "cbc98b67-c003-4e9e-ad52-f4a5b66106fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.274136 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-config" (OuterVolumeSpecName: "config") pod "cbc98b67-c003-4e9e-ad52-f4a5b66106fd" (UID: "cbc98b67-c003-4e9e-ad52-f4a5b66106fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.291622 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.336442 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.336503 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.336524 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9r9f\" (UniqueName: \"kubernetes.io/projected/cbc98b67-c003-4e9e-ad52-f4a5b66106fd-kube-api-access-j9r9f\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.360554 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-68znj"] Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.717792 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc9d58d7-68znj" event={"ID":"45057cd4-4145-481a-ae2d-ebf2499db002","Type":"ContainerDied","Data":"ec4f294888d20ab5c74dd52f7c42baeb35ae80694217a6cd02f0c1e925dc3c59"} Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.717638 4958 generic.go:334] "Generic (PLEG): container finished" podID="45057cd4-4145-481a-ae2d-ebf2499db002" containerID="ec4f294888d20ab5c74dd52f7c42baeb35ae80694217a6cd02f0c1e925dc3c59" exitCode=0 Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.718629 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc9d58d7-68znj" event={"ID":"45057cd4-4145-481a-ae2d-ebf2499db002","Type":"ContainerStarted","Data":"d19fa9fcd750818ad45c5ebc5892435706287d18f21ffdd6f53b276bf5cfa2d8"} Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.722148 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-controller-metrics-tvw22" event={"ID":"5971bc9c-45ee-4ccb-aef5-290f51ac13ba","Type":"ContainerStarted","Data":"f61ab36dcee33b75ccc3971394dfde5a04ad577077ba1fea21e678472718c630"} Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.722213 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tvw22" event={"ID":"5971bc9c-45ee-4ccb-aef5-290f51ac13ba","Type":"ContainerStarted","Data":"fa9209f4d60550647f36c991017dd4905424536502aed8900172c404fe779aa3"} Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.724235 4958 generic.go:334] "Generic (PLEG): container finished" podID="e9e3ead6-373e-402f-9180-bd64edf7dcb5" containerID="3a9c1b75668069a94b0d305bb9462e3755e3ca8ea13726e686f169a35afa6228" exitCode=0 Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.724330 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" event={"ID":"e9e3ead6-373e-402f-9180-bd64edf7dcb5","Type":"ContainerDied","Data":"3a9c1b75668069a94b0d305bb9462e3755e3ca8ea13726e686f169a35afa6228"} Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.724398 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" event={"ID":"e9e3ead6-373e-402f-9180-bd64edf7dcb5","Type":"ContainerStarted","Data":"076e10467d14435b2d8ef6836df74c04164c03953819544a414eb9d551d63a54"} Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.727279 4958 generic.go:334] "Generic (PLEG): container finished" podID="cbc98b67-c003-4e9e-ad52-f4a5b66106fd" containerID="8a71e6180503e43f0e483e9b8f1880812de2a1bcd3d43a31177a487e87b96415" exitCode=0 Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.727326 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-jzfqg" event={"ID":"cbc98b67-c003-4e9e-ad52-f4a5b66106fd","Type":"ContainerDied","Data":"8a71e6180503e43f0e483e9b8f1880812de2a1bcd3d43a31177a487e87b96415"} Oct 08 06:51:47 crc 
kubenswrapper[4958]: I1008 06:51:47.727346 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-jzfqg" event={"ID":"cbc98b67-c003-4e9e-ad52-f4a5b66106fd","Type":"ContainerDied","Data":"41f2246261b392655aff755d6192172e16cee2086ad82466e06eefbd48afc837"} Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.727365 4958 scope.go:117] "RemoveContainer" containerID="8a71e6180503e43f0e483e9b8f1880812de2a1bcd3d43a31177a487e87b96415" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.727469 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77597f887-jzfqg" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.733802 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" event={"ID":"8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73","Type":"ContainerDied","Data":"7dbd3df062b83be66c3ae4b66733e95f1f02c586f363c007b40bbd2ec219c44e"} Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.733898 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8575fc99d7-rt64c" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.753788 4958 scope.go:117] "RemoveContainer" containerID="e0d6584d08531af5add3d2a0328376824c486cb1a4d41b81db00bf695bd179b3" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.772406 4958 scope.go:117] "RemoveContainer" containerID="8a71e6180503e43f0e483e9b8f1880812de2a1bcd3d43a31177a487e87b96415" Oct 08 06:51:47 crc kubenswrapper[4958]: E1008 06:51:47.772782 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a71e6180503e43f0e483e9b8f1880812de2a1bcd3d43a31177a487e87b96415\": container with ID starting with 8a71e6180503e43f0e483e9b8f1880812de2a1bcd3d43a31177a487e87b96415 not found: ID does not exist" containerID="8a71e6180503e43f0e483e9b8f1880812de2a1bcd3d43a31177a487e87b96415" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.772810 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a71e6180503e43f0e483e9b8f1880812de2a1bcd3d43a31177a487e87b96415"} err="failed to get container status \"8a71e6180503e43f0e483e9b8f1880812de2a1bcd3d43a31177a487e87b96415\": rpc error: code = NotFound desc = could not find container \"8a71e6180503e43f0e483e9b8f1880812de2a1bcd3d43a31177a487e87b96415\": container with ID starting with 8a71e6180503e43f0e483e9b8f1880812de2a1bcd3d43a31177a487e87b96415 not found: ID does not exist" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.772848 4958 scope.go:117] "RemoveContainer" containerID="e0d6584d08531af5add3d2a0328376824c486cb1a4d41b81db00bf695bd179b3" Oct 08 06:51:47 crc kubenswrapper[4958]: E1008 06:51:47.773142 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0d6584d08531af5add3d2a0328376824c486cb1a4d41b81db00bf695bd179b3\": container with ID starting with 
e0d6584d08531af5add3d2a0328376824c486cb1a4d41b81db00bf695bd179b3 not found: ID does not exist" containerID="e0d6584d08531af5add3d2a0328376824c486cb1a4d41b81db00bf695bd179b3" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.773232 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0d6584d08531af5add3d2a0328376824c486cb1a4d41b81db00bf695bd179b3"} err="failed to get container status \"e0d6584d08531af5add3d2a0328376824c486cb1a4d41b81db00bf695bd179b3\": rpc error: code = NotFound desc = could not find container \"e0d6584d08531af5add3d2a0328376824c486cb1a4d41b81db00bf695bd179b3\": container with ID starting with e0d6584d08531af5add3d2a0328376824c486cb1a4d41b81db00bf695bd179b3 not found: ID does not exist" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.773261 4958 scope.go:117] "RemoveContainer" containerID="6042030641dd242e771dc6633c2e14794d6f61c386e530267b2484e7892e6bc9" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.797390 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8575fc99d7-rt64c"] Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.810553 4958 scope.go:117] "RemoveContainer" containerID="b3571135ce73aad6bffab3e31245ba3ea03f5a31be902b37cbb6ab939405f8f1" Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.822588 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8575fc99d7-rt64c"] Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.828800 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77597f887-jzfqg"] Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.853266 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77597f887-jzfqg"] Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.862004 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 08 06:51:47 crc kubenswrapper[4958]: I1008 06:51:47.863526 4958 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-tvw22" podStartSLOduration=1.86350536 podStartE2EDuration="1.86350536s" podCreationTimestamp="2025-10-08 06:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:51:47.822537504 +0000 UTC m=+1050.952230105" watchObservedRunningTime="2025-10-08 06:51:47.86350536 +0000 UTC m=+1050.993197971" Oct 08 06:51:48 crc kubenswrapper[4958]: E1008 06:51:48.042230 4958 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.115:60660->38.102.83.115:46157: write tcp 38.102.83.115:60660->38.102.83.115:46157: write: broken pipe Oct 08 06:51:48 crc kubenswrapper[4958]: I1008 06:51:48.088098 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 08 06:51:48 crc kubenswrapper[4958]: I1008 06:51:48.746027 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fe1279cb-5369-4347-9fc9-d598103536a9","Type":"ContainerStarted","Data":"40764b3ce1a4474cb80450624f247364d57e0d77b952d3fae60f44503319b525"} Oct 08 06:51:48 crc kubenswrapper[4958]: I1008 06:51:48.752839 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc9d58d7-68znj" event={"ID":"45057cd4-4145-481a-ae2d-ebf2499db002","Type":"ContainerStarted","Data":"224d2b087ec9c0e7b3c212dedf9c61134e51997b6183561462a6eca28b13ec5a"} Oct 08 06:51:48 crc kubenswrapper[4958]: I1008 06:51:48.753142 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:48 crc kubenswrapper[4958]: I1008 06:51:48.756441 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" 
event={"ID":"e9e3ead6-373e-402f-9180-bd64edf7dcb5","Type":"ContainerStarted","Data":"39677ef6dc9497539ec9e650dbe8144cdab90f8303ff39c25a0178050d510986"} Oct 08 06:51:48 crc kubenswrapper[4958]: I1008 06:51:48.756696 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:48 crc kubenswrapper[4958]: I1008 06:51:48.773968 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dc9d58d7-68znj" podStartSLOduration=2.773936821 podStartE2EDuration="2.773936821s" podCreationTimestamp="2025-10-08 06:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:51:48.770407795 +0000 UTC m=+1051.900100416" watchObservedRunningTime="2025-10-08 06:51:48.773936821 +0000 UTC m=+1051.903629422" Oct 08 06:51:48 crc kubenswrapper[4958]: I1008 06:51:48.790439 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" podStartSLOduration=2.790421286 podStartE2EDuration="2.790421286s" podCreationTimestamp="2025-10-08 06:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:51:48.786403537 +0000 UTC m=+1051.916096138" watchObservedRunningTime="2025-10-08 06:51:48.790421286 +0000 UTC m=+1051.920113877" Oct 08 06:51:48 crc kubenswrapper[4958]: I1008 06:51:48.949663 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 08 06:51:49 crc kubenswrapper[4958]: I1008 06:51:49.017543 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 08 06:51:49 crc kubenswrapper[4958]: I1008 06:51:49.587448 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73" 
path="/var/lib/kubelet/pods/8c2f2e88-bf5d-4e2d-a7d3-4258c1263c73/volumes" Oct 08 06:51:49 crc kubenswrapper[4958]: I1008 06:51:49.588502 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbc98b67-c003-4e9e-ad52-f4a5b66106fd" path="/var/lib/kubelet/pods/cbc98b67-c003-4e9e-ad52-f4a5b66106fd/volumes" Oct 08 06:51:49 crc kubenswrapper[4958]: I1008 06:51:49.764817 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fe1279cb-5369-4347-9fc9-d598103536a9","Type":"ContainerStarted","Data":"758f90af9edd2932102a3b7f9d7f4706f38591020b8ef2fec8d63a600cbdb538"} Oct 08 06:51:49 crc kubenswrapper[4958]: I1008 06:51:49.764867 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fe1279cb-5369-4347-9fc9-d598103536a9","Type":"ContainerStarted","Data":"1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b"} Oct 08 06:51:49 crc kubenswrapper[4958]: I1008 06:51:49.788918 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.387456832 podStartE2EDuration="3.788896414s" podCreationTimestamp="2025-10-08 06:51:46 +0000 UTC" firstStartedPulling="2025-10-08 06:51:47.822291188 +0000 UTC m=+1050.951983789" lastFinishedPulling="2025-10-08 06:51:49.22373077 +0000 UTC m=+1052.353423371" observedRunningTime="2025-10-08 06:51:49.783334474 +0000 UTC m=+1052.913027095" watchObservedRunningTime="2025-10-08 06:51:49.788896414 +0000 UTC m=+1052.918589015" Oct 08 06:51:49 crc kubenswrapper[4958]: I1008 06:51:49.939547 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-f6fs4"] Oct 08 06:51:49 crc kubenswrapper[4958]: I1008 06:51:49.965280 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-c45dc"] Oct 08 06:51:49 crc kubenswrapper[4958]: E1008 06:51:49.965880 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cbc98b67-c003-4e9e-ad52-f4a5b66106fd" containerName="dnsmasq-dns" Oct 08 06:51:49 crc kubenswrapper[4958]: I1008 06:51:49.965979 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc98b67-c003-4e9e-ad52-f4a5b66106fd" containerName="dnsmasq-dns" Oct 08 06:51:49 crc kubenswrapper[4958]: E1008 06:51:49.966068 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc98b67-c003-4e9e-ad52-f4a5b66106fd" containerName="init" Oct 08 06:51:49 crc kubenswrapper[4958]: I1008 06:51:49.966131 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc98b67-c003-4e9e-ad52-f4a5b66106fd" containerName="init" Oct 08 06:51:49 crc kubenswrapper[4958]: I1008 06:51:49.966414 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbc98b67-c003-4e9e-ad52-f4a5b66106fd" containerName="dnsmasq-dns" Oct 08 06:51:49 crc kubenswrapper[4958]: I1008 06:51:49.967471 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:49 crc kubenswrapper[4958]: I1008 06:51:49.971134 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-c45dc"] Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.009005 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-dns-svc\") pod \"dnsmasq-dns-7b587f8db7-c45dc\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.009053 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdk8t\" (UniqueName: \"kubernetes.io/projected/aecc5d72-3109-4787-aa1e-522e1bb7dda9-kube-api-access-rdk8t\") pod \"dnsmasq-dns-7b587f8db7-c45dc\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" 
Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.009083 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-config\") pod \"dnsmasq-dns-7b587f8db7-c45dc\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.009222 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-ovsdbserver-sb\") pod \"dnsmasq-dns-7b587f8db7-c45dc\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.009322 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-ovsdbserver-nb\") pod \"dnsmasq-dns-7b587f8db7-c45dc\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.011879 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.110144 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-dns-svc\") pod \"dnsmasq-dns-7b587f8db7-c45dc\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.110193 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdk8t\" (UniqueName: 
\"kubernetes.io/projected/aecc5d72-3109-4787-aa1e-522e1bb7dda9-kube-api-access-rdk8t\") pod \"dnsmasq-dns-7b587f8db7-c45dc\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.110221 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-config\") pod \"dnsmasq-dns-7b587f8db7-c45dc\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.110266 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-ovsdbserver-sb\") pod \"dnsmasq-dns-7b587f8db7-c45dc\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.110294 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-ovsdbserver-nb\") pod \"dnsmasq-dns-7b587f8db7-c45dc\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.111018 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-dns-svc\") pod \"dnsmasq-dns-7b587f8db7-c45dc\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.111283 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-config\") pod 
\"dnsmasq-dns-7b587f8db7-c45dc\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.111444 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-ovsdbserver-sb\") pod \"dnsmasq-dns-7b587f8db7-c45dc\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.111692 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-ovsdbserver-nb\") pod \"dnsmasq-dns-7b587f8db7-c45dc\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.125648 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdk8t\" (UniqueName: \"kubernetes.io/projected/aecc5d72-3109-4787-aa1e-522e1bb7dda9-kube-api-access-rdk8t\") pod \"dnsmasq-dns-7b587f8db7-c45dc\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.331174 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.773749 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.774128 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" podUID="e9e3ead6-373e-402f-9180-bd64edf7dcb5" containerName="dnsmasq-dns" containerID="cri-o://39677ef6dc9497539ec9e650dbe8144cdab90f8303ff39c25a0178050d510986" gracePeriod=10 Oct 08 06:51:50 crc kubenswrapper[4958]: I1008 06:51:50.800414 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-c45dc"] Oct 08 06:51:50 crc kubenswrapper[4958]: W1008 06:51:50.869325 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaecc5d72_3109_4787_aa1e_522e1bb7dda9.slice/crio-b187ce3ef36aef8989346d45c817936c6ef5f6c5647d2a4d44b6b0e662843c03 WatchSource:0}: Error finding container b187ce3ef36aef8989346d45c817936c6ef5f6c5647d2a4d44b6b0e662843c03: Status 404 returned error can't find the container with id b187ce3ef36aef8989346d45c817936c6ef5f6c5647d2a4d44b6b0e662843c03 Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.099651 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.125320 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.158917 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.159068 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.166153 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.166346 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.166493 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-xs6jj" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.166625 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.188274 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.314234 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.336987 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-lock\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.337212 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-cache\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.337368 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.337477 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.337546 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbbbv\" (UniqueName: \"kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-kube-api-access-gbbbv\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.438598 4958 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf5sd\" (UniqueName: \"kubernetes.io/projected/e9e3ead6-373e-402f-9180-bd64edf7dcb5-kube-api-access-hf5sd\") pod \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.438689 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-config\") pod \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.438736 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-dns-svc\") pod \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.438862 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-ovsdbserver-sb\") pod \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\" (UID: \"e9e3ead6-373e-402f-9180-bd64edf7dcb5\") " Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.439038 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.439141 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbbbv\" (UniqueName: \"kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-kube-api-access-gbbbv\") pod \"swift-storage-0\" (UID: 
\"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.439189 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-lock\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.439211 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-cache\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.439262 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: E1008 06:51:51.439404 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 08 06:51:51 crc kubenswrapper[4958]: E1008 06:51:51.439422 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 08 06:51:51 crc kubenswrapper[4958]: E1008 06:51:51.439464 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift podName:6c45aa0e-9caf-42e6-bfbb-59c802d81c98 nodeName:}" failed. No retries permitted until 2025-10-08 06:51:51.939447871 +0000 UTC m=+1055.069140472 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift") pod "swift-storage-0" (UID: "6c45aa0e-9caf-42e6-bfbb-59c802d81c98") : configmap "swift-ring-files" not found Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.440211 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.445014 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-lock\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.445755 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-cache\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.451162 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9e3ead6-373e-402f-9180-bd64edf7dcb5-kube-api-access-hf5sd" (OuterVolumeSpecName: "kube-api-access-hf5sd") pod "e9e3ead6-373e-402f-9180-bd64edf7dcb5" (UID: "e9e3ead6-373e-402f-9180-bd64edf7dcb5"). InnerVolumeSpecName "kube-api-access-hf5sd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.469047 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.469511 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbbbv\" (UniqueName: \"kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-kube-api-access-gbbbv\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.483539 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-config" (OuterVolumeSpecName: "config") pod "e9e3ead6-373e-402f-9180-bd64edf7dcb5" (UID: "e9e3ead6-373e-402f-9180-bd64edf7dcb5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.484574 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e9e3ead6-373e-402f-9180-bd64edf7dcb5" (UID: "e9e3ead6-373e-402f-9180-bd64edf7dcb5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.489779 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e9e3ead6-373e-402f-9180-bd64edf7dcb5" (UID: "e9e3ead6-373e-402f-9180-bd64edf7dcb5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.540669 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.540704 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf5sd\" (UniqueName: \"kubernetes.io/projected/e9e3ead6-373e-402f-9180-bd64edf7dcb5-kube-api-access-hf5sd\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.540714 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.540723 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9e3ead6-373e-402f-9180-bd64edf7dcb5-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.549690 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-nlpr4"] Oct 08 06:51:51 crc kubenswrapper[4958]: E1008 06:51:51.550020 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e3ead6-373e-402f-9180-bd64edf7dcb5" containerName="init" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.550037 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9e3ead6-373e-402f-9180-bd64edf7dcb5" containerName="init" Oct 08 06:51:51 crc kubenswrapper[4958]: E1008 06:51:51.550047 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9e3ead6-373e-402f-9180-bd64edf7dcb5" containerName="dnsmasq-dns" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.550053 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e9e3ead6-373e-402f-9180-bd64edf7dcb5" containerName="dnsmasq-dns" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.550188 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9e3ead6-373e-402f-9180-bd64edf7dcb5" containerName="dnsmasq-dns" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.550648 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.552555 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.552668 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.555241 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.590239 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nlpr4"] Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.596304 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-42fpn"] Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.597313 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.599816 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-nlpr4"] Oct 08 06:51:51 crc kubenswrapper[4958]: E1008 06:51:51.600486 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-ldnf4 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-nlpr4" podUID="c610acfd-df56-455f-904c-999e6d33d87c" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.604865 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-42fpn"] Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.642490 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-swiftconf\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.642532 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-dispersionconf\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.642560 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldnf4\" (UniqueName: \"kubernetes.io/projected/c610acfd-df56-455f-904c-999e6d33d87c-kube-api-access-ldnf4\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " 
pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.642772 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c610acfd-df56-455f-904c-999e6d33d87c-etc-swift\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.643016 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c610acfd-df56-455f-904c-999e6d33d87c-scripts\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.643092 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c610acfd-df56-455f-904c-999e6d33d87c-ring-data-devices\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.643171 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-combined-ca-bundle\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.744219 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-swiftconf\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " 
pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.744271 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-swiftconf\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.744297 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-dispersionconf\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.744324 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldnf4\" (UniqueName: \"kubernetes.io/projected/c610acfd-df56-455f-904c-999e6d33d87c-kube-api-access-ldnf4\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.744474 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bfa59ca9-7979-4892-90d5-6b4f8b374583-ring-data-devices\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.744596 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c610acfd-df56-455f-904c-999e6d33d87c-etc-swift\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" 
Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.744644 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zqqf\" (UniqueName: \"kubernetes.io/projected/bfa59ca9-7979-4892-90d5-6b4f8b374583-kube-api-access-4zqqf\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.744744 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-combined-ca-bundle\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.744843 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bfa59ca9-7979-4892-90d5-6b4f8b374583-etc-swift\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.744943 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c610acfd-df56-455f-904c-999e6d33d87c-scripts\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.745077 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa59ca9-7979-4892-90d5-6b4f8b374583-scripts\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 
crc kubenswrapper[4958]: I1008 06:51:51.745120 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c610acfd-df56-455f-904c-999e6d33d87c-ring-data-devices\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.745203 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-combined-ca-bundle\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.745279 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-dispersionconf\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.746704 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c610acfd-df56-455f-904c-999e6d33d87c-ring-data-devices\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.746867 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c610acfd-df56-455f-904c-999e6d33d87c-etc-swift\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.747656 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c610acfd-df56-455f-904c-999e6d33d87c-scripts\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.761278 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-dispersionconf\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.761456 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-swiftconf\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.762052 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-combined-ca-bundle\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.792185 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldnf4\" (UniqueName: \"kubernetes.io/projected/c610acfd-df56-455f-904c-999e6d33d87c-kube-api-access-ldnf4\") pod \"swift-ring-rebalance-nlpr4\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.800839 4958 generic.go:334] "Generic (PLEG): container finished" podID="aecc5d72-3109-4787-aa1e-522e1bb7dda9" 
containerID="b71aeb1391f5793f19265177d9172dab21dae4f05d16682e7701c6efb915d5d6" exitCode=0 Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.804713 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" event={"ID":"aecc5d72-3109-4787-aa1e-522e1bb7dda9","Type":"ContainerDied","Data":"b71aeb1391f5793f19265177d9172dab21dae4f05d16682e7701c6efb915d5d6"} Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.804775 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" event={"ID":"aecc5d72-3109-4787-aa1e-522e1bb7dda9","Type":"ContainerStarted","Data":"b187ce3ef36aef8989346d45c817936c6ef5f6c5647d2a4d44b6b0e662843c03"} Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.807780 4958 generic.go:334] "Generic (PLEG): container finished" podID="e9e3ead6-373e-402f-9180-bd64edf7dcb5" containerID="39677ef6dc9497539ec9e650dbe8144cdab90f8303ff39c25a0178050d510986" exitCode=0 Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.807859 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.807897 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.807990 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" event={"ID":"e9e3ead6-373e-402f-9180-bd64edf7dcb5","Type":"ContainerDied","Data":"39677ef6dc9497539ec9e650dbe8144cdab90f8303ff39c25a0178050d510986"} Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.808027 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-f6fs4" event={"ID":"e9e3ead6-373e-402f-9180-bd64edf7dcb5","Type":"ContainerDied","Data":"076e10467d14435b2d8ef6836df74c04164c03953819544a414eb9d551d63a54"} Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.808056 4958 scope.go:117] "RemoveContainer" containerID="39677ef6dc9497539ec9e650dbe8144cdab90f8303ff39c25a0178050d510986" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.831525 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.847273 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-combined-ca-bundle\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.847397 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bfa59ca9-7979-4892-90d5-6b4f8b374583-etc-swift\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.847470 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/bfa59ca9-7979-4892-90d5-6b4f8b374583-scripts\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.847539 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-dispersionconf\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.847582 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-swiftconf\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.847675 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bfa59ca9-7979-4892-90d5-6b4f8b374583-ring-data-devices\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.847713 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zqqf\" (UniqueName: \"kubernetes.io/projected/bfa59ca9-7979-4892-90d5-6b4f8b374583-kube-api-access-4zqqf\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.848731 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bfa59ca9-7979-4892-90d5-6b4f8b374583-etc-swift\") pod 
\"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.848927 4958 scope.go:117] "RemoveContainer" containerID="3a9c1b75668069a94b0d305bb9462e3755e3ca8ea13726e686f169a35afa6228" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.849336 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa59ca9-7979-4892-90d5-6b4f8b374583-scripts\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.849524 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bfa59ca9-7979-4892-90d5-6b4f8b374583-ring-data-devices\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.849604 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-f6fs4"] Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.851389 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-dispersionconf\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.854278 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-swiftconf\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 
06:51:51.856974 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-combined-ca-bundle\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.867534 4958 scope.go:117] "RemoveContainer" containerID="39677ef6dc9497539ec9e650dbe8144cdab90f8303ff39c25a0178050d510986" Oct 08 06:51:51 crc kubenswrapper[4958]: E1008 06:51:51.868050 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39677ef6dc9497539ec9e650dbe8144cdab90f8303ff39c25a0178050d510986\": container with ID starting with 39677ef6dc9497539ec9e650dbe8144cdab90f8303ff39c25a0178050d510986 not found: ID does not exist" containerID="39677ef6dc9497539ec9e650dbe8144cdab90f8303ff39c25a0178050d510986" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.868100 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39677ef6dc9497539ec9e650dbe8144cdab90f8303ff39c25a0178050d510986"} err="failed to get container status \"39677ef6dc9497539ec9e650dbe8144cdab90f8303ff39c25a0178050d510986\": rpc error: code = NotFound desc = could not find container \"39677ef6dc9497539ec9e650dbe8144cdab90f8303ff39c25a0178050d510986\": container with ID starting with 39677ef6dc9497539ec9e650dbe8144cdab90f8303ff39c25a0178050d510986 not found: ID does not exist" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.868135 4958 scope.go:117] "RemoveContainer" containerID="3a9c1b75668069a94b0d305bb9462e3755e3ca8ea13726e686f169a35afa6228" Oct 08 06:51:51 crc kubenswrapper[4958]: E1008 06:51:51.868521 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3a9c1b75668069a94b0d305bb9462e3755e3ca8ea13726e686f169a35afa6228\": container with ID starting with 3a9c1b75668069a94b0d305bb9462e3755e3ca8ea13726e686f169a35afa6228 not found: ID does not exist" containerID="3a9c1b75668069a94b0d305bb9462e3755e3ca8ea13726e686f169a35afa6228" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.868563 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a9c1b75668069a94b0d305bb9462e3755e3ca8ea13726e686f169a35afa6228"} err="failed to get container status \"3a9c1b75668069a94b0d305bb9462e3755e3ca8ea13726e686f169a35afa6228\": rpc error: code = NotFound desc = could not find container \"3a9c1b75668069a94b0d305bb9462e3755e3ca8ea13726e686f169a35afa6228\": container with ID starting with 3a9c1b75668069a94b0d305bb9462e3755e3ca8ea13726e686f169a35afa6228 not found: ID does not exist" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.869138 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-f6fs4"] Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.875226 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zqqf\" (UniqueName: \"kubernetes.io/projected/bfa59ca9-7979-4892-90d5-6b4f8b374583-kube-api-access-4zqqf\") pod \"swift-ring-rebalance-42fpn\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") " pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.915964 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-42fpn" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.957435 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-combined-ca-bundle\") pod \"c610acfd-df56-455f-904c-999e6d33d87c\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.957500 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c610acfd-df56-455f-904c-999e6d33d87c-scripts\") pod \"c610acfd-df56-455f-904c-999e6d33d87c\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.957597 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldnf4\" (UniqueName: \"kubernetes.io/projected/c610acfd-df56-455f-904c-999e6d33d87c-kube-api-access-ldnf4\") pod \"c610acfd-df56-455f-904c-999e6d33d87c\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.957641 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c610acfd-df56-455f-904c-999e6d33d87c-ring-data-devices\") pod \"c610acfd-df56-455f-904c-999e6d33d87c\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.957743 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-swiftconf\") pod \"c610acfd-df56-455f-904c-999e6d33d87c\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.957839 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-dispersionconf\") pod \"c610acfd-df56-455f-904c-999e6d33d87c\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.957906 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c610acfd-df56-455f-904c-999e6d33d87c-etc-swift\") pod \"c610acfd-df56-455f-904c-999e6d33d87c\" (UID: \"c610acfd-df56-455f-904c-999e6d33d87c\") " Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.958339 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c610acfd-df56-455f-904c-999e6d33d87c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c610acfd-df56-455f-904c-999e6d33d87c" (UID: "c610acfd-df56-455f-904c-999e6d33d87c"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.958586 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.958575 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c610acfd-df56-455f-904c-999e6d33d87c-scripts" (OuterVolumeSpecName: "scripts") pod "c610acfd-df56-455f-904c-999e6d33d87c" (UID: "c610acfd-df56-455f-904c-999e6d33d87c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.958899 4958 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c610acfd-df56-455f-904c-999e6d33d87c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.958928 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c610acfd-df56-455f-904c-999e6d33d87c-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.959172 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c610acfd-df56-455f-904c-999e6d33d87c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c610acfd-df56-455f-904c-999e6d33d87c" (UID: "c610acfd-df56-455f-904c-999e6d33d87c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:51:51 crc kubenswrapper[4958]: E1008 06:51:51.959802 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 08 06:51:51 crc kubenswrapper[4958]: E1008 06:51:51.959834 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 08 06:51:51 crc kubenswrapper[4958]: E1008 06:51:51.959895 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift podName:6c45aa0e-9caf-42e6-bfbb-59c802d81c98 nodeName:}" failed. No retries permitted until 2025-10-08 06:51:52.959871516 +0000 UTC m=+1056.089564187 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift") pod "swift-storage-0" (UID: "6c45aa0e-9caf-42e6-bfbb-59c802d81c98") : configmap "swift-ring-files" not found Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.963890 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c610acfd-df56-455f-904c-999e6d33d87c" (UID: "c610acfd-df56-455f-904c-999e6d33d87c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.964040 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c610acfd-df56-455f-904c-999e6d33d87c-kube-api-access-ldnf4" (OuterVolumeSpecName: "kube-api-access-ldnf4") pod "c610acfd-df56-455f-904c-999e6d33d87c" (UID: "c610acfd-df56-455f-904c-999e6d33d87c"). InnerVolumeSpecName "kube-api-access-ldnf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.967872 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c610acfd-df56-455f-904c-999e6d33d87c" (UID: "c610acfd-df56-455f-904c-999e6d33d87c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:51:51 crc kubenswrapper[4958]: I1008 06:51:51.970743 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c610acfd-df56-455f-904c-999e6d33d87c" (UID: "c610acfd-df56-455f-904c-999e6d33d87c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:51:52 crc kubenswrapper[4958]: I1008 06:51:52.061132 4958 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:52 crc kubenswrapper[4958]: I1008 06:51:52.061409 4958 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c610acfd-df56-455f-904c-999e6d33d87c-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:52 crc kubenswrapper[4958]: I1008 06:51:52.061421 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:52 crc kubenswrapper[4958]: I1008 06:51:52.061437 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldnf4\" (UniqueName: \"kubernetes.io/projected/c610acfd-df56-455f-904c-999e6d33d87c-kube-api-access-ldnf4\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:52 crc kubenswrapper[4958]: I1008 06:51:52.061452 4958 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c610acfd-df56-455f-904c-999e6d33d87c-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 08 06:51:52 crc kubenswrapper[4958]: W1008 06:51:52.395244 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfa59ca9_7979_4892_90d5_6b4f8b374583.slice/crio-d6ea36da72e91d937bdeba56ac28ef99a4c06761b6cbf4bae2a68e219877a888 WatchSource:0}: Error finding container d6ea36da72e91d937bdeba56ac28ef99a4c06761b6cbf4bae2a68e219877a888: Status 404 returned error can't find the container with id d6ea36da72e91d937bdeba56ac28ef99a4c06761b6cbf4bae2a68e219877a888 Oct 08 06:51:52 crc kubenswrapper[4958]: I1008 06:51:52.395448 4958 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-42fpn"] Oct 08 06:51:52 crc kubenswrapper[4958]: I1008 06:51:52.828649 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" event={"ID":"aecc5d72-3109-4787-aa1e-522e1bb7dda9","Type":"ContainerStarted","Data":"47c22f0c3aadfa42bf6f78218f5d4c8eb40a2ebb79bdf78880418e2c33cb5304"} Oct 08 06:51:52 crc kubenswrapper[4958]: I1008 06:51:52.830624 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:51:52 crc kubenswrapper[4958]: I1008 06:51:52.833600 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nlpr4" Oct 08 06:51:52 crc kubenswrapper[4958]: I1008 06:51:52.834128 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-42fpn" event={"ID":"bfa59ca9-7979-4892-90d5-6b4f8b374583","Type":"ContainerStarted","Data":"d6ea36da72e91d937bdeba56ac28ef99a4c06761b6cbf4bae2a68e219877a888"} Oct 08 06:51:52 crc kubenswrapper[4958]: I1008 06:51:52.866816 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" podStartSLOduration=3.866790233 podStartE2EDuration="3.866790233s" podCreationTimestamp="2025-10-08 06:51:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:51:52.862810865 +0000 UTC m=+1055.992503496" watchObservedRunningTime="2025-10-08 06:51:52.866790233 +0000 UTC m=+1055.996482834" Oct 08 06:51:52 crc kubenswrapper[4958]: I1008 06:51:52.898179 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-nlpr4"] Oct 08 06:51:52 crc kubenswrapper[4958]: I1008 06:51:52.902341 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-nlpr4"] Oct 08 06:51:52 crc 
kubenswrapper[4958]: I1008 06:51:52.980876 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:52 crc kubenswrapper[4958]: E1008 06:51:52.981614 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 08 06:51:52 crc kubenswrapper[4958]: E1008 06:51:52.981716 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 08 06:51:52 crc kubenswrapper[4958]: E1008 06:51:52.981801 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift podName:6c45aa0e-9caf-42e6-bfbb-59c802d81c98 nodeName:}" failed. No retries permitted until 2025-10-08 06:51:54.981771976 +0000 UTC m=+1058.111464577 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift") pod "swift-storage-0" (UID: "6c45aa0e-9caf-42e6-bfbb-59c802d81c98") : configmap "swift-ring-files" not found Oct 08 06:51:53 crc kubenswrapper[4958]: I1008 06:51:53.591320 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c610acfd-df56-455f-904c-999e6d33d87c" path="/var/lib/kubelet/pods/c610acfd-df56-455f-904c-999e6d33d87c/volumes" Oct 08 06:51:53 crc kubenswrapper[4958]: I1008 06:51:53.595290 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9e3ead6-373e-402f-9180-bd64edf7dcb5" path="/var/lib/kubelet/pods/e9e3ead6-373e-402f-9180-bd64edf7dcb5/volumes" Oct 08 06:51:55 crc kubenswrapper[4958]: I1008 06:51:55.027688 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0" Oct 08 06:51:55 crc kubenswrapper[4958]: E1008 06:51:55.027981 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 08 06:51:55 crc kubenswrapper[4958]: E1008 06:51:55.028177 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 08 06:51:55 crc kubenswrapper[4958]: E1008 06:51:55.028241 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift podName:6c45aa0e-9caf-42e6-bfbb-59c802d81c98 nodeName:}" failed. No retries permitted until 2025-10-08 06:51:59.028222369 +0000 UTC m=+1062.157914970 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift") pod "swift-storage-0" (UID: "6c45aa0e-9caf-42e6-bfbb-59c802d81c98") : configmap "swift-ring-files" not found Oct 08 06:51:55 crc kubenswrapper[4958]: I1008 06:51:55.865880 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-42fpn" event={"ID":"bfa59ca9-7979-4892-90d5-6b4f8b374583","Type":"ContainerStarted","Data":"1c5177a5e7f56e770d75063c504837fbc10992e43fba0322e87a21a8777d8212"} Oct 08 06:51:55 crc kubenswrapper[4958]: I1008 06:51:55.890862 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-42fpn" podStartSLOduration=1.7176668579999999 podStartE2EDuration="4.890838139s" podCreationTimestamp="2025-10-08 06:51:51 +0000 UTC" firstStartedPulling="2025-10-08 06:51:52.398412072 +0000 UTC m=+1055.528104673" lastFinishedPulling="2025-10-08 06:51:55.571583343 +0000 UTC m=+1058.701275954" observedRunningTime="2025-10-08 06:51:55.889033181 +0000 UTC m=+1059.018725812" watchObservedRunningTime="2025-10-08 06:51:55.890838139 +0000 UTC m=+1059.020530780" Oct 08 06:51:56 crc kubenswrapper[4958]: I1008 06:51:56.803106 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dc9d58d7-68znj" Oct 08 06:51:57 crc kubenswrapper[4958]: I1008 06:51:57.775161 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-5vd9f"] Oct 08 06:51:57 crc kubenswrapper[4958]: I1008 06:51:57.776118 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5vd9f" Oct 08 06:51:57 crc kubenswrapper[4958]: I1008 06:51:57.797365 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5vd9f"] Oct 08 06:51:57 crc kubenswrapper[4958]: I1008 06:51:57.887803 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v7zd\" (UniqueName: \"kubernetes.io/projected/4afa2864-e8e7-4789-b6d1-e1608724bcff-kube-api-access-7v7zd\") pod \"keystone-db-create-5vd9f\" (UID: \"4afa2864-e8e7-4789-b6d1-e1608724bcff\") " pod="openstack/keystone-db-create-5vd9f" Oct 08 06:51:57 crc kubenswrapper[4958]: I1008 06:51:57.990455 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v7zd\" (UniqueName: \"kubernetes.io/projected/4afa2864-e8e7-4789-b6d1-e1608724bcff-kube-api-access-7v7zd\") pod \"keystone-db-create-5vd9f\" (UID: \"4afa2864-e8e7-4789-b6d1-e1608724bcff\") " pod="openstack/keystone-db-create-5vd9f" Oct 08 06:51:57 crc kubenswrapper[4958]: I1008 06:51:57.992451 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-z2vpl"] Oct 08 06:51:57 crc kubenswrapper[4958]: I1008 06:51:57.994326 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-z2vpl" Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.019167 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v7zd\" (UniqueName: \"kubernetes.io/projected/4afa2864-e8e7-4789-b6d1-e1608724bcff-kube-api-access-7v7zd\") pod \"keystone-db-create-5vd9f\" (UID: \"4afa2864-e8e7-4789-b6d1-e1608724bcff\") " pod="openstack/keystone-db-create-5vd9f" Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.037827 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-z2vpl"] Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.091767 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9n5d\" (UniqueName: \"kubernetes.io/projected/31307abc-8d4b-4e83-a624-1720b6b342c4-kube-api-access-r9n5d\") pod \"placement-db-create-z2vpl\" (UID: \"31307abc-8d4b-4e83-a624-1720b6b342c4\") " pod="openstack/placement-db-create-z2vpl" Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.101406 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5vd9f" Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.194675 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9n5d\" (UniqueName: \"kubernetes.io/projected/31307abc-8d4b-4e83-a624-1720b6b342c4-kube-api-access-r9n5d\") pod \"placement-db-create-z2vpl\" (UID: \"31307abc-8d4b-4e83-a624-1720b6b342c4\") " pod="openstack/placement-db-create-z2vpl" Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.230912 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4rgqh"] Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.232198 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4rgqh" Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.239274 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4rgqh"] Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.239861 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9n5d\" (UniqueName: \"kubernetes.io/projected/31307abc-8d4b-4e83-a624-1720b6b342c4-kube-api-access-r9n5d\") pod \"placement-db-create-z2vpl\" (UID: \"31307abc-8d4b-4e83-a624-1720b6b342c4\") " pod="openstack/placement-db-create-z2vpl" Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.320075 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-z2vpl" Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.399046 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh6xm\" (UniqueName: \"kubernetes.io/projected/1c4975c7-bc87-45a0-8bd9-878d3610ecc4-kube-api-access-dh6xm\") pod \"glance-db-create-4rgqh\" (UID: \"1c4975c7-bc87-45a0-8bd9-878d3610ecc4\") " pod="openstack/glance-db-create-4rgqh" Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.500844 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh6xm\" (UniqueName: \"kubernetes.io/projected/1c4975c7-bc87-45a0-8bd9-878d3610ecc4-kube-api-access-dh6xm\") pod \"glance-db-create-4rgqh\" (UID: \"1c4975c7-bc87-45a0-8bd9-878d3610ecc4\") " pod="openstack/glance-db-create-4rgqh" Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.516084 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh6xm\" (UniqueName: \"kubernetes.io/projected/1c4975c7-bc87-45a0-8bd9-878d3610ecc4-kube-api-access-dh6xm\") pod \"glance-db-create-4rgqh\" (UID: \"1c4975c7-bc87-45a0-8bd9-878d3610ecc4\") " pod="openstack/glance-db-create-4rgqh" Oct 08 06:51:58 crc 
kubenswrapper[4958]: I1008 06:51:58.543054 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5vd9f"]
Oct 08 06:51:58 crc kubenswrapper[4958]: W1008 06:51:58.552119 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4afa2864_e8e7_4789_b6d1_e1608724bcff.slice/crio-ef5bc074b75b6374e13b50932a944f1f1a5e6f261354596bd312b0a0662b4581 WatchSource:0}: Error finding container ef5bc074b75b6374e13b50932a944f1f1a5e6f261354596bd312b0a0662b4581: Status 404 returned error can't find the container with id ef5bc074b75b6374e13b50932a944f1f1a5e6f261354596bd312b0a0662b4581
Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.581284 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4rgqh"
Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.736557 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-z2vpl"]
Oct 08 06:51:58 crc kubenswrapper[4958]: W1008 06:51:58.750172 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31307abc_8d4b_4e83_a624_1720b6b342c4.slice/crio-c6c569ddbc09eb63ad1d72937af63ccfe63e520ba7bb1bc0f13ca8d658a07a23 WatchSource:0}: Error finding container c6c569ddbc09eb63ad1d72937af63ccfe63e520ba7bb1bc0f13ca8d658a07a23: Status 404 returned error can't find the container with id c6c569ddbc09eb63ad1d72937af63ccfe63e520ba7bb1bc0f13ca8d658a07a23
Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.932259 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-z2vpl" event={"ID":"31307abc-8d4b-4e83-a624-1720b6b342c4","Type":"ContainerStarted","Data":"c6c569ddbc09eb63ad1d72937af63ccfe63e520ba7bb1bc0f13ca8d658a07a23"}
Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.935009 4958 generic.go:334] "Generic (PLEG): container finished" podID="4afa2864-e8e7-4789-b6d1-e1608724bcff" containerID="0ccd56a1958ce0154664df950fcf2878b9a906bd10c575f921169414c7f923ef" exitCode=0
Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.935344 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5vd9f" event={"ID":"4afa2864-e8e7-4789-b6d1-e1608724bcff","Type":"ContainerDied","Data":"0ccd56a1958ce0154664df950fcf2878b9a906bd10c575f921169414c7f923ef"}
Oct 08 06:51:58 crc kubenswrapper[4958]: I1008 06:51:58.935367 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5vd9f" event={"ID":"4afa2864-e8e7-4789-b6d1-e1608724bcff","Type":"ContainerStarted","Data":"ef5bc074b75b6374e13b50932a944f1f1a5e6f261354596bd312b0a0662b4581"}
Oct 08 06:51:59 crc kubenswrapper[4958]: I1008 06:51:59.010404 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4rgqh"]
Oct 08 06:51:59 crc kubenswrapper[4958]: W1008 06:51:59.079076 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c4975c7_bc87_45a0_8bd9_878d3610ecc4.slice/crio-b93b09499d65b6a2c2cb78b64cc716389d8ec0a4822b00074a06dd1562bc3c60 WatchSource:0}: Error finding container b93b09499d65b6a2c2cb78b64cc716389d8ec0a4822b00074a06dd1562bc3c60: Status 404 returned error can't find the container with id b93b09499d65b6a2c2cb78b64cc716389d8ec0a4822b00074a06dd1562bc3c60
Oct 08 06:51:59 crc kubenswrapper[4958]: I1008 06:51:59.111877 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0"
Oct 08 06:51:59 crc kubenswrapper[4958]: E1008 06:51:59.112113 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 08 06:51:59 crc kubenswrapper[4958]: E1008 06:51:59.112132 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 08 06:51:59 crc kubenswrapper[4958]: E1008 06:51:59.112176 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift podName:6c45aa0e-9caf-42e6-bfbb-59c802d81c98 nodeName:}" failed. No retries permitted until 2025-10-08 06:52:07.112163681 +0000 UTC m=+1070.241856282 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift") pod "swift-storage-0" (UID: "6c45aa0e-9caf-42e6-bfbb-59c802d81c98") : configmap "swift-ring-files" not found
Oct 08 06:51:59 crc kubenswrapper[4958]: I1008 06:51:59.948875 4958 generic.go:334] "Generic (PLEG): container finished" podID="1c4975c7-bc87-45a0-8bd9-878d3610ecc4" containerID="ec623950d1af2bcac81bd3887205674263328f3f7bf9108b25835f232bfdd6aa" exitCode=0
Oct 08 06:51:59 crc kubenswrapper[4958]: I1008 06:51:59.949094 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4rgqh" event={"ID":"1c4975c7-bc87-45a0-8bd9-878d3610ecc4","Type":"ContainerDied","Data":"ec623950d1af2bcac81bd3887205674263328f3f7bf9108b25835f232bfdd6aa"}
Oct 08 06:51:59 crc kubenswrapper[4958]: I1008 06:51:59.949842 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4rgqh" event={"ID":"1c4975c7-bc87-45a0-8bd9-878d3610ecc4","Type":"ContainerStarted","Data":"b93b09499d65b6a2c2cb78b64cc716389d8ec0a4822b00074a06dd1562bc3c60"}
Oct 08 06:51:59 crc kubenswrapper[4958]: I1008 06:51:59.955840 4958 generic.go:334] "Generic (PLEG): container finished" podID="31307abc-8d4b-4e83-a624-1720b6b342c4" containerID="f2372075b46c0743cc70633e7c0ea3c807a4c5c06b80f014e70655df4dbd2021" exitCode=0
Oct 08 06:51:59 crc kubenswrapper[4958]: I1008 06:51:59.955910 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-z2vpl" event={"ID":"31307abc-8d4b-4e83-a624-1720b6b342c4","Type":"ContainerDied","Data":"f2372075b46c0743cc70633e7c0ea3c807a4c5c06b80f014e70655df4dbd2021"}
Oct 08 06:52:00 crc kubenswrapper[4958]: I1008 06:52:00.345045 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b587f8db7-c45dc"
Oct 08 06:52:00 crc kubenswrapper[4958]: I1008 06:52:00.403177 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-68znj"]
Oct 08 06:52:00 crc kubenswrapper[4958]: I1008 06:52:00.403365 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dc9d58d7-68znj" podUID="45057cd4-4145-481a-ae2d-ebf2499db002" containerName="dnsmasq-dns" containerID="cri-o://224d2b087ec9c0e7b3c212dedf9c61134e51997b6183561462a6eca28b13ec5a" gracePeriod=10
Oct 08 06:52:00 crc kubenswrapper[4958]: I1008 06:52:00.454272 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5vd9f"
Oct 08 06:52:00 crc kubenswrapper[4958]: I1008 06:52:00.553597 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v7zd\" (UniqueName: \"kubernetes.io/projected/4afa2864-e8e7-4789-b6d1-e1608724bcff-kube-api-access-7v7zd\") pod \"4afa2864-e8e7-4789-b6d1-e1608724bcff\" (UID: \"4afa2864-e8e7-4789-b6d1-e1608724bcff\") "
Oct 08 06:52:00 crc kubenswrapper[4958]: I1008 06:52:00.581203 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4afa2864-e8e7-4789-b6d1-e1608724bcff-kube-api-access-7v7zd" (OuterVolumeSpecName: "kube-api-access-7v7zd") pod "4afa2864-e8e7-4789-b6d1-e1608724bcff" (UID: "4afa2864-e8e7-4789-b6d1-e1608724bcff"). InnerVolumeSpecName "kube-api-access-7v7zd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 06:52:00 crc kubenswrapper[4958]: I1008 06:52:00.655758 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v7zd\" (UniqueName: \"kubernetes.io/projected/4afa2864-e8e7-4789-b6d1-e1608724bcff-kube-api-access-7v7zd\") on node \"crc\" DevicePath \"\""
Oct 08 06:52:00 crc kubenswrapper[4958]: I1008 06:52:00.990377 4958 generic.go:334] "Generic (PLEG): container finished" podID="45057cd4-4145-481a-ae2d-ebf2499db002" containerID="224d2b087ec9c0e7b3c212dedf9c61134e51997b6183561462a6eca28b13ec5a" exitCode=0
Oct 08 06:52:00 crc kubenswrapper[4958]: I1008 06:52:00.991503 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc9d58d7-68znj" event={"ID":"45057cd4-4145-481a-ae2d-ebf2499db002","Type":"ContainerDied","Data":"224d2b087ec9c0e7b3c212dedf9c61134e51997b6183561462a6eca28b13ec5a"}
Oct 08 06:52:00 crc kubenswrapper[4958]: I1008 06:52:00.993892 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5vd9f"
Oct 08 06:52:00 crc kubenswrapper[4958]: I1008 06:52:00.996421 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5vd9f" event={"ID":"4afa2864-e8e7-4789-b6d1-e1608724bcff","Type":"ContainerDied","Data":"ef5bc074b75b6374e13b50932a944f1f1a5e6f261354596bd312b0a0662b4581"}
Oct 08 06:52:00 crc kubenswrapper[4958]: I1008 06:52:00.996461 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef5bc074b75b6374e13b50932a944f1f1a5e6f261354596bd312b0a0662b4581"
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.378882 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-z2vpl"
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.454818 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4rgqh"
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.458432 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc9d58d7-68znj"
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.494455 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9n5d\" (UniqueName: \"kubernetes.io/projected/31307abc-8d4b-4e83-a624-1720b6b342c4-kube-api-access-r9n5d\") pod \"31307abc-8d4b-4e83-a624-1720b6b342c4\" (UID: \"31307abc-8d4b-4e83-a624-1720b6b342c4\") "
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.494769 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-ovsdbserver-nb\") pod \"45057cd4-4145-481a-ae2d-ebf2499db002\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") "
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.494844 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdhh8\" (UniqueName: \"kubernetes.io/projected/45057cd4-4145-481a-ae2d-ebf2499db002-kube-api-access-jdhh8\") pod \"45057cd4-4145-481a-ae2d-ebf2499db002\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") "
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.494929 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-dns-svc\") pod \"45057cd4-4145-481a-ae2d-ebf2499db002\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") "
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.495016 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh6xm\" (UniqueName: \"kubernetes.io/projected/1c4975c7-bc87-45a0-8bd9-878d3610ecc4-kube-api-access-dh6xm\") pod \"1c4975c7-bc87-45a0-8bd9-878d3610ecc4\" (UID: \"1c4975c7-bc87-45a0-8bd9-878d3610ecc4\") "
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.495090 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-ovsdbserver-sb\") pod \"45057cd4-4145-481a-ae2d-ebf2499db002\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") "
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.495207 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-config\") pod \"45057cd4-4145-481a-ae2d-ebf2499db002\" (UID: \"45057cd4-4145-481a-ae2d-ebf2499db002\") "
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.516117 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31307abc-8d4b-4e83-a624-1720b6b342c4-kube-api-access-r9n5d" (OuterVolumeSpecName: "kube-api-access-r9n5d") pod "31307abc-8d4b-4e83-a624-1720b6b342c4" (UID: "31307abc-8d4b-4e83-a624-1720b6b342c4"). InnerVolumeSpecName "kube-api-access-r9n5d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.520309 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c4975c7-bc87-45a0-8bd9-878d3610ecc4-kube-api-access-dh6xm" (OuterVolumeSpecName: "kube-api-access-dh6xm") pod "1c4975c7-bc87-45a0-8bd9-878d3610ecc4" (UID: "1c4975c7-bc87-45a0-8bd9-878d3610ecc4"). InnerVolumeSpecName "kube-api-access-dh6xm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.527927 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45057cd4-4145-481a-ae2d-ebf2499db002-kube-api-access-jdhh8" (OuterVolumeSpecName: "kube-api-access-jdhh8") pod "45057cd4-4145-481a-ae2d-ebf2499db002" (UID: "45057cd4-4145-481a-ae2d-ebf2499db002"). InnerVolumeSpecName "kube-api-access-jdhh8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.553744 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "45057cd4-4145-481a-ae2d-ebf2499db002" (UID: "45057cd4-4145-481a-ae2d-ebf2499db002"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.555824 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-config" (OuterVolumeSpecName: "config") pod "45057cd4-4145-481a-ae2d-ebf2499db002" (UID: "45057cd4-4145-481a-ae2d-ebf2499db002"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.560900 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "45057cd4-4145-481a-ae2d-ebf2499db002" (UID: "45057cd4-4145-481a-ae2d-ebf2499db002"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.573720 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45057cd4-4145-481a-ae2d-ebf2499db002" (UID: "45057cd4-4145-481a-ae2d-ebf2499db002"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.596712 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9n5d\" (UniqueName: \"kubernetes.io/projected/31307abc-8d4b-4e83-a624-1720b6b342c4-kube-api-access-r9n5d\") on node \"crc\" DevicePath \"\""
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.596740 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.596749 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdhh8\" (UniqueName: \"kubernetes.io/projected/45057cd4-4145-481a-ae2d-ebf2499db002-kube-api-access-jdhh8\") on node \"crc\" DevicePath \"\""
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.596758 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.596766 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh6xm\" (UniqueName: \"kubernetes.io/projected/1c4975c7-bc87-45a0-8bd9-878d3610ecc4-kube-api-access-dh6xm\") on node \"crc\" DevicePath \"\""
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.596775 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 08 06:52:01 crc kubenswrapper[4958]: I1008 06:52:01.596784 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45057cd4-4145-481a-ae2d-ebf2499db002-config\") on node \"crc\" DevicePath \"\""
Oct 08 06:52:02 crc kubenswrapper[4958]: I1008 06:52:02.002917 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4rgqh" event={"ID":"1c4975c7-bc87-45a0-8bd9-878d3610ecc4","Type":"ContainerDied","Data":"b93b09499d65b6a2c2cb78b64cc716389d8ec0a4822b00074a06dd1562bc3c60"}
Oct 08 06:52:02 crc kubenswrapper[4958]: I1008 06:52:02.003884 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b93b09499d65b6a2c2cb78b64cc716389d8ec0a4822b00074a06dd1562bc3c60"
Oct 08 06:52:02 crc kubenswrapper[4958]: I1008 06:52:02.002935 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4rgqh"
Oct 08 06:52:02 crc kubenswrapper[4958]: I1008 06:52:02.004923 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc9d58d7-68znj"
Oct 08 06:52:02 crc kubenswrapper[4958]: I1008 06:52:02.005001 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc9d58d7-68znj" event={"ID":"45057cd4-4145-481a-ae2d-ebf2499db002","Type":"ContainerDied","Data":"d19fa9fcd750818ad45c5ebc5892435706287d18f21ffdd6f53b276bf5cfa2d8"}
Oct 08 06:52:02 crc kubenswrapper[4958]: I1008 06:52:02.005074 4958 scope.go:117] "RemoveContainer" containerID="224d2b087ec9c0e7b3c212dedf9c61134e51997b6183561462a6eca28b13ec5a"
Oct 08 06:52:02 crc kubenswrapper[4958]: I1008 06:52:02.008034 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-z2vpl" event={"ID":"31307abc-8d4b-4e83-a624-1720b6b342c4","Type":"ContainerDied","Data":"c6c569ddbc09eb63ad1d72937af63ccfe63e520ba7bb1bc0f13ca8d658a07a23"}
Oct 08 06:52:02 crc kubenswrapper[4958]: I1008 06:52:02.008090 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6c569ddbc09eb63ad1d72937af63ccfe63e520ba7bb1bc0f13ca8d658a07a23"
Oct 08 06:52:02 crc kubenswrapper[4958]: I1008 06:52:02.008255 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-z2vpl"
Oct 08 06:52:02 crc kubenswrapper[4958]: I1008 06:52:02.032002 4958 scope.go:117] "RemoveContainer" containerID="ec4f294888d20ab5c74dd52f7c42baeb35ae80694217a6cd02f0c1e925dc3c59"
Oct 08 06:52:02 crc kubenswrapper[4958]: I1008 06:52:02.056601 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-68znj"]
Oct 08 06:52:02 crc kubenswrapper[4958]: I1008 06:52:02.069152 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-68znj"]
Oct 08 06:52:02 crc kubenswrapper[4958]: I1008 06:52:02.359228 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Oct 08 06:52:03 crc kubenswrapper[4958]: I1008 06:52:03.020138 4958 generic.go:334] "Generic (PLEG): container finished" podID="bfa59ca9-7979-4892-90d5-6b4f8b374583" containerID="1c5177a5e7f56e770d75063c504837fbc10992e43fba0322e87a21a8777d8212" exitCode=0
Oct 08 06:52:03 crc kubenswrapper[4958]: I1008 06:52:03.020253 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-42fpn" event={"ID":"bfa59ca9-7979-4892-90d5-6b4f8b374583","Type":"ContainerDied","Data":"1c5177a5e7f56e770d75063c504837fbc10992e43fba0322e87a21a8777d8212"}
Oct 08 06:52:03 crc kubenswrapper[4958]: I1008 06:52:03.592854 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45057cd4-4145-481a-ae2d-ebf2499db002" path="/var/lib/kubelet/pods/45057cd4-4145-481a-ae2d-ebf2499db002/volumes"
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.035982 4958 generic.go:334] "Generic (PLEG): container finished" podID="8f931d71-9f8f-4755-a793-ca326e423199" containerID="0e1fc1f9cf7a4831ad3481c4874067196938f233b081693bc7630a8e65c2a45c" exitCode=0
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.036182 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f931d71-9f8f-4755-a793-ca326e423199","Type":"ContainerDied","Data":"0e1fc1f9cf7a4831ad3481c4874067196938f233b081693bc7630a8e65c2a45c"}
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.041525 4958 generic.go:334] "Generic (PLEG): container finished" podID="442b1534-27bc-4d6d-be46-1ea5689c290f" containerID="5d16e6aee96410a47a392e46dc80e5832f3e7a1cdb911e888ca8bd8bfb542971" exitCode=0
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.041689 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"442b1534-27bc-4d6d-be46-1ea5689c290f","Type":"ContainerDied","Data":"5d16e6aee96410a47a392e46dc80e5832f3e7a1cdb911e888ca8bd8bfb542971"}
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.460227 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-42fpn"
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.544255 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bfa59ca9-7979-4892-90d5-6b4f8b374583-etc-swift\") pod \"bfa59ca9-7979-4892-90d5-6b4f8b374583\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") "
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.544360 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-dispersionconf\") pod \"bfa59ca9-7979-4892-90d5-6b4f8b374583\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") "
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.544482 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-swiftconf\") pod \"bfa59ca9-7979-4892-90d5-6b4f8b374583\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") "
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.545130 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa59ca9-7979-4892-90d5-6b4f8b374583-scripts\") pod \"bfa59ca9-7979-4892-90d5-6b4f8b374583\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") "
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.545166 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfa59ca9-7979-4892-90d5-6b4f8b374583-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bfa59ca9-7979-4892-90d5-6b4f8b374583" (UID: "bfa59ca9-7979-4892-90d5-6b4f8b374583"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.545420 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bfa59ca9-7979-4892-90d5-6b4f8b374583-ring-data-devices\") pod \"bfa59ca9-7979-4892-90d5-6b4f8b374583\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") "
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.545454 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zqqf\" (UniqueName: \"kubernetes.io/projected/bfa59ca9-7979-4892-90d5-6b4f8b374583-kube-api-access-4zqqf\") pod \"bfa59ca9-7979-4892-90d5-6b4f8b374583\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") "
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.545508 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-combined-ca-bundle\") pod \"bfa59ca9-7979-4892-90d5-6b4f8b374583\" (UID: \"bfa59ca9-7979-4892-90d5-6b4f8b374583\") "
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.545796 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa59ca9-7979-4892-90d5-6b4f8b374583-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bfa59ca9-7979-4892-90d5-6b4f8b374583" (UID: "bfa59ca9-7979-4892-90d5-6b4f8b374583"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.546161 4958 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bfa59ca9-7979-4892-90d5-6b4f8b374583-etc-swift\") on node \"crc\" DevicePath \"\""
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.546181 4958 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bfa59ca9-7979-4892-90d5-6b4f8b374583-ring-data-devices\") on node \"crc\" DevicePath \"\""
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.550290 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa59ca9-7979-4892-90d5-6b4f8b374583-kube-api-access-4zqqf" (OuterVolumeSpecName: "kube-api-access-4zqqf") pod "bfa59ca9-7979-4892-90d5-6b4f8b374583" (UID: "bfa59ca9-7979-4892-90d5-6b4f8b374583"). InnerVolumeSpecName "kube-api-access-4zqqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.555679 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bfa59ca9-7979-4892-90d5-6b4f8b374583" (UID: "bfa59ca9-7979-4892-90d5-6b4f8b374583"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.568649 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bfa59ca9-7979-4892-90d5-6b4f8b374583" (UID: "bfa59ca9-7979-4892-90d5-6b4f8b374583"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.571232 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa59ca9-7979-4892-90d5-6b4f8b374583-scripts" (OuterVolumeSpecName: "scripts") pod "bfa59ca9-7979-4892-90d5-6b4f8b374583" (UID: "bfa59ca9-7979-4892-90d5-6b4f8b374583"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.584877 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfa59ca9-7979-4892-90d5-6b4f8b374583" (UID: "bfa59ca9-7979-4892-90d5-6b4f8b374583"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.647015 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa59ca9-7979-4892-90d5-6b4f8b374583-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.647041 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zqqf\" (UniqueName: \"kubernetes.io/projected/bfa59ca9-7979-4892-90d5-6b4f8b374583-kube-api-access-4zqqf\") on node \"crc\" DevicePath \"\""
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.647051 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.647060 4958 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-dispersionconf\") on node \"crc\" DevicePath \"\""
Oct 08 06:52:04 crc kubenswrapper[4958]: I1008 06:52:04.647071 4958 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bfa59ca9-7979-4892-90d5-6b4f8b374583-swiftconf\") on node \"crc\" DevicePath \"\""
Oct 08 06:52:05 crc kubenswrapper[4958]: I1008 06:52:05.052943 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f931d71-9f8f-4755-a793-ca326e423199","Type":"ContainerStarted","Data":"1d573b950369ab786a385be2c41152d03eccc8f00bd4cb94292246558fa2f4cf"}
Oct 08 06:52:05 crc kubenswrapper[4958]: I1008 06:52:05.053447 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Oct 08 06:52:05 crc kubenswrapper[4958]: I1008 06:52:05.055308 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-42fpn" event={"ID":"bfa59ca9-7979-4892-90d5-6b4f8b374583","Type":"ContainerDied","Data":"d6ea36da72e91d937bdeba56ac28ef99a4c06761b6cbf4bae2a68e219877a888"}
Oct 08 06:52:05 crc kubenswrapper[4958]: I1008 06:52:05.055666 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6ea36da72e91d937bdeba56ac28ef99a4c06761b6cbf4bae2a68e219877a888"
Oct 08 06:52:05 crc kubenswrapper[4958]: I1008 06:52:05.055745 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-42fpn"
Oct 08 06:52:05 crc kubenswrapper[4958]: I1008 06:52:05.071430 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"442b1534-27bc-4d6d-be46-1ea5689c290f","Type":"ContainerStarted","Data":"a151cd70ef2b53f0e4ee05050cec1d3d170fdbc3435f7271a03d4f7f74173328"}
Oct 08 06:52:05 crc kubenswrapper[4958]: I1008 06:52:05.071768 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Oct 08 06:52:05 crc kubenswrapper[4958]: I1008 06:52:05.117499 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.601432672 podStartE2EDuration="53.117468249s" podCreationTimestamp="2025-10-08 06:51:12 +0000 UTC" firstStartedPulling="2025-10-08 06:51:18.819138911 +0000 UTC m=+1021.948831532" lastFinishedPulling="2025-10-08 06:51:29.335174478 +0000 UTC m=+1032.464867109" observedRunningTime="2025-10-08 06:52:05.109138994 +0000 UTC m=+1068.238831605" watchObservedRunningTime="2025-10-08 06:52:05.117468249 +0000 UTC m=+1068.247160860"
Oct 08 06:52:05 crc kubenswrapper[4958]: I1008 06:52:05.153323 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.829143281 podStartE2EDuration="54.153307106s" podCreationTimestamp="2025-10-08 06:51:11 +0000 UTC" firstStartedPulling="2025-10-08 06:51:13.87715016 +0000 UTC m=+1017.006842751" lastFinishedPulling="2025-10-08 06:51:29.201313975 +0000 UTC m=+1032.331006576" observedRunningTime="2025-10-08 06:52:05.14307465 +0000 UTC m=+1068.272767261" watchObservedRunningTime="2025-10-08 06:52:05.153307106 +0000 UTC m=+1068.282999707"
Oct 08 06:52:06 crc kubenswrapper[4958]: I1008 06:52:06.844681 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 06:52:06 crc kubenswrapper[4958]: I1008 06:52:06.844768 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 06:52:06 crc kubenswrapper[4958]: I1008 06:52:06.844830 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r"
Oct 08 06:52:06 crc kubenswrapper[4958]: I1008 06:52:06.845771 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"536ded4ebcb3bc7d24c7fc08780da096046aca9fd4294e702291ed048ad523b0"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 06:52:06 crc kubenswrapper[4958]: I1008 06:52:06.845878 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://536ded4ebcb3bc7d24c7fc08780da096046aca9fd4294e702291ed048ad523b0" gracePeriod=600
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.112759 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="536ded4ebcb3bc7d24c7fc08780da096046aca9fd4294e702291ed048ad523b0" exitCode=0
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.112830 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"536ded4ebcb3bc7d24c7fc08780da096046aca9fd4294e702291ed048ad523b0"}
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.113186 4958 scope.go:117] "RemoveContainer" containerID="c231c00392b517e6cca660a4aec9d278a8fcf4be120c9a0359f28467806c28b5"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.205809 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.214686 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift\") pod \"swift-storage-0\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " pod="openstack/swift-storage-0"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.497024 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.815164 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0e08-account-create-c2f62"]
Oct 08 06:52:07 crc kubenswrapper[4958]: E1008 06:52:07.815788 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa59ca9-7979-4892-90d5-6b4f8b374583" containerName="swift-ring-rebalance"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.815805 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa59ca9-7979-4892-90d5-6b4f8b374583" containerName="swift-ring-rebalance"
Oct 08 06:52:07 crc kubenswrapper[4958]: E1008 06:52:07.815814 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c4975c7-bc87-45a0-8bd9-878d3610ecc4" containerName="mariadb-database-create"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.815821 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c4975c7-bc87-45a0-8bd9-878d3610ecc4" containerName="mariadb-database-create"
Oct 08 06:52:07 crc kubenswrapper[4958]: E1008 06:52:07.815838 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45057cd4-4145-481a-ae2d-ebf2499db002" containerName="init"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.815844 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="45057cd4-4145-481a-ae2d-ebf2499db002" containerName="init"
Oct 08 06:52:07 crc kubenswrapper[4958]: E1008 06:52:07.815855 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31307abc-8d4b-4e83-a624-1720b6b342c4" containerName="mariadb-database-create"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.815861 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="31307abc-8d4b-4e83-a624-1720b6b342c4" containerName="mariadb-database-create"
Oct 08 06:52:07 crc kubenswrapper[4958]: E1008 06:52:07.815870 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45057cd4-4145-481a-ae2d-ebf2499db002" containerName="dnsmasq-dns"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.815876 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="45057cd4-4145-481a-ae2d-ebf2499db002" containerName="dnsmasq-dns"
Oct 08 06:52:07 crc kubenswrapper[4958]: E1008 06:52:07.815885 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4afa2864-e8e7-4789-b6d1-e1608724bcff" containerName="mariadb-database-create"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.815890 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4afa2864-e8e7-4789-b6d1-e1608724bcff" containerName="mariadb-database-create"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.816044 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa59ca9-7979-4892-90d5-6b4f8b374583" containerName="swift-ring-rebalance"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.816060 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c4975c7-bc87-45a0-8bd9-878d3610ecc4" containerName="mariadb-database-create"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.816068 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="45057cd4-4145-481a-ae2d-ebf2499db002" containerName="dnsmasq-dns"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.816077 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4afa2864-e8e7-4789-b6d1-e1608724bcff" containerName="mariadb-database-create"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.816088 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="31307abc-8d4b-4e83-a624-1720b6b342c4" containerName="mariadb-database-create"
Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.816572 4958 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-0e08-account-create-c2f62" Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.822133 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.826018 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0e08-account-create-c2f62"] Oct 08 06:52:07 crc kubenswrapper[4958]: I1008 06:52:07.916752 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tll9b\" (UniqueName: \"kubernetes.io/projected/a7107907-afbb-4fe5-86d0-a6b7bd31f0eb-kube-api-access-tll9b\") pod \"keystone-0e08-account-create-c2f62\" (UID: \"a7107907-afbb-4fe5-86d0-a6b7bd31f0eb\") " pod="openstack/keystone-0e08-account-create-c2f62" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.019251 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tll9b\" (UniqueName: \"kubernetes.io/projected/a7107907-afbb-4fe5-86d0-a6b7bd31f0eb-kube-api-access-tll9b\") pod \"keystone-0e08-account-create-c2f62\" (UID: \"a7107907-afbb-4fe5-86d0-a6b7bd31f0eb\") " pod="openstack/keystone-0e08-account-create-c2f62" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.044589 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tll9b\" (UniqueName: \"kubernetes.io/projected/a7107907-afbb-4fe5-86d0-a6b7bd31f0eb-kube-api-access-tll9b\") pod \"keystone-0e08-account-create-c2f62\" (UID: \"a7107907-afbb-4fe5-86d0-a6b7bd31f0eb\") " pod="openstack/keystone-0e08-account-create-c2f62" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.062441 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 08 06:52:08 crc kubenswrapper[4958]: W1008 06:52:08.068989 4958 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c45aa0e_9caf_42e6_bfbb_59c802d81c98.slice/crio-df3e5ea7795d70faac233c1f9b362c79016f98a7f5d2f57257c59a331f30c59d WatchSource:0}: Error finding container df3e5ea7795d70faac233c1f9b362c79016f98a7f5d2f57257c59a331f30c59d: Status 404 returned error can't find the container with id df3e5ea7795d70faac233c1f9b362c79016f98a7f5d2f57257c59a331f30c59d Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.132282 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"df3e5ea7795d70faac233c1f9b362c79016f98a7f5d2f57257c59a331f30c59d"} Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.140493 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-cbd1-account-create-4rfv9"] Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.141748 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cbd1-account-create-4rfv9" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.143546 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.150094 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cbd1-account-create-4rfv9"] Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.151876 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"5b130d69a8aa9d1feb209f9a1f16b0b50db741e0776429ac28dc669dd948e901"} Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.178037 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0e08-account-create-c2f62" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.231193 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2pl2\" (UniqueName: \"kubernetes.io/projected/2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c-kube-api-access-z2pl2\") pod \"placement-cbd1-account-create-4rfv9\" (UID: \"2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c\") " pod="openstack/placement-cbd1-account-create-4rfv9" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.321404 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7885-account-create-x4rcb"] Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.322929 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7885-account-create-x4rcb" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.324772 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.337214 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2pl2\" (UniqueName: \"kubernetes.io/projected/2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c-kube-api-access-z2pl2\") pod \"placement-cbd1-account-create-4rfv9\" (UID: \"2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c\") " pod="openstack/placement-cbd1-account-create-4rfv9" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.344241 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7885-account-create-x4rcb"] Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.376351 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2pl2\" (UniqueName: \"kubernetes.io/projected/2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c-kube-api-access-z2pl2\") pod \"placement-cbd1-account-create-4rfv9\" (UID: \"2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c\") " 
pod="openstack/placement-cbd1-account-create-4rfv9" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.439139 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqxwh\" (UniqueName: \"kubernetes.io/projected/a669302e-8748-424e-9be1-66e595fb1dd1-kube-api-access-nqxwh\") pod \"glance-7885-account-create-x4rcb\" (UID: \"a669302e-8748-424e-9be1-66e595fb1dd1\") " pod="openstack/glance-7885-account-create-x4rcb" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.458857 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0e08-account-create-c2f62"] Oct 08 06:52:08 crc kubenswrapper[4958]: W1008 06:52:08.464614 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7107907_afbb_4fe5_86d0_a6b7bd31f0eb.slice/crio-e802b12641749f3658f752b9023c0b00a9a06f7a096bd9aad89c2a8c60c9005a WatchSource:0}: Error finding container e802b12641749f3658f752b9023c0b00a9a06f7a096bd9aad89c2a8c60c9005a: Status 404 returned error can't find the container with id e802b12641749f3658f752b9023c0b00a9a06f7a096bd9aad89c2a8c60c9005a Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.477687 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-cbd1-account-create-4rfv9" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.541116 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqxwh\" (UniqueName: \"kubernetes.io/projected/a669302e-8748-424e-9be1-66e595fb1dd1-kube-api-access-nqxwh\") pod \"glance-7885-account-create-x4rcb\" (UID: \"a669302e-8748-424e-9be1-66e595fb1dd1\") " pod="openstack/glance-7885-account-create-x4rcb" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.563859 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqxwh\" (UniqueName: \"kubernetes.io/projected/a669302e-8748-424e-9be1-66e595fb1dd1-kube-api-access-nqxwh\") pod \"glance-7885-account-create-x4rcb\" (UID: \"a669302e-8748-424e-9be1-66e595fb1dd1\") " pod="openstack/glance-7885-account-create-x4rcb" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.649413 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7885-account-create-x4rcb" Oct 08 06:52:08 crc kubenswrapper[4958]: I1008 06:52:08.775280 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cbd1-account-create-4rfv9"] Oct 08 06:52:09 crc kubenswrapper[4958]: I1008 06:52:09.134190 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7885-account-create-x4rcb"] Oct 08 06:52:09 crc kubenswrapper[4958]: I1008 06:52:09.167892 4958 generic.go:334] "Generic (PLEG): container finished" podID="2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c" containerID="dd7ce193b3de687f137afb222a80107abe63702c573cafc0830aa7ef4d6a47c7" exitCode=0 Oct 08 06:52:09 crc kubenswrapper[4958]: I1008 06:52:09.168011 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cbd1-account-create-4rfv9" event={"ID":"2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c","Type":"ContainerDied","Data":"dd7ce193b3de687f137afb222a80107abe63702c573cafc0830aa7ef4d6a47c7"} Oct 08 06:52:09 crc kubenswrapper[4958]: I1008 06:52:09.168071 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cbd1-account-create-4rfv9" event={"ID":"2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c","Type":"ContainerStarted","Data":"8b92f9f89e42b07ea0970fdc435effbc7ff1ba86dccd04e8b7b60fe3991ce020"} Oct 08 06:52:09 crc kubenswrapper[4958]: I1008 06:52:09.170729 4958 generic.go:334] "Generic (PLEG): container finished" podID="a7107907-afbb-4fe5-86d0-a6b7bd31f0eb" containerID="09a097e49f1b05aef7bc207ccb6b6a019ea4d56e6334957284e3d1938ade8820" exitCode=0 Oct 08 06:52:09 crc kubenswrapper[4958]: I1008 06:52:09.170807 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0e08-account-create-c2f62" event={"ID":"a7107907-afbb-4fe5-86d0-a6b7bd31f0eb","Type":"ContainerDied","Data":"09a097e49f1b05aef7bc207ccb6b6a019ea4d56e6334957284e3d1938ade8820"} Oct 08 06:52:09 crc kubenswrapper[4958]: I1008 06:52:09.170837 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/keystone-0e08-account-create-c2f62" event={"ID":"a7107907-afbb-4fe5-86d0-a6b7bd31f0eb","Type":"ContainerStarted","Data":"e802b12641749f3658f752b9023c0b00a9a06f7a096bd9aad89c2a8c60c9005a"} Oct 08 06:52:10 crc kubenswrapper[4958]: I1008 06:52:10.186973 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"c13727bb3092a9324ff420e4b08230ab8156bbd4c580c0f6905a950356a2b45b"} Oct 08 06:52:10 crc kubenswrapper[4958]: I1008 06:52:10.187313 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"f0de83255ba3108743dca08b70ec1668b51eb795dc13a54cf5cbf4041775bd0b"} Oct 08 06:52:10 crc kubenswrapper[4958]: I1008 06:52:10.187326 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"4afa360de4cdd22485317e27cb843f86474d888319ad6c43a75e14506d8d3331"} Oct 08 06:52:10 crc kubenswrapper[4958]: I1008 06:52:10.187337 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"240aa008ea9290a60c7264968b55d9ba55c52133e52044e5dfbe81d67bf1b1b8"} Oct 08 06:52:10 crc kubenswrapper[4958]: I1008 06:52:10.188424 4958 generic.go:334] "Generic (PLEG): container finished" podID="a669302e-8748-424e-9be1-66e595fb1dd1" containerID="4f1396c7a4ac66765f9854b77cee40014b836f5ef211b398af4a422dace45ab9" exitCode=0 Oct 08 06:52:10 crc kubenswrapper[4958]: I1008 06:52:10.188516 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7885-account-create-x4rcb" event={"ID":"a669302e-8748-424e-9be1-66e595fb1dd1","Type":"ContainerDied","Data":"4f1396c7a4ac66765f9854b77cee40014b836f5ef211b398af4a422dace45ab9"} Oct 08 
06:52:10 crc kubenswrapper[4958]: I1008 06:52:10.188541 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7885-account-create-x4rcb" event={"ID":"a669302e-8748-424e-9be1-66e595fb1dd1","Type":"ContainerStarted","Data":"be044ef95ee4a596753b27b6e0294c8a47b1f4a84f53f0385c7bed312751da29"} Oct 08 06:52:10 crc kubenswrapper[4958]: I1008 06:52:10.574724 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0e08-account-create-c2f62" Oct 08 06:52:10 crc kubenswrapper[4958]: I1008 06:52:10.651935 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cbd1-account-create-4rfv9" Oct 08 06:52:10 crc kubenswrapper[4958]: I1008 06:52:10.687129 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tll9b\" (UniqueName: \"kubernetes.io/projected/a7107907-afbb-4fe5-86d0-a6b7bd31f0eb-kube-api-access-tll9b\") pod \"a7107907-afbb-4fe5-86d0-a6b7bd31f0eb\" (UID: \"a7107907-afbb-4fe5-86d0-a6b7bd31f0eb\") " Oct 08 06:52:10 crc kubenswrapper[4958]: I1008 06:52:10.697188 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7107907-afbb-4fe5-86d0-a6b7bd31f0eb-kube-api-access-tll9b" (OuterVolumeSpecName: "kube-api-access-tll9b") pod "a7107907-afbb-4fe5-86d0-a6b7bd31f0eb" (UID: "a7107907-afbb-4fe5-86d0-a6b7bd31f0eb"). InnerVolumeSpecName "kube-api-access-tll9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:10 crc kubenswrapper[4958]: I1008 06:52:10.788703 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2pl2\" (UniqueName: \"kubernetes.io/projected/2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c-kube-api-access-z2pl2\") pod \"2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c\" (UID: \"2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c\") " Oct 08 06:52:10 crc kubenswrapper[4958]: I1008 06:52:10.789143 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tll9b\" (UniqueName: \"kubernetes.io/projected/a7107907-afbb-4fe5-86d0-a6b7bd31f0eb-kube-api-access-tll9b\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:10 crc kubenswrapper[4958]: I1008 06:52:10.792383 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c-kube-api-access-z2pl2" (OuterVolumeSpecName: "kube-api-access-z2pl2") pod "2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c" (UID: "2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c"). InnerVolumeSpecName "kube-api-access-z2pl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:10 crc kubenswrapper[4958]: I1008 06:52:10.890816 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2pl2\" (UniqueName: \"kubernetes.io/projected/2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c-kube-api-access-z2pl2\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:11 crc kubenswrapper[4958]: I1008 06:52:11.200780 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cbd1-account-create-4rfv9" event={"ID":"2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c","Type":"ContainerDied","Data":"8b92f9f89e42b07ea0970fdc435effbc7ff1ba86dccd04e8b7b60fe3991ce020"} Oct 08 06:52:11 crc kubenswrapper[4958]: I1008 06:52:11.201120 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b92f9f89e42b07ea0970fdc435effbc7ff1ba86dccd04e8b7b60fe3991ce020" Oct 08 06:52:11 crc kubenswrapper[4958]: I1008 06:52:11.201138 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cbd1-account-create-4rfv9" Oct 08 06:52:11 crc kubenswrapper[4958]: I1008 06:52:11.204943 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0e08-account-create-c2f62" Oct 08 06:52:11 crc kubenswrapper[4958]: I1008 06:52:11.213602 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0e08-account-create-c2f62" event={"ID":"a7107907-afbb-4fe5-86d0-a6b7bd31f0eb","Type":"ContainerDied","Data":"e802b12641749f3658f752b9023c0b00a9a06f7a096bd9aad89c2a8c60c9005a"} Oct 08 06:52:11 crc kubenswrapper[4958]: I1008 06:52:11.213642 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e802b12641749f3658f752b9023c0b00a9a06f7a096bd9aad89c2a8c60c9005a" Oct 08 06:52:11 crc kubenswrapper[4958]: I1008 06:52:11.603062 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7885-account-create-x4rcb" Oct 08 06:52:11 crc kubenswrapper[4958]: I1008 06:52:11.702545 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqxwh\" (UniqueName: \"kubernetes.io/projected/a669302e-8748-424e-9be1-66e595fb1dd1-kube-api-access-nqxwh\") pod \"a669302e-8748-424e-9be1-66e595fb1dd1\" (UID: \"a669302e-8748-424e-9be1-66e595fb1dd1\") " Oct 08 06:52:11 crc kubenswrapper[4958]: I1008 06:52:11.725380 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a669302e-8748-424e-9be1-66e595fb1dd1-kube-api-access-nqxwh" (OuterVolumeSpecName: "kube-api-access-nqxwh") pod "a669302e-8748-424e-9be1-66e595fb1dd1" (UID: "a669302e-8748-424e-9be1-66e595fb1dd1"). InnerVolumeSpecName "kube-api-access-nqxwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:11 crc kubenswrapper[4958]: I1008 06:52:11.804874 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqxwh\" (UniqueName: \"kubernetes.io/projected/a669302e-8748-424e-9be1-66e595fb1dd1-kube-api-access-nqxwh\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:12 crc kubenswrapper[4958]: I1008 06:52:12.227002 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"66d5f937dbd1abd5a08b1b2ddf3cc40f3eb8fc2793d2eff9531136500ae61e84"} Oct 08 06:52:12 crc kubenswrapper[4958]: I1008 06:52:12.227531 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"0a5a60f42124a354930e3b3306d5ebb23d9d56694f6a482b36b814b7a29405e6"} Oct 08 06:52:12 crc kubenswrapper[4958]: I1008 06:52:12.230164 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7885-account-create-x4rcb" 
event={"ID":"a669302e-8748-424e-9be1-66e595fb1dd1","Type":"ContainerDied","Data":"be044ef95ee4a596753b27b6e0294c8a47b1f4a84f53f0385c7bed312751da29"} Oct 08 06:52:12 crc kubenswrapper[4958]: I1008 06:52:12.230197 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be044ef95ee4a596753b27b6e0294c8a47b1f4a84f53f0385c7bed312751da29" Oct 08 06:52:12 crc kubenswrapper[4958]: I1008 06:52:12.230263 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7885-account-create-x4rcb" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.249816 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"ef947aa4d03942a34c6bc85ef6fb4b0f2207baa004e3f6b80e29968a08b1d8cb"} Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.249866 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"f6755a074a4c40620709a87ac6599a018ae64733e179e086ef57f6d9ce9dc4d0"} Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.483397 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-pjrq5"] Oct 08 06:52:13 crc kubenswrapper[4958]: E1008 06:52:13.484084 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7107907-afbb-4fe5-86d0-a6b7bd31f0eb" containerName="mariadb-account-create" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.484105 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7107907-afbb-4fe5-86d0-a6b7bd31f0eb" containerName="mariadb-account-create" Oct 08 06:52:13 crc kubenswrapper[4958]: E1008 06:52:13.484116 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c" containerName="mariadb-account-create" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 
06:52:13.484126 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c" containerName="mariadb-account-create" Oct 08 06:52:13 crc kubenswrapper[4958]: E1008 06:52:13.484141 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a669302e-8748-424e-9be1-66e595fb1dd1" containerName="mariadb-account-create" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.484151 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a669302e-8748-424e-9be1-66e595fb1dd1" containerName="mariadb-account-create" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.484357 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c" containerName="mariadb-account-create" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.484383 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7107907-afbb-4fe5-86d0-a6b7bd31f0eb" containerName="mariadb-account-create" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.484410 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a669302e-8748-424e-9be1-66e595fb1dd1" containerName="mariadb-account-create" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.485052 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.486829 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.487094 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cqgjb" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.490846 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-pjrq5"] Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.635198 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-db-sync-config-data\") pod \"glance-db-sync-pjrq5\" (UID: \"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.635259 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxrl7\" (UniqueName: \"kubernetes.io/projected/f98db36e-5db6-40ec-a540-3b50b4ae0749-kube-api-access-sxrl7\") pod \"glance-db-sync-pjrq5\" (UID: \"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.635287 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-config-data\") pod \"glance-db-sync-pjrq5\" (UID: \"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.635373 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-combined-ca-bundle\") pod \"glance-db-sync-pjrq5\" (UID: \"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.737804 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-config-data\") pod \"glance-db-sync-pjrq5\" (UID: \"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.737935 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-combined-ca-bundle\") pod \"glance-db-sync-pjrq5\" (UID: \"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.738140 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-db-sync-config-data\") pod \"glance-db-sync-pjrq5\" (UID: \"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.738723 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxrl7\" (UniqueName: \"kubernetes.io/projected/f98db36e-5db6-40ec-a540-3b50b4ae0749-kube-api-access-sxrl7\") pod \"glance-db-sync-pjrq5\" (UID: \"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.750935 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-db-sync-config-data\") pod \"glance-db-sync-pjrq5\" (UID: 
\"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.750964 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-config-data\") pod \"glance-db-sync-pjrq5\" (UID: \"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.750972 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-combined-ca-bundle\") pod \"glance-db-sync-pjrq5\" (UID: \"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.755427 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxrl7\" (UniqueName: \"kubernetes.io/projected/f98db36e-5db6-40ec-a540-3b50b4ae0749-kube-api-access-sxrl7\") pod \"glance-db-sync-pjrq5\" (UID: \"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:13 crc kubenswrapper[4958]: I1008 06:52:13.805979 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.088188 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.424873 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-th7cg"] Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.425873 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-th7cg" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.436652 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-th7cg"] Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.536277 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8wzdh"] Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.537424 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8wzdh" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.554841 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt7l2\" (UniqueName: \"kubernetes.io/projected/43525d65-17da-4bd4-a6bd-c21781daf8ff-kube-api-access-gt7l2\") pod \"cinder-db-create-th7cg\" (UID: \"43525d65-17da-4bd4-a6bd-c21781daf8ff\") " pod="openstack/cinder-db-create-th7cg" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.566626 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.609367 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8wzdh"] Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.656569 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt7l2\" (UniqueName: \"kubernetes.io/projected/43525d65-17da-4bd4-a6bd-c21781daf8ff-kube-api-access-gt7l2\") pod \"cinder-db-create-th7cg\" (UID: \"43525d65-17da-4bd4-a6bd-c21781daf8ff\") " pod="openstack/cinder-db-create-th7cg" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.656703 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9rzz\" (UniqueName: \"kubernetes.io/projected/85509aa5-d56b-4bd9-bcd0-9570927a885d-kube-api-access-g9rzz\") pod 
\"barbican-db-create-8wzdh\" (UID: \"85509aa5-d56b-4bd9-bcd0-9570927a885d\") " pod="openstack/barbican-db-create-8wzdh" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.694867 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt7l2\" (UniqueName: \"kubernetes.io/projected/43525d65-17da-4bd4-a6bd-c21781daf8ff-kube-api-access-gt7l2\") pod \"cinder-db-create-th7cg\" (UID: \"43525d65-17da-4bd4-a6bd-c21781daf8ff\") " pod="openstack/cinder-db-create-th7cg" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.715145 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-spvp9" podUID="c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7" containerName="ovn-controller" probeResult="failure" output=< Oct 08 06:52:14 crc kubenswrapper[4958]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 08 06:52:14 crc kubenswrapper[4958]: > Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.756927 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-th7cg" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.757933 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9rzz\" (UniqueName: \"kubernetes.io/projected/85509aa5-d56b-4bd9-bcd0-9570927a885d-kube-api-access-g9rzz\") pod \"barbican-db-create-8wzdh\" (UID: \"85509aa5-d56b-4bd9-bcd0-9570927a885d\") " pod="openstack/barbican-db-create-8wzdh" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.760336 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.784155 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9rzz\" (UniqueName: \"kubernetes.io/projected/85509aa5-d56b-4bd9-bcd0-9570927a885d-kube-api-access-g9rzz\") pod \"barbican-db-create-8wzdh\" (UID: \"85509aa5-d56b-4bd9-bcd0-9570927a885d\") " pod="openstack/barbican-db-create-8wzdh" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.819833 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-txddw"] Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.820785 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-txddw" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.840276 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-txddw"] Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.857791 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8wzdh" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.899238 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-f72ls"] Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.900474 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-f72ls" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.902720 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.904366 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fhwhd" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.904514 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.909533 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.916031 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-f72ls"] Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.960245 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3135b952-56ad-4f48-a839-632adc6b8856-combined-ca-bundle\") pod \"keystone-db-sync-f72ls\" (UID: \"3135b952-56ad-4f48-a839-632adc6b8856\") " pod="openstack/keystone-db-sync-f72ls" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.960311 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3135b952-56ad-4f48-a839-632adc6b8856-config-data\") pod \"keystone-db-sync-f72ls\" (UID: \"3135b952-56ad-4f48-a839-632adc6b8856\") " pod="openstack/keystone-db-sync-f72ls" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.960336 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whffz\" (UniqueName: \"kubernetes.io/projected/3135b952-56ad-4f48-a839-632adc6b8856-kube-api-access-whffz\") pod \"keystone-db-sync-f72ls\" (UID: 
\"3135b952-56ad-4f48-a839-632adc6b8856\") " pod="openstack/keystone-db-sync-f72ls" Oct 08 06:52:14 crc kubenswrapper[4958]: I1008 06:52:14.960391 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flhmx\" (UniqueName: \"kubernetes.io/projected/2f5576e3-fbd8-493d-b8b5-14417b04caf4-kube-api-access-flhmx\") pod \"neutron-db-create-txddw\" (UID: \"2f5576e3-fbd8-493d-b8b5-14417b04caf4\") " pod="openstack/neutron-db-create-txddw" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.049277 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-spvp9-config-fkpvc"] Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.051103 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.053334 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.062142 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-spvp9-config-fkpvc"] Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.062178 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3135b952-56ad-4f48-a839-632adc6b8856-combined-ca-bundle\") pod \"keystone-db-sync-f72ls\" (UID: \"3135b952-56ad-4f48-a839-632adc6b8856\") " pod="openstack/keystone-db-sync-f72ls" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.062223 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3135b952-56ad-4f48-a839-632adc6b8856-config-data\") pod \"keystone-db-sync-f72ls\" (UID: \"3135b952-56ad-4f48-a839-632adc6b8856\") " pod="openstack/keystone-db-sync-f72ls" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 
06:52:15.062245 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whffz\" (UniqueName: \"kubernetes.io/projected/3135b952-56ad-4f48-a839-632adc6b8856-kube-api-access-whffz\") pod \"keystone-db-sync-f72ls\" (UID: \"3135b952-56ad-4f48-a839-632adc6b8856\") " pod="openstack/keystone-db-sync-f72ls" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.062282 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flhmx\" (UniqueName: \"kubernetes.io/projected/2f5576e3-fbd8-493d-b8b5-14417b04caf4-kube-api-access-flhmx\") pod \"neutron-db-create-txddw\" (UID: \"2f5576e3-fbd8-493d-b8b5-14417b04caf4\") " pod="openstack/neutron-db-create-txddw" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.066512 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3135b952-56ad-4f48-a839-632adc6b8856-combined-ca-bundle\") pod \"keystone-db-sync-f72ls\" (UID: \"3135b952-56ad-4f48-a839-632adc6b8856\") " pod="openstack/keystone-db-sync-f72ls" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.073910 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3135b952-56ad-4f48-a839-632adc6b8856-config-data\") pod \"keystone-db-sync-f72ls\" (UID: \"3135b952-56ad-4f48-a839-632adc6b8856\") " pod="openstack/keystone-db-sync-f72ls" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.085730 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whffz\" (UniqueName: \"kubernetes.io/projected/3135b952-56ad-4f48-a839-632adc6b8856-kube-api-access-whffz\") pod \"keystone-db-sync-f72ls\" (UID: \"3135b952-56ad-4f48-a839-632adc6b8856\") " pod="openstack/keystone-db-sync-f72ls" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.100439 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-flhmx\" (UniqueName: \"kubernetes.io/projected/2f5576e3-fbd8-493d-b8b5-14417b04caf4-kube-api-access-flhmx\") pod \"neutron-db-create-txddw\" (UID: \"2f5576e3-fbd8-493d-b8b5-14417b04caf4\") " pod="openstack/neutron-db-create-txddw" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.164165 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-txddw" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.164634 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-log-ovn\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.164704 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-scripts\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.164774 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-additional-scripts\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.164955 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-run\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: 
\"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.165100 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89n9q\" (UniqueName: \"kubernetes.io/projected/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-kube-api-access-89n9q\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.165146 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-run-ovn\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.266488 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-log-ovn\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.266544 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-scripts\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.266580 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-additional-scripts\") pod 
\"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.266620 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-run\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.266676 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89n9q\" (UniqueName: \"kubernetes.io/projected/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-kube-api-access-89n9q\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.266696 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-run-ovn\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.266856 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-log-ovn\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.266865 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-run-ovn\") pod \"ovn-controller-spvp9-config-fkpvc\" 
(UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.266979 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-run\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.270609 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-scripts\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.271060 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-additional-scripts\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.280436 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-f72ls" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.290786 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89n9q\" (UniqueName: \"kubernetes.io/projected/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-kube-api-access-89n9q\") pod \"ovn-controller-spvp9-config-fkpvc\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.291863 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"86bfed71cefd8eec097df2c1d8e6ba77ccea56ba62521cec70c88418d4a40639"} Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.291908 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"724ed394bbd6e962a7c26ddc942e11038dcd01c833577f0f40fabd8ae1c82655"} Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.363626 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-th7cg"] Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.442672 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.450562 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-pjrq5"] Oct 08 06:52:15 crc kubenswrapper[4958]: I1008 06:52:15.474995 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8wzdh"] Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:15.505901 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-txddw"] Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.298419 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-th7cg" event={"ID":"43525d65-17da-4bd4-a6bd-c21781daf8ff","Type":"ContainerStarted","Data":"1b5a73ff987b9a0e1dcfa0504c77a4586a7bcf1a30cf08135d1117c4cb4997e8"} Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.298718 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-th7cg" event={"ID":"43525d65-17da-4bd4-a6bd-c21781daf8ff","Type":"ContainerStarted","Data":"6ec46c6437b66e91aa3ba0d2d216412e657f8dafe4eb72a8a75692d32809f627"} Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.300855 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-txddw" event={"ID":"2f5576e3-fbd8-493d-b8b5-14417b04caf4","Type":"ContainerStarted","Data":"a3a078c249e2a4ee019144d666f75b0cec5561a5446b80136b45678bc868d40f"} Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.300875 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-txddw" event={"ID":"2f5576e3-fbd8-493d-b8b5-14417b04caf4","Type":"ContainerStarted","Data":"9a3b1eced89158fc36a72509f02ff9910b818dd16392dd0d1da7bfb6dfa1f54f"} Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.307121 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8wzdh" 
event={"ID":"85509aa5-d56b-4bd9-bcd0-9570927a885d","Type":"ContainerStarted","Data":"748589410a41fce79276e353489590fafe207883c70c06e791605b61ddfc888d"} Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.307159 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8wzdh" event={"ID":"85509aa5-d56b-4bd9-bcd0-9570927a885d","Type":"ContainerStarted","Data":"c9b2bfbdf92eea5d979671bf5946e45c43018ead34e0c49127eb5d269d60900e"} Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.317001 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-th7cg" podStartSLOduration=2.316983183 podStartE2EDuration="2.316983183s" podCreationTimestamp="2025-10-08 06:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:52:16.316300254 +0000 UTC m=+1079.445992855" watchObservedRunningTime="2025-10-08 06:52:16.316983183 +0000 UTC m=+1079.446675774" Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.322566 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"96600bb67add0f430f3d3f7bc50bb148f53a95e2d96aa280884db2079910fcc5"} Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.322614 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"822bb196cd354e0752c30e33f034860b8aa7c4cf0eef390c7e6067b9260d966e"} Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.322624 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"21df3155fa52a2087045a5927cde073b2ca8d2f30fa88ba4c74a281b2f3fb7bc"} Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.322633 
4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"b608a4dbe063d74c26342e7c42bce09e30df95c744884718ee615000db2bfb36"} Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.323637 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-pjrq5" event={"ID":"f98db36e-5db6-40ec-a540-3b50b4ae0749","Type":"ContainerStarted","Data":"8ea5d25eeb8872d001621ff2b19c79a1378df28ebe72545b042e57874346a946"} Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.332057 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-txddw" podStartSLOduration=2.332037529 podStartE2EDuration="2.332037529s" podCreationTimestamp="2025-10-08 06:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:52:16.329179752 +0000 UTC m=+1079.458872353" watchObservedRunningTime="2025-10-08 06:52:16.332037529 +0000 UTC m=+1079.461730130" Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.623685 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-8wzdh" podStartSLOduration=2.62366732 podStartE2EDuration="2.62366732s" podCreationTimestamp="2025-10-08 06:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:52:16.354067534 +0000 UTC m=+1079.483760155" watchObservedRunningTime="2025-10-08 06:52:16.62366732 +0000 UTC m=+1079.753359921" Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.629266 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-f72ls"] Oct 08 06:52:16 crc kubenswrapper[4958]: W1008 06:52:16.644099 4958 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3135b952_56ad_4f48_a839_632adc6b8856.slice/crio-f5777e3369cf2eef1f6d5288d170dc9cd96babc77c91e81bee760559a320ffb3 WatchSource:0}: Error finding container f5777e3369cf2eef1f6d5288d170dc9cd96babc77c91e81bee760559a320ffb3: Status 404 returned error can't find the container with id f5777e3369cf2eef1f6d5288d170dc9cd96babc77c91e81bee760559a320ffb3 Oct 08 06:52:16 crc kubenswrapper[4958]: I1008 06:52:16.770406 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-spvp9-config-fkpvc"] Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.341650 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerStarted","Data":"1fcb4d69732f8668a00f6659ff61c3c3e89fb140ae20ef55148d18dda7b7d854"} Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.344122 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-spvp9-config-fkpvc" event={"ID":"3e8f3f19-b063-45a3-806d-1f867cb5c0d8","Type":"ContainerStarted","Data":"3646ca521aff130be148cea07cbf7a3bcb5ac1b446d8d33e3d9792d31ce394a1"} Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.344162 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-spvp9-config-fkpvc" event={"ID":"3e8f3f19-b063-45a3-806d-1f867cb5c0d8","Type":"ContainerStarted","Data":"675122eeb7eb2e498f90c08052c099ac62b28df2a090f88066df2290ff81a468"} Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.346632 4958 generic.go:334] "Generic (PLEG): container finished" podID="43525d65-17da-4bd4-a6bd-c21781daf8ff" containerID="1b5a73ff987b9a0e1dcfa0504c77a4586a7bcf1a30cf08135d1117c4cb4997e8" exitCode=0 Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.346722 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-th7cg" 
event={"ID":"43525d65-17da-4bd4-a6bd-c21781daf8ff","Type":"ContainerDied","Data":"1b5a73ff987b9a0e1dcfa0504c77a4586a7bcf1a30cf08135d1117c4cb4997e8"} Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.349013 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f72ls" event={"ID":"3135b952-56ad-4f48-a839-632adc6b8856","Type":"ContainerStarted","Data":"f5777e3369cf2eef1f6d5288d170dc9cd96babc77c91e81bee760559a320ffb3"} Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.351423 4958 generic.go:334] "Generic (PLEG): container finished" podID="2f5576e3-fbd8-493d-b8b5-14417b04caf4" containerID="a3a078c249e2a4ee019144d666f75b0cec5561a5446b80136b45678bc868d40f" exitCode=0 Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.351481 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-txddw" event={"ID":"2f5576e3-fbd8-493d-b8b5-14417b04caf4","Type":"ContainerDied","Data":"a3a078c249e2a4ee019144d666f75b0cec5561a5446b80136b45678bc868d40f"} Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.353360 4958 generic.go:334] "Generic (PLEG): container finished" podID="85509aa5-d56b-4bd9-bcd0-9570927a885d" containerID="748589410a41fce79276e353489590fafe207883c70c06e791605b61ddfc888d" exitCode=0 Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.353386 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8wzdh" event={"ID":"85509aa5-d56b-4bd9-bcd0-9570927a885d","Type":"ContainerDied","Data":"748589410a41fce79276e353489590fafe207883c70c06e791605b61ddfc888d"} Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.372442 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.70945682 podStartE2EDuration="27.372424107s" podCreationTimestamp="2025-10-08 06:51:50 +0000 UTC" firstStartedPulling="2025-10-08 06:52:08.07244368 +0000 UTC m=+1071.202136301" lastFinishedPulling="2025-10-08 
06:52:14.735410987 +0000 UTC m=+1077.865103588" observedRunningTime="2025-10-08 06:52:17.37066646 +0000 UTC m=+1080.500359061" watchObservedRunningTime="2025-10-08 06:52:17.372424107 +0000 UTC m=+1080.502116708" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.412392 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-spvp9-config-fkpvc" podStartSLOduration=2.412372025 podStartE2EDuration="2.412372025s" podCreationTimestamp="2025-10-08 06:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:52:17.41107545 +0000 UTC m=+1080.540768051" watchObservedRunningTime="2025-10-08 06:52:17.412372025 +0000 UTC m=+1080.542064626" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.640798 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-8fq9b"] Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.642931 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.644386 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.656501 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-8fq9b"] Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.713092 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-ovsdbserver-nb\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.713229 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-ovsdbserver-sb\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.713299 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-dns-svc\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.713349 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-config\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " 
pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.713409 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-dns-swift-storage-0\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.713475 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfgn5\" (UniqueName: \"kubernetes.io/projected/0c2291ff-e9af-4016-a519-1d8251ffa975-kube-api-access-nfgn5\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.815217 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-ovsdbserver-nb\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.815394 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-ovsdbserver-sb\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.815428 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-dns-svc\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " 
pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.815462 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-dns-swift-storage-0\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.815487 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-config\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.815522 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfgn5\" (UniqueName: \"kubernetes.io/projected/0c2291ff-e9af-4016-a519-1d8251ffa975-kube-api-access-nfgn5\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.816559 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-dns-swift-storage-0\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.816622 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-ovsdbserver-sb\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 
06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.818781 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-dns-svc\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.818812 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-config\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.818863 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-ovsdbserver-nb\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.846920 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfgn5\" (UniqueName: \"kubernetes.io/projected/0c2291ff-e9af-4016-a519-1d8251ffa975-kube-api-access-nfgn5\") pod \"dnsmasq-dns-564965cbfc-8fq9b\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:17 crc kubenswrapper[4958]: I1008 06:52:17.997818 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:18 crc kubenswrapper[4958]: I1008 06:52:18.364894 4958 generic.go:334] "Generic (PLEG): container finished" podID="3e8f3f19-b063-45a3-806d-1f867cb5c0d8" containerID="3646ca521aff130be148cea07cbf7a3bcb5ac1b446d8d33e3d9792d31ce394a1" exitCode=0 Oct 08 06:52:18 crc kubenswrapper[4958]: I1008 06:52:18.365094 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-spvp9-config-fkpvc" event={"ID":"3e8f3f19-b063-45a3-806d-1f867cb5c0d8","Type":"ContainerDied","Data":"3646ca521aff130be148cea07cbf7a3bcb5ac1b446d8d33e3d9792d31ce394a1"} Oct 08 06:52:18 crc kubenswrapper[4958]: I1008 06:52:18.417093 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-8fq9b"] Oct 08 06:52:18 crc kubenswrapper[4958]: I1008 06:52:18.655500 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-th7cg" Oct 08 06:52:18 crc kubenswrapper[4958]: I1008 06:52:18.758276 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-txddw" Oct 08 06:52:18 crc kubenswrapper[4958]: I1008 06:52:18.762799 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8wzdh" Oct 08 06:52:18 crc kubenswrapper[4958]: I1008 06:52:18.852567 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt7l2\" (UniqueName: \"kubernetes.io/projected/43525d65-17da-4bd4-a6bd-c21781daf8ff-kube-api-access-gt7l2\") pod \"43525d65-17da-4bd4-a6bd-c21781daf8ff\" (UID: \"43525d65-17da-4bd4-a6bd-c21781daf8ff\") " Oct 08 06:52:18 crc kubenswrapper[4958]: I1008 06:52:18.860395 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43525d65-17da-4bd4-a6bd-c21781daf8ff-kube-api-access-gt7l2" (OuterVolumeSpecName: "kube-api-access-gt7l2") pod "43525d65-17da-4bd4-a6bd-c21781daf8ff" (UID: "43525d65-17da-4bd4-a6bd-c21781daf8ff"). InnerVolumeSpecName "kube-api-access-gt7l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:18 crc kubenswrapper[4958]: I1008 06:52:18.954865 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flhmx\" (UniqueName: \"kubernetes.io/projected/2f5576e3-fbd8-493d-b8b5-14417b04caf4-kube-api-access-flhmx\") pod \"2f5576e3-fbd8-493d-b8b5-14417b04caf4\" (UID: \"2f5576e3-fbd8-493d-b8b5-14417b04caf4\") " Oct 08 06:52:18 crc kubenswrapper[4958]: I1008 06:52:18.955773 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9rzz\" (UniqueName: \"kubernetes.io/projected/85509aa5-d56b-4bd9-bcd0-9570927a885d-kube-api-access-g9rzz\") pod \"85509aa5-d56b-4bd9-bcd0-9570927a885d\" (UID: \"85509aa5-d56b-4bd9-bcd0-9570927a885d\") " Oct 08 06:52:18 crc kubenswrapper[4958]: I1008 06:52:18.956355 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt7l2\" (UniqueName: \"kubernetes.io/projected/43525d65-17da-4bd4-a6bd-c21781daf8ff-kube-api-access-gt7l2\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:18 crc kubenswrapper[4958]: I1008 06:52:18.959512 4958 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85509aa5-d56b-4bd9-bcd0-9570927a885d-kube-api-access-g9rzz" (OuterVolumeSpecName: "kube-api-access-g9rzz") pod "85509aa5-d56b-4bd9-bcd0-9570927a885d" (UID: "85509aa5-d56b-4bd9-bcd0-9570927a885d"). InnerVolumeSpecName "kube-api-access-g9rzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:18 crc kubenswrapper[4958]: I1008 06:52:18.960082 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f5576e3-fbd8-493d-b8b5-14417b04caf4-kube-api-access-flhmx" (OuterVolumeSpecName: "kube-api-access-flhmx") pod "2f5576e3-fbd8-493d-b8b5-14417b04caf4" (UID: "2f5576e3-fbd8-493d-b8b5-14417b04caf4"). InnerVolumeSpecName "kube-api-access-flhmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:19 crc kubenswrapper[4958]: I1008 06:52:19.057181 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9rzz\" (UniqueName: \"kubernetes.io/projected/85509aa5-d56b-4bd9-bcd0-9570927a885d-kube-api-access-g9rzz\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:19 crc kubenswrapper[4958]: I1008 06:52:19.057209 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flhmx\" (UniqueName: \"kubernetes.io/projected/2f5576e3-fbd8-493d-b8b5-14417b04caf4-kube-api-access-flhmx\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:19 crc kubenswrapper[4958]: I1008 06:52:19.382202 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-txddw" event={"ID":"2f5576e3-fbd8-493d-b8b5-14417b04caf4","Type":"ContainerDied","Data":"9a3b1eced89158fc36a72509f02ff9910b818dd16392dd0d1da7bfb6dfa1f54f"} Oct 08 06:52:19 crc kubenswrapper[4958]: I1008 06:52:19.382239 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a3b1eced89158fc36a72509f02ff9910b818dd16392dd0d1da7bfb6dfa1f54f" Oct 08 06:52:19 crc kubenswrapper[4958]: I1008 
06:52:19.382292 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-txddw" Oct 08 06:52:19 crc kubenswrapper[4958]: I1008 06:52:19.384764 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8wzdh" event={"ID":"85509aa5-d56b-4bd9-bcd0-9570927a885d","Type":"ContainerDied","Data":"c9b2bfbdf92eea5d979671bf5946e45c43018ead34e0c49127eb5d269d60900e"} Oct 08 06:52:19 crc kubenswrapper[4958]: I1008 06:52:19.384805 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9b2bfbdf92eea5d979671bf5946e45c43018ead34e0c49127eb5d269d60900e" Oct 08 06:52:19 crc kubenswrapper[4958]: I1008 06:52:19.384783 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8wzdh" Oct 08 06:52:19 crc kubenswrapper[4958]: I1008 06:52:19.386149 4958 generic.go:334] "Generic (PLEG): container finished" podID="0c2291ff-e9af-4016-a519-1d8251ffa975" containerID="d2b992a17d90846839e6e707ee029a59a8ab08f95683d9d6e29937a29ffe21a0" exitCode=0 Oct 08 06:52:19 crc kubenswrapper[4958]: I1008 06:52:19.386213 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" event={"ID":"0c2291ff-e9af-4016-a519-1d8251ffa975","Type":"ContainerDied","Data":"d2b992a17d90846839e6e707ee029a59a8ab08f95683d9d6e29937a29ffe21a0"} Oct 08 06:52:19 crc kubenswrapper[4958]: I1008 06:52:19.386238 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" event={"ID":"0c2291ff-e9af-4016-a519-1d8251ffa975","Type":"ContainerStarted","Data":"4bb99bc17e1f37ff2784de371b08918d6df75b82540263c5361f4a7bb437a2f8"} Oct 08 06:52:19 crc kubenswrapper[4958]: I1008 06:52:19.389899 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-th7cg" Oct 08 06:52:19 crc kubenswrapper[4958]: I1008 06:52:19.390001 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-th7cg" event={"ID":"43525d65-17da-4bd4-a6bd-c21781daf8ff","Type":"ContainerDied","Data":"6ec46c6437b66e91aa3ba0d2d216412e657f8dafe4eb72a8a75692d32809f627"} Oct 08 06:52:19 crc kubenswrapper[4958]: I1008 06:52:19.390708 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ec46c6437b66e91aa3ba0d2d216412e657f8dafe4eb72a8a75692d32809f627" Oct 08 06:52:19 crc kubenswrapper[4958]: I1008 06:52:19.513507 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-spvp9" Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.052351 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.209532 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-additional-scripts\") pod \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.209572 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-log-ovn\") pod \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.209639 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-scripts\") pod \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\" (UID: 
\"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.209660 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89n9q\" (UniqueName: \"kubernetes.io/projected/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-kube-api-access-89n9q\") pod \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.209684 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-run-ovn\") pod \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.209709 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3e8f3f19-b063-45a3-806d-1f867cb5c0d8" (UID: "3e8f3f19-b063-45a3-806d-1f867cb5c0d8"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.209767 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-run\") pod \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\" (UID: \"3e8f3f19-b063-45a3-806d-1f867cb5c0d8\") " Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.210078 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3e8f3f19-b063-45a3-806d-1f867cb5c0d8" (UID: "3e8f3f19-b063-45a3-806d-1f867cb5c0d8"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.210089 4958 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.210110 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-run" (OuterVolumeSpecName: "var-run") pod "3e8f3f19-b063-45a3-806d-1f867cb5c0d8" (UID: "3e8f3f19-b063-45a3-806d-1f867cb5c0d8"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.210667 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3e8f3f19-b063-45a3-806d-1f867cb5c0d8" (UID: "3e8f3f19-b063-45a3-806d-1f867cb5c0d8"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.210838 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-scripts" (OuterVolumeSpecName: "scripts") pod "3e8f3f19-b063-45a3-806d-1f867cb5c0d8" (UID: "3e8f3f19-b063-45a3-806d-1f867cb5c0d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.214116 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-kube-api-access-89n9q" (OuterVolumeSpecName: "kube-api-access-89n9q") pod "3e8f3f19-b063-45a3-806d-1f867cb5c0d8" (UID: "3e8f3f19-b063-45a3-806d-1f867cb5c0d8"). 
InnerVolumeSpecName "kube-api-access-89n9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.312033 4958 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.312072 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.312085 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89n9q\" (UniqueName: \"kubernetes.io/projected/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-kube-api-access-89n9q\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.312100 4958 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.312112 4958 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e8f3f19-b063-45a3-806d-1f867cb5c0d8-var-run\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.424130 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-spvp9-config-fkpvc" event={"ID":"3e8f3f19-b063-45a3-806d-1f867cb5c0d8","Type":"ContainerDied","Data":"675122eeb7eb2e498f90c08052c099ac62b28df2a090f88066df2290ff81a468"} Oct 08 06:52:22 crc kubenswrapper[4958]: I1008 06:52:22.424189 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="675122eeb7eb2e498f90c08052c099ac62b28df2a090f88066df2290ff81a468" Oct 08 06:52:22 crc 
kubenswrapper[4958]: I1008 06:52:22.424205 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-spvp9-config-fkpvc" Oct 08 06:52:23 crc kubenswrapper[4958]: I1008 06:52:23.203252 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-spvp9-config-fkpvc"] Oct 08 06:52:23 crc kubenswrapper[4958]: I1008 06:52:23.211749 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-spvp9-config-fkpvc"] Oct 08 06:52:23 crc kubenswrapper[4958]: I1008 06:52:23.262191 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:52:23 crc kubenswrapper[4958]: I1008 06:52:23.593765 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e8f3f19-b063-45a3-806d-1f867cb5c0d8" path="/var/lib/kubelet/pods/3e8f3f19-b063-45a3-806d-1f867cb5c0d8/volumes" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.549315 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e33b-account-create-mzg84"] Oct 08 06:52:24 crc kubenswrapper[4958]: E1008 06:52:24.549983 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43525d65-17da-4bd4-a6bd-c21781daf8ff" containerName="mariadb-database-create" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.549999 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="43525d65-17da-4bd4-a6bd-c21781daf8ff" containerName="mariadb-database-create" Oct 08 06:52:24 crc kubenswrapper[4958]: E1008 06:52:24.550018 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8f3f19-b063-45a3-806d-1f867cb5c0d8" containerName="ovn-config" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.550026 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8f3f19-b063-45a3-806d-1f867cb5c0d8" containerName="ovn-config" Oct 08 06:52:24 crc kubenswrapper[4958]: E1008 06:52:24.550043 4958 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="85509aa5-d56b-4bd9-bcd0-9570927a885d" containerName="mariadb-database-create" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.550051 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="85509aa5-d56b-4bd9-bcd0-9570927a885d" containerName="mariadb-database-create" Oct 08 06:52:24 crc kubenswrapper[4958]: E1008 06:52:24.550066 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f5576e3-fbd8-493d-b8b5-14417b04caf4" containerName="mariadb-database-create" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.550073 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f5576e3-fbd8-493d-b8b5-14417b04caf4" containerName="mariadb-database-create" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.550297 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8f3f19-b063-45a3-806d-1f867cb5c0d8" containerName="ovn-config" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.550313 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="85509aa5-d56b-4bd9-bcd0-9570927a885d" containerName="mariadb-database-create" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.550333 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="43525d65-17da-4bd4-a6bd-c21781daf8ff" containerName="mariadb-database-create" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.550351 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f5576e3-fbd8-493d-b8b5-14417b04caf4" containerName="mariadb-database-create" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.550904 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e33b-account-create-mzg84" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.553214 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.566078 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e33b-account-create-mzg84"] Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.654872 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0295-account-create-kssqx"] Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.656108 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0295-account-create-kssqx" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.656718 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m4kl\" (UniqueName: \"kubernetes.io/projected/1b66abd6-edd9-4689-b24a-4803735ee82c-kube-api-access-9m4kl\") pod \"cinder-e33b-account-create-mzg84\" (UID: \"1b66abd6-edd9-4689-b24a-4803735ee82c\") " pod="openstack/cinder-e33b-account-create-mzg84" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.660759 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.663561 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0295-account-create-kssqx"] Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.758568 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjp4v\" (UniqueName: \"kubernetes.io/projected/f01f5ddc-a7e9-40e8-b70f-107fa5130c44-kube-api-access-vjp4v\") pod \"barbican-0295-account-create-kssqx\" (UID: \"f01f5ddc-a7e9-40e8-b70f-107fa5130c44\") " pod="openstack/barbican-0295-account-create-kssqx" Oct 08 06:52:24 crc kubenswrapper[4958]: 
I1008 06:52:24.758616 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m4kl\" (UniqueName: \"kubernetes.io/projected/1b66abd6-edd9-4689-b24a-4803735ee82c-kube-api-access-9m4kl\") pod \"cinder-e33b-account-create-mzg84\" (UID: \"1b66abd6-edd9-4689-b24a-4803735ee82c\") " pod="openstack/cinder-e33b-account-create-mzg84" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.775856 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m4kl\" (UniqueName: \"kubernetes.io/projected/1b66abd6-edd9-4689-b24a-4803735ee82c-kube-api-access-9m4kl\") pod \"cinder-e33b-account-create-mzg84\" (UID: \"1b66abd6-edd9-4689-b24a-4803735ee82c\") " pod="openstack/cinder-e33b-account-create-mzg84" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.861101 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjp4v\" (UniqueName: \"kubernetes.io/projected/f01f5ddc-a7e9-40e8-b70f-107fa5130c44-kube-api-access-vjp4v\") pod \"barbican-0295-account-create-kssqx\" (UID: \"f01f5ddc-a7e9-40e8-b70f-107fa5130c44\") " pod="openstack/barbican-0295-account-create-kssqx" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.873279 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e33b-account-create-mzg84" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.876657 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjp4v\" (UniqueName: \"kubernetes.io/projected/f01f5ddc-a7e9-40e8-b70f-107fa5130c44-kube-api-access-vjp4v\") pod \"barbican-0295-account-create-kssqx\" (UID: \"f01f5ddc-a7e9-40e8-b70f-107fa5130c44\") " pod="openstack/barbican-0295-account-create-kssqx" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.944088 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8595-account-create-snnb5"] Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.945051 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8595-account-create-snnb5" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.947150 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.958863 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8595-account-create-snnb5"] Oct 08 06:52:24 crc kubenswrapper[4958]: I1008 06:52:24.978615 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0295-account-create-kssqx" Oct 08 06:52:25 crc kubenswrapper[4958]: I1008 06:52:25.063903 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlgvn\" (UniqueName: \"kubernetes.io/projected/cfae700b-a979-49b7-839d-bd3aa3363423-kube-api-access-rlgvn\") pod \"neutron-8595-account-create-snnb5\" (UID: \"cfae700b-a979-49b7-839d-bd3aa3363423\") " pod="openstack/neutron-8595-account-create-snnb5" Oct 08 06:52:25 crc kubenswrapper[4958]: I1008 06:52:25.165878 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlgvn\" (UniqueName: \"kubernetes.io/projected/cfae700b-a979-49b7-839d-bd3aa3363423-kube-api-access-rlgvn\") pod \"neutron-8595-account-create-snnb5\" (UID: \"cfae700b-a979-49b7-839d-bd3aa3363423\") " pod="openstack/neutron-8595-account-create-snnb5" Oct 08 06:52:25 crc kubenswrapper[4958]: I1008 06:52:25.194663 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlgvn\" (UniqueName: \"kubernetes.io/projected/cfae700b-a979-49b7-839d-bd3aa3363423-kube-api-access-rlgvn\") pod \"neutron-8595-account-create-snnb5\" (UID: \"cfae700b-a979-49b7-839d-bd3aa3363423\") " pod="openstack/neutron-8595-account-create-snnb5" Oct 08 06:52:25 crc kubenswrapper[4958]: I1008 06:52:25.261872 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8595-account-create-snnb5" Oct 08 06:52:30 crc kubenswrapper[4958]: I1008 06:52:30.343207 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e33b-account-create-mzg84"] Oct 08 06:52:30 crc kubenswrapper[4958]: I1008 06:52:30.405570 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0295-account-create-kssqx"] Oct 08 06:52:30 crc kubenswrapper[4958]: I1008 06:52:30.411969 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8595-account-create-snnb5"] Oct 08 06:52:30 crc kubenswrapper[4958]: I1008 06:52:30.534712 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f72ls" event={"ID":"3135b952-56ad-4f48-a839-632adc6b8856","Type":"ContainerStarted","Data":"153a18a4416f06d0635bfb0680262cdb03c0d67457a5650a0a56b35b8c9c2dd7"} Oct 08 06:52:30 crc kubenswrapper[4958]: I1008 06:52:30.536761 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0295-account-create-kssqx" event={"ID":"f01f5ddc-a7e9-40e8-b70f-107fa5130c44","Type":"ContainerStarted","Data":"cb1f808dad2dcacabdcf9467cfe9a7bdeb095ce779fc2a7415c305beb10df649"} Oct 08 06:52:30 crc kubenswrapper[4958]: I1008 06:52:30.538690 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e33b-account-create-mzg84" event={"ID":"1b66abd6-edd9-4689-b24a-4803735ee82c","Type":"ContainerStarted","Data":"4590c8a910fba3039286cd5008abed771287df6ab7eec3b685f1fb0204c2403e"} Oct 08 06:52:30 crc kubenswrapper[4958]: I1008 06:52:30.538721 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e33b-account-create-mzg84" event={"ID":"1b66abd6-edd9-4689-b24a-4803735ee82c","Type":"ContainerStarted","Data":"4402b2ff65efd598946ff0125dc310f065d7e72db225b06ea95d32beba8c93a6"} Oct 08 06:52:30 crc kubenswrapper[4958]: I1008 06:52:30.544228 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" event={"ID":"0c2291ff-e9af-4016-a519-1d8251ffa975","Type":"ContainerStarted","Data":"ece93e76e93080675a52102e60d92b1ae621efbede31e3c0111769ac4d2d73cc"} Oct 08 06:52:30 crc kubenswrapper[4958]: I1008 06:52:30.544377 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:30 crc kubenswrapper[4958]: I1008 06:52:30.545605 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8595-account-create-snnb5" event={"ID":"cfae700b-a979-49b7-839d-bd3aa3363423","Type":"ContainerStarted","Data":"e93d2c073af6f77b458b8eb0e2e3bc166117f0e022fc52c2ffd0d76d220df4b0"} Oct 08 06:52:30 crc kubenswrapper[4958]: I1008 06:52:30.554634 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-f72ls" podStartSLOduration=3.307536208 podStartE2EDuration="16.554615404s" podCreationTimestamp="2025-10-08 06:52:14 +0000 UTC" firstStartedPulling="2025-10-08 06:52:16.646922258 +0000 UTC m=+1079.776614859" lastFinishedPulling="2025-10-08 06:52:29.894001444 +0000 UTC m=+1093.023694055" observedRunningTime="2025-10-08 06:52:30.547243475 +0000 UTC m=+1093.676936076" watchObservedRunningTime="2025-10-08 06:52:30.554615404 +0000 UTC m=+1093.684308005" Oct 08 06:52:30 crc kubenswrapper[4958]: I1008 06:52:30.570975 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-e33b-account-create-mzg84" podStartSLOduration=6.570959455 podStartE2EDuration="6.570959455s" podCreationTimestamp="2025-10-08 06:52:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:52:30.566783892 +0000 UTC m=+1093.696476503" watchObservedRunningTime="2025-10-08 06:52:30.570959455 +0000 UTC m=+1093.700652056" Oct 08 06:52:30 crc kubenswrapper[4958]: I1008 06:52:30.584745 4958 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" podStartSLOduration=13.584728486 podStartE2EDuration="13.584728486s" podCreationTimestamp="2025-10-08 06:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:52:30.584193052 +0000 UTC m=+1093.713885663" watchObservedRunningTime="2025-10-08 06:52:30.584728486 +0000 UTC m=+1093.714421077" Oct 08 06:52:31 crc kubenswrapper[4958]: I1008 06:52:31.556500 4958 generic.go:334] "Generic (PLEG): container finished" podID="f01f5ddc-a7e9-40e8-b70f-107fa5130c44" containerID="d783604610a9189f256be5e1720df1527b094750fa34cd27977a42f5118d7d5b" exitCode=0 Oct 08 06:52:31 crc kubenswrapper[4958]: I1008 06:52:31.556561 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0295-account-create-kssqx" event={"ID":"f01f5ddc-a7e9-40e8-b70f-107fa5130c44","Type":"ContainerDied","Data":"d783604610a9189f256be5e1720df1527b094750fa34cd27977a42f5118d7d5b"} Oct 08 06:52:31 crc kubenswrapper[4958]: I1008 06:52:31.559150 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-pjrq5" event={"ID":"f98db36e-5db6-40ec-a540-3b50b4ae0749","Type":"ContainerStarted","Data":"feabc94a85f79793c9a91938bc1f0e46c44fc952c2688f0252f2eb939bc20cd7"} Oct 08 06:52:31 crc kubenswrapper[4958]: I1008 06:52:31.561827 4958 generic.go:334] "Generic (PLEG): container finished" podID="1b66abd6-edd9-4689-b24a-4803735ee82c" containerID="4590c8a910fba3039286cd5008abed771287df6ab7eec3b685f1fb0204c2403e" exitCode=0 Oct 08 06:52:31 crc kubenswrapper[4958]: I1008 06:52:31.561903 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e33b-account-create-mzg84" event={"ID":"1b66abd6-edd9-4689-b24a-4803735ee82c","Type":"ContainerDied","Data":"4590c8a910fba3039286cd5008abed771287df6ab7eec3b685f1fb0204c2403e"} Oct 08 06:52:31 crc kubenswrapper[4958]: I1008 06:52:31.564141 4958 
generic.go:334] "Generic (PLEG): container finished" podID="cfae700b-a979-49b7-839d-bd3aa3363423" containerID="812d87e862c8d47a60a50d5e027b634ae93f921abb1f5a6989f2c0dcf3164616" exitCode=0 Oct 08 06:52:31 crc kubenswrapper[4958]: I1008 06:52:31.564330 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8595-account-create-snnb5" event={"ID":"cfae700b-a979-49b7-839d-bd3aa3363423","Type":"ContainerDied","Data":"812d87e862c8d47a60a50d5e027b634ae93f921abb1f5a6989f2c0dcf3164616"} Oct 08 06:52:31 crc kubenswrapper[4958]: I1008 06:52:31.647735 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-pjrq5" podStartSLOduration=4.176351595 podStartE2EDuration="18.647703504s" podCreationTimestamp="2025-10-08 06:52:13 +0000 UTC" firstStartedPulling="2025-10-08 06:52:15.481140984 +0000 UTC m=+1078.610833585" lastFinishedPulling="2025-10-08 06:52:29.952492883 +0000 UTC m=+1093.082185494" observedRunningTime="2025-10-08 06:52:31.64532892 +0000 UTC m=+1094.775021561" watchObservedRunningTime="2025-10-08 06:52:31.647703504 +0000 UTC m=+1094.777396155" Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.199330 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8595-account-create-snnb5" Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.207450 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e33b-account-create-mzg84" Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.213603 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0295-account-create-kssqx" Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.315692 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m4kl\" (UniqueName: \"kubernetes.io/projected/1b66abd6-edd9-4689-b24a-4803735ee82c-kube-api-access-9m4kl\") pod \"1b66abd6-edd9-4689-b24a-4803735ee82c\" (UID: \"1b66abd6-edd9-4689-b24a-4803735ee82c\") " Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.316038 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjp4v\" (UniqueName: \"kubernetes.io/projected/f01f5ddc-a7e9-40e8-b70f-107fa5130c44-kube-api-access-vjp4v\") pod \"f01f5ddc-a7e9-40e8-b70f-107fa5130c44\" (UID: \"f01f5ddc-a7e9-40e8-b70f-107fa5130c44\") " Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.316094 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlgvn\" (UniqueName: \"kubernetes.io/projected/cfae700b-a979-49b7-839d-bd3aa3363423-kube-api-access-rlgvn\") pod \"cfae700b-a979-49b7-839d-bd3aa3363423\" (UID: \"cfae700b-a979-49b7-839d-bd3aa3363423\") " Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.324553 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfae700b-a979-49b7-839d-bd3aa3363423-kube-api-access-rlgvn" (OuterVolumeSpecName: "kube-api-access-rlgvn") pod "cfae700b-a979-49b7-839d-bd3aa3363423" (UID: "cfae700b-a979-49b7-839d-bd3aa3363423"). InnerVolumeSpecName "kube-api-access-rlgvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.324779 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b66abd6-edd9-4689-b24a-4803735ee82c-kube-api-access-9m4kl" (OuterVolumeSpecName: "kube-api-access-9m4kl") pod "1b66abd6-edd9-4689-b24a-4803735ee82c" (UID: "1b66abd6-edd9-4689-b24a-4803735ee82c"). InnerVolumeSpecName "kube-api-access-9m4kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.325510 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f01f5ddc-a7e9-40e8-b70f-107fa5130c44-kube-api-access-vjp4v" (OuterVolumeSpecName: "kube-api-access-vjp4v") pod "f01f5ddc-a7e9-40e8-b70f-107fa5130c44" (UID: "f01f5ddc-a7e9-40e8-b70f-107fa5130c44"). InnerVolumeSpecName "kube-api-access-vjp4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.418750 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjp4v\" (UniqueName: \"kubernetes.io/projected/f01f5ddc-a7e9-40e8-b70f-107fa5130c44-kube-api-access-vjp4v\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.418817 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlgvn\" (UniqueName: \"kubernetes.io/projected/cfae700b-a979-49b7-839d-bd3aa3363423-kube-api-access-rlgvn\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.418827 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m4kl\" (UniqueName: \"kubernetes.io/projected/1b66abd6-edd9-4689-b24a-4803735ee82c-kube-api-access-9m4kl\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.586341 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e33b-account-create-mzg84" Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.588247 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8595-account-create-snnb5" Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.588852 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e33b-account-create-mzg84" event={"ID":"1b66abd6-edd9-4689-b24a-4803735ee82c","Type":"ContainerDied","Data":"4402b2ff65efd598946ff0125dc310f065d7e72db225b06ea95d32beba8c93a6"} Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.588903 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4402b2ff65efd598946ff0125dc310f065d7e72db225b06ea95d32beba8c93a6" Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.588924 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8595-account-create-snnb5" event={"ID":"cfae700b-a979-49b7-839d-bd3aa3363423","Type":"ContainerDied","Data":"e93d2c073af6f77b458b8eb0e2e3bc166117f0e022fc52c2ffd0d76d220df4b0"} Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.588969 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e93d2c073af6f77b458b8eb0e2e3bc166117f0e022fc52c2ffd0d76d220df4b0" Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.590139 4958 generic.go:334] "Generic (PLEG): container finished" podID="3135b952-56ad-4f48-a839-632adc6b8856" containerID="153a18a4416f06d0635bfb0680262cdb03c0d67457a5650a0a56b35b8c9c2dd7" exitCode=0 Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.590237 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f72ls" event={"ID":"3135b952-56ad-4f48-a839-632adc6b8856","Type":"ContainerDied","Data":"153a18a4416f06d0635bfb0680262cdb03c0d67457a5650a0a56b35b8c9c2dd7"} Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.592427 4958 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-0295-account-create-kssqx" event={"ID":"f01f5ddc-a7e9-40e8-b70f-107fa5130c44","Type":"ContainerDied","Data":"cb1f808dad2dcacabdcf9467cfe9a7bdeb095ce779fc2a7415c305beb10df649"} Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.592486 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb1f808dad2dcacabdcf9467cfe9a7bdeb095ce779fc2a7415c305beb10df649" Oct 08 06:52:33 crc kubenswrapper[4958]: I1008 06:52:33.592527 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0295-account-create-kssqx" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.463056 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-f72ls" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.556336 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whffz\" (UniqueName: \"kubernetes.io/projected/3135b952-56ad-4f48-a839-632adc6b8856-kube-api-access-whffz\") pod \"3135b952-56ad-4f48-a839-632adc6b8856\" (UID: \"3135b952-56ad-4f48-a839-632adc6b8856\") " Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.556456 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3135b952-56ad-4f48-a839-632adc6b8856-config-data\") pod \"3135b952-56ad-4f48-a839-632adc6b8856\" (UID: \"3135b952-56ad-4f48-a839-632adc6b8856\") " Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.556638 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3135b952-56ad-4f48-a839-632adc6b8856-combined-ca-bundle\") pod \"3135b952-56ad-4f48-a839-632adc6b8856\" (UID: \"3135b952-56ad-4f48-a839-632adc6b8856\") " Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.573084 4958 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3135b952-56ad-4f48-a839-632adc6b8856-kube-api-access-whffz" (OuterVolumeSpecName: "kube-api-access-whffz") pod "3135b952-56ad-4f48-a839-632adc6b8856" (UID: "3135b952-56ad-4f48-a839-632adc6b8856"). InnerVolumeSpecName "kube-api-access-whffz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.602095 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3135b952-56ad-4f48-a839-632adc6b8856-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3135b952-56ad-4f48-a839-632adc6b8856" (UID: "3135b952-56ad-4f48-a839-632adc6b8856"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.616215 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-f72ls" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.657527 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f72ls" event={"ID":"3135b952-56ad-4f48-a839-632adc6b8856","Type":"ContainerDied","Data":"f5777e3369cf2eef1f6d5288d170dc9cd96babc77c91e81bee760559a320ffb3"} Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.657572 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5777e3369cf2eef1f6d5288d170dc9cd96babc77c91e81bee760559a320ffb3" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.660544 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whffz\" (UniqueName: \"kubernetes.io/projected/3135b952-56ad-4f48-a839-632adc6b8856-kube-api-access-whffz\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.660776 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3135b952-56ad-4f48-a839-632adc6b8856-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.666645 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3135b952-56ad-4f48-a839-632adc6b8856-config-data" (OuterVolumeSpecName: "config-data") pod "3135b952-56ad-4f48-a839-632adc6b8856" (UID: "3135b952-56ad-4f48-a839-632adc6b8856"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.764274 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3135b952-56ad-4f48-a839-632adc6b8856-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.834384 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hj87r"] Oct 08 06:52:35 crc kubenswrapper[4958]: E1008 06:52:35.836031 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b66abd6-edd9-4689-b24a-4803735ee82c" containerName="mariadb-account-create" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.836140 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b66abd6-edd9-4689-b24a-4803735ee82c" containerName="mariadb-account-create" Oct 08 06:52:35 crc kubenswrapper[4958]: E1008 06:52:35.836215 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfae700b-a979-49b7-839d-bd3aa3363423" containerName="mariadb-account-create" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.836302 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfae700b-a979-49b7-839d-bd3aa3363423" containerName="mariadb-account-create" Oct 08 06:52:35 crc kubenswrapper[4958]: E1008 06:52:35.836384 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01f5ddc-a7e9-40e8-b70f-107fa5130c44" containerName="mariadb-account-create" Oct 08 
06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.836449 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01f5ddc-a7e9-40e8-b70f-107fa5130c44" containerName="mariadb-account-create" Oct 08 06:52:35 crc kubenswrapper[4958]: E1008 06:52:35.836543 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3135b952-56ad-4f48-a839-632adc6b8856" containerName="keystone-db-sync" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.836819 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3135b952-56ad-4f48-a839-632adc6b8856" containerName="keystone-db-sync" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.837162 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfae700b-a979-49b7-839d-bd3aa3363423" containerName="mariadb-account-create" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.837253 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f01f5ddc-a7e9-40e8-b70f-107fa5130c44" containerName="mariadb-account-create" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.840123 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3135b952-56ad-4f48-a839-632adc6b8856" containerName="keystone-db-sync" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.840244 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b66abd6-edd9-4689-b24a-4803735ee82c" containerName="mariadb-account-create" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.841047 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.843456 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hj87r"] Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.869893 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-8fq9b"] Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.870201 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" podUID="0c2291ff-e9af-4016-a519-1d8251ffa975" containerName="dnsmasq-dns" containerID="cri-o://ece93e76e93080675a52102e60d92b1ae621efbede31e3c0111769ac4d2d73cc" gracePeriod=10 Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.896299 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" podUID="0c2291ff-e9af-4016-a519-1d8251ffa975" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection reset by peer" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.913747 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6877b6c9cc-q5zh2"] Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.916134 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.964099 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6877b6c9cc-q5zh2"] Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.969035 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-fernet-keys\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.969114 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-config-data\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.969152 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-combined-ca-bundle\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.969188 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4hbd\" (UniqueName: \"kubernetes.io/projected/2e6b4ff7-3777-4a13-bbf1-11105eae62de-kube-api-access-t4hbd\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.969273 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-scripts\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:35 crc kubenswrapper[4958]: I1008 06:52:35.969302 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-credential-keys\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.025365 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-mcqsd"] Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.026334 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mcqsd" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.030742 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.034602 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.034792 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xzt2f" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.037223 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mcqsd"] Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.070628 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-fernet-keys\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:36 crc 
kubenswrapper[4958]: I1008 06:52:36.070674 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-config-data\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.070704 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-combined-ca-bundle\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.070723 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4hbd\" (UniqueName: \"kubernetes.io/projected/2e6b4ff7-3777-4a13-bbf1-11105eae62de-kube-api-access-t4hbd\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.070745 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-scripts\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.070764 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-dns-swift-storage-0\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.071538 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-credential-keys\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.071634 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-config\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.071655 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-ovsdbserver-sb\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.071674 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt5vg\" (UniqueName: \"kubernetes.io/projected/fe6d6350-647e-435f-957e-39fe082462bd-kube-api-access-jt5vg\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.071694 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-dns-svc\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.071714 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-ovsdbserver-nb\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.072335 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.072758 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.075885 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-combined-ca-bundle\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.077763 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-config-data\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.084278 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-fernet-keys\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.085564 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-scripts\") pod 
\"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.098419 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-credential-keys\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.099561 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4hbd\" (UniqueName: \"kubernetes.io/projected/2e6b4ff7-3777-4a13-bbf1-11105eae62de-kube-api-access-t4hbd\") pod \"keystone-bootstrap-hj87r\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.118024 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-rd4bk"] Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.119054 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.124852 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.125104 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-htpjt" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.125232 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.125319 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.127153 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.131905 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.132581 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.135449 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rd4bk"] Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.142240 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.168791 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.172904 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba0793ca-c021-446b-914c-06d31ff87445-config\") pod \"neutron-db-sync-mcqsd\" (UID: \"ba0793ca-c021-446b-914c-06d31ff87445\") " pod="openstack/neutron-db-sync-mcqsd" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.172988 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-dns-swift-storage-0\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.173014 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba0793ca-c021-446b-914c-06d31ff87445-combined-ca-bundle\") pod \"neutron-db-sync-mcqsd\" (UID: 
\"ba0793ca-c021-446b-914c-06d31ff87445\") " pod="openstack/neutron-db-sync-mcqsd" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.173073 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw7pb\" (UniqueName: \"kubernetes.io/projected/ba0793ca-c021-446b-914c-06d31ff87445-kube-api-access-jw7pb\") pod \"neutron-db-sync-mcqsd\" (UID: \"ba0793ca-c021-446b-914c-06d31ff87445\") " pod="openstack/neutron-db-sync-mcqsd" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.173095 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-config\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.173115 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-ovsdbserver-sb\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.173135 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt5vg\" (UniqueName: \"kubernetes.io/projected/fe6d6350-647e-435f-957e-39fe082462bd-kube-api-access-jt5vg\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.173152 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-dns-svc\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " 
pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.173175 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-ovsdbserver-nb\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.174049 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-ovsdbserver-nb\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.174604 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-dns-swift-storage-0\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.176455 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-ovsdbserver-sb\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.177039 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-dns-svc\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: 
I1008 06:52:36.181437 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-config\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.211111 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-h7mqh"] Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.213235 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h7mqh" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.215549 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cppfl" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.216528 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt5vg\" (UniqueName: \"kubernetes.io/projected/fe6d6350-647e-435f-957e-39fe082462bd-kube-api-access-jt5vg\") pod \"dnsmasq-dns-6877b6c9cc-q5zh2\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.216798 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.227004 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h7mqh"] Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.242443 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.275814 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba0793ca-c021-446b-914c-06d31ff87445-config\") pod \"neutron-db-sync-mcqsd\" (UID: \"ba0793ca-c021-446b-914c-06d31ff87445\") " pod="openstack/neutron-db-sync-mcqsd" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.275867 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-config-data\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.275887 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32a2642e-02a6-45f7-aa64-a98d0fc84c01-log-httpd\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.275924 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-config-data\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.275977 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba0793ca-c021-446b-914c-06d31ff87445-combined-ca-bundle\") pod \"neutron-db-sync-mcqsd\" (UID: \"ba0793ca-c021-446b-914c-06d31ff87445\") " pod="openstack/neutron-db-sync-mcqsd" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.275997 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f07f416-2847-46dd-b003-1cb2f1a9dda9-etc-machine-id\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.276027 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-scripts\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.276053 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s72r\" (UniqueName: \"kubernetes.io/projected/32a2642e-02a6-45f7-aa64-a98d0fc84c01-kube-api-access-4s72r\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.276078 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-db-sync-config-data\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.276099 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-combined-ca-bundle\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.276113 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.276138 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw7pb\" (UniqueName: \"kubernetes.io/projected/ba0793ca-c021-446b-914c-06d31ff87445-kube-api-access-jw7pb\") pod \"neutron-db-sync-mcqsd\" (UID: \"ba0793ca-c021-446b-914c-06d31ff87445\") " pod="openstack/neutron-db-sync-mcqsd" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.276155 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32a2642e-02a6-45f7-aa64-a98d0fc84c01-run-httpd\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.276171 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67q2b\" (UniqueName: \"kubernetes.io/projected/4f07f416-2847-46dd-b003-1cb2f1a9dda9-kube-api-access-67q2b\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.276194 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-scripts\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.276211 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.281703 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba0793ca-c021-446b-914c-06d31ff87445-combined-ca-bundle\") pod \"neutron-db-sync-mcqsd\" (UID: \"ba0793ca-c021-446b-914c-06d31ff87445\") " pod="openstack/neutron-db-sync-mcqsd" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.281761 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-plzfw"] Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.282580 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba0793ca-c021-446b-914c-06d31ff87445-config\") pod \"neutron-db-sync-mcqsd\" (UID: \"ba0793ca-c021-446b-914c-06d31ff87445\") " pod="openstack/neutron-db-sync-mcqsd" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.282911 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.284483 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-pt74v" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.295184 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.295488 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.306012 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6877b6c9cc-q5zh2"] Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.306400 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw7pb\" (UniqueName: \"kubernetes.io/projected/ba0793ca-c021-446b-914c-06d31ff87445-kube-api-access-jw7pb\") pod \"neutron-db-sync-mcqsd\" (UID: \"ba0793ca-c021-446b-914c-06d31ff87445\") " pod="openstack/neutron-db-sync-mcqsd" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.319738 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-plzfw"] Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.325832 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d96c67b5-vhhg5"] Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.327274 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.334258 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d96c67b5-vhhg5"] Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.344683 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mcqsd" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.378491 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-scripts\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.378740 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gf72\" (UniqueName: \"kubernetes.io/projected/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-kube-api-access-6gf72\") pod \"placement-db-sync-plzfw\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.378784 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.378811 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-combined-ca-bundle\") pod \"placement-db-sync-plzfw\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.378847 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-logs\") pod \"placement-db-sync-plzfw\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: 
I1008 06:52:36.378872 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-config-data\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.378887 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32a2642e-02a6-45f7-aa64-a98d0fc84c01-log-httpd\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.378923 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-config-data\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.378961 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f07f416-2847-46dd-b003-1cb2f1a9dda9-etc-machine-id\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.378980 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-scripts\") pod \"placement-db-sync-plzfw\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.379003 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/530f954e-7750-4f47-896a-7568c626c8ac-db-sync-config-data\") pod \"barbican-db-sync-h7mqh\" (UID: \"530f954e-7750-4f47-896a-7568c626c8ac\") " pod="openstack/barbican-db-sync-h7mqh" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.379031 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-scripts\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.379053 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/530f954e-7750-4f47-896a-7568c626c8ac-combined-ca-bundle\") pod \"barbican-db-sync-h7mqh\" (UID: \"530f954e-7750-4f47-896a-7568c626c8ac\") " pod="openstack/barbican-db-sync-h7mqh" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.379076 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s72r\" (UniqueName: \"kubernetes.io/projected/32a2642e-02a6-45f7-aa64-a98d0fc84c01-kube-api-access-4s72r\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.379100 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4gjp\" (UniqueName: \"kubernetes.io/projected/530f954e-7750-4f47-896a-7568c626c8ac-kube-api-access-z4gjp\") pod \"barbican-db-sync-h7mqh\" (UID: \"530f954e-7750-4f47-896a-7568c626c8ac\") " pod="openstack/barbican-db-sync-h7mqh" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.379116 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-db-sync-config-data\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.379136 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-combined-ca-bundle\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.379149 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.379169 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-config-data\") pod \"placement-db-sync-plzfw\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.379186 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32a2642e-02a6-45f7-aa64-a98d0fc84c01-run-httpd\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.379201 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67q2b\" (UniqueName: \"kubernetes.io/projected/4f07f416-2847-46dd-b003-1cb2f1a9dda9-kube-api-access-67q2b\") pod \"cinder-db-sync-rd4bk\" (UID: 
\"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.380126 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f07f416-2847-46dd-b003-1cb2f1a9dda9-etc-machine-id\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.381519 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32a2642e-02a6-45f7-aa64-a98d0fc84c01-log-httpd\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.382569 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32a2642e-02a6-45f7-aa64-a98d0fc84c01-run-httpd\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.387209 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-scripts\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.387889 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-db-sync-config-data\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.388324 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-scripts\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.388426 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.388596 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.389417 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-combined-ca-bundle\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.390134 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-config-data\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.390643 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-config-data\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: 
I1008 06:52:36.402661 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s72r\" (UniqueName: \"kubernetes.io/projected/32a2642e-02a6-45f7-aa64-a98d0fc84c01-kube-api-access-4s72r\") pod \"ceilometer-0\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") " pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.404589 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67q2b\" (UniqueName: \"kubernetes.io/projected/4f07f416-2847-46dd-b003-1cb2f1a9dda9-kube-api-access-67q2b\") pod \"cinder-db-sync-rd4bk\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.477583 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.480703 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4gjp\" (UniqueName: \"kubernetes.io/projected/530f954e-7750-4f47-896a-7568c626c8ac-kube-api-access-z4gjp\") pod \"barbican-db-sync-h7mqh\" (UID: \"530f954e-7750-4f47-896a-7568c626c8ac\") " pod="openstack/barbican-db-sync-h7mqh" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.480743 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-config\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.480769 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-config-data\") pod \"placement-db-sync-plzfw\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " 
pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.480787 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-ovsdbserver-nb\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.480817 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gf72\" (UniqueName: \"kubernetes.io/projected/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-kube-api-access-6gf72\") pod \"placement-db-sync-plzfw\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.480834 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-ovsdbserver-sb\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.480858 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-combined-ca-bundle\") pod \"placement-db-sync-plzfw\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.480898 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-logs\") pod \"placement-db-sync-plzfw\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " pod="openstack/placement-db-sync-plzfw" Oct 08 
06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.480960 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvwg6\" (UniqueName: \"kubernetes.io/projected/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-kube-api-access-bvwg6\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.480978 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-scripts\") pod \"placement-db-sync-plzfw\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.481001 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/530f954e-7750-4f47-896a-7568c626c8ac-db-sync-config-data\") pod \"barbican-db-sync-h7mqh\" (UID: \"530f954e-7750-4f47-896a-7568c626c8ac\") " pod="openstack/barbican-db-sync-h7mqh" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.481023 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/530f954e-7750-4f47-896a-7568c626c8ac-combined-ca-bundle\") pod \"barbican-db-sync-h7mqh\" (UID: \"530f954e-7750-4f47-896a-7568c626c8ac\") " pod="openstack/barbican-db-sync-h7mqh" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.481042 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-dns-swift-storage-0\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: 
I1008 06:52:36.481057 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-dns-svc\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.487425 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-logs\") pod \"placement-db-sync-plzfw\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.490371 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/530f954e-7750-4f47-896a-7568c626c8ac-db-sync-config-data\") pod \"barbican-db-sync-h7mqh\" (UID: \"530f954e-7750-4f47-896a-7568c626c8ac\") " pod="openstack/barbican-db-sync-h7mqh" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.490528 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-config-data\") pod \"placement-db-sync-plzfw\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.492219 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-combined-ca-bundle\") pod \"placement-db-sync-plzfw\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.495121 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-scripts\") pod \"placement-db-sync-plzfw\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.495484 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/530f954e-7750-4f47-896a-7568c626c8ac-combined-ca-bundle\") pod \"barbican-db-sync-h7mqh\" (UID: \"530f954e-7750-4f47-896a-7568c626c8ac\") " pod="openstack/barbican-db-sync-h7mqh" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.503545 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.504634 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gf72\" (UniqueName: \"kubernetes.io/projected/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-kube-api-access-6gf72\") pod \"placement-db-sync-plzfw\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.509564 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4gjp\" (UniqueName: \"kubernetes.io/projected/530f954e-7750-4f47-896a-7568c626c8ac-kube-api-access-z4gjp\") pod \"barbican-db-sync-h7mqh\" (UID: \"530f954e-7750-4f47-896a-7568c626c8ac\") " pod="openstack/barbican-db-sync-h7mqh" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.544888 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h7mqh" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.584659 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-config\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.584721 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-ovsdbserver-nb\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.584767 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-ovsdbserver-sb\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.584867 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvwg6\" (UniqueName: \"kubernetes.io/projected/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-kube-api-access-bvwg6\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.584915 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-dns-swift-storage-0\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " 
pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.584937 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-dns-svc\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.586062 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-dns-svc\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.586746 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-config\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.587858 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-ovsdbserver-nb\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.588522 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-ovsdbserver-sb\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.588551 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-dns-swift-storage-0\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.611465 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-plzfw" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.633337 4958 generic.go:334] "Generic (PLEG): container finished" podID="0c2291ff-e9af-4016-a519-1d8251ffa975" containerID="ece93e76e93080675a52102e60d92b1ae621efbede31e3c0111769ac4d2d73cc" exitCode=0 Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.633419 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" event={"ID":"0c2291ff-e9af-4016-a519-1d8251ffa975","Type":"ContainerDied","Data":"ece93e76e93080675a52102e60d92b1ae621efbede31e3c0111769ac4d2d73cc"} Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.650644 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvwg6\" (UniqueName: \"kubernetes.io/projected/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-kube-api-access-bvwg6\") pod \"dnsmasq-dns-7d96c67b5-vhhg5\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.657412 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.707084 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hj87r"] Oct 08 06:52:36 crc kubenswrapper[4958]: W1008 06:52:36.720219 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e6b4ff7_3777_4a13_bbf1_11105eae62de.slice/crio-07a18d282f85626c3911779c8f6e35e26fe09f7bd91d607bff631f863030acae WatchSource:0}: Error finding container 07a18d282f85626c3911779c8f6e35e26fe09f7bd91d607bff631f863030acae: Status 404 returned error can't find the container with id 07a18d282f85626c3911779c8f6e35e26fe09f7bd91d607bff631f863030acae Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.850798 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6877b6c9cc-q5zh2"] Oct 08 06:52:36 crc kubenswrapper[4958]: W1008 06:52:36.858317 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe6d6350_647e_435f_957e_39fe082462bd.slice/crio-24b145dd60fb44cc9b41992484f29bf80f1d9c39bb404468dfc143ad3b9dd53e WatchSource:0}: Error finding container 24b145dd60fb44cc9b41992484f29bf80f1d9c39bb404468dfc143ad3b9dd53e: Status 404 returned error can't find the container with id 24b145dd60fb44cc9b41992484f29bf80f1d9c39bb404468dfc143ad3b9dd53e Oct 08 06:52:36 crc kubenswrapper[4958]: I1008 06:52:36.892919 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mcqsd"] Oct 08 06:52:37 crc kubenswrapper[4958]: I1008 06:52:37.014546 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rd4bk"] Oct 08 06:52:37 crc kubenswrapper[4958]: I1008 06:52:37.130903 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:52:37 crc kubenswrapper[4958]: I1008 06:52:37.259079 
4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-plzfw"] Oct 08 06:52:37 crc kubenswrapper[4958]: W1008 06:52:37.263822 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode19fed95_a8e9_4e26_b7b8_d6d2f105578a.slice/crio-d52c8715f692ee0837ff6fad4b3c27f6af2b65a053dbf02dbfd29de4db5a237f WatchSource:0}: Error finding container d52c8715f692ee0837ff6fad4b3c27f6af2b65a053dbf02dbfd29de4db5a237f: Status 404 returned error can't find the container with id d52c8715f692ee0837ff6fad4b3c27f6af2b65a053dbf02dbfd29de4db5a237f Oct 08 06:52:37 crc kubenswrapper[4958]: I1008 06:52:37.270028 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h7mqh"] Oct 08 06:52:37 crc kubenswrapper[4958]: W1008 06:52:37.279733 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod530f954e_7750_4f47_896a_7568c626c8ac.slice/crio-de6655ff6585800600358b4a9f5799002bb16abc99870ba0f80dfe235a74b52a WatchSource:0}: Error finding container de6655ff6585800600358b4a9f5799002bb16abc99870ba0f80dfe235a74b52a: Status 404 returned error can't find the container with id de6655ff6585800600358b4a9f5799002bb16abc99870ba0f80dfe235a74b52a Oct 08 06:52:37 crc kubenswrapper[4958]: I1008 06:52:37.286093 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d96c67b5-vhhg5"] Oct 08 06:52:37 crc kubenswrapper[4958]: W1008 06:52:37.286300 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7afab53_9ded_43aa_b3ee_cbd88cfb8c1d.slice/crio-771ac5c0cd080aaf775bcc7fb75113c095feb2f86437ce674a80d8a6e99c9825 WatchSource:0}: Error finding container 771ac5c0cd080aaf775bcc7fb75113c095feb2f86437ce674a80d8a6e99c9825: Status 404 returned error can't find the container with id 
771ac5c0cd080aaf775bcc7fb75113c095feb2f86437ce674a80d8a6e99c9825 Oct 08 06:52:37 crc kubenswrapper[4958]: I1008 06:52:37.654940 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32a2642e-02a6-45f7-aa64-a98d0fc84c01","Type":"ContainerStarted","Data":"eb766a2792a999d2c1364ed80804cb655ed7056c20e580edd255f16a63cc1902"} Oct 08 06:52:37 crc kubenswrapper[4958]: I1008 06:52:37.659369 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mcqsd" event={"ID":"ba0793ca-c021-446b-914c-06d31ff87445","Type":"ContainerStarted","Data":"e5334fb729508a7bbf114649e04894d7c29028a49276bad3ac18370aa8c57781"} Oct 08 06:52:37 crc kubenswrapper[4958]: I1008 06:52:37.661054 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hj87r" event={"ID":"2e6b4ff7-3777-4a13-bbf1-11105eae62de","Type":"ContainerStarted","Data":"07a18d282f85626c3911779c8f6e35e26fe09f7bd91d607bff631f863030acae"} Oct 08 06:52:37 crc kubenswrapper[4958]: I1008 06:52:37.662202 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-plzfw" event={"ID":"e19fed95-a8e9-4e26-b7b8-d6d2f105578a","Type":"ContainerStarted","Data":"d52c8715f692ee0837ff6fad4b3c27f6af2b65a053dbf02dbfd29de4db5a237f"} Oct 08 06:52:37 crc kubenswrapper[4958]: I1008 06:52:37.664342 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h7mqh" event={"ID":"530f954e-7750-4f47-896a-7568c626c8ac","Type":"ContainerStarted","Data":"de6655ff6585800600358b4a9f5799002bb16abc99870ba0f80dfe235a74b52a"} Oct 08 06:52:37 crc kubenswrapper[4958]: I1008 06:52:37.666062 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" event={"ID":"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d","Type":"ContainerStarted","Data":"771ac5c0cd080aaf775bcc7fb75113c095feb2f86437ce674a80d8a6e99c9825"} Oct 08 06:52:37 crc kubenswrapper[4958]: I1008 06:52:37.667649 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" event={"ID":"fe6d6350-647e-435f-957e-39fe082462bd","Type":"ContainerStarted","Data":"24b145dd60fb44cc9b41992484f29bf80f1d9c39bb404468dfc143ad3b9dd53e"} Oct 08 06:52:37 crc kubenswrapper[4958]: I1008 06:52:37.669730 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rd4bk" event={"ID":"4f07f416-2847-46dd-b003-1cb2f1a9dda9","Type":"ContainerStarted","Data":"89d8eae2baca6fb08cf89c3b33641f263e1d69ee1823eb5691e15a183b13b809"} Oct 08 06:52:38 crc kubenswrapper[4958]: I1008 06:52:37.999978 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" podUID="0c2291ff-e9af-4016-a519-1d8251ffa975" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Oct 08 06:52:38 crc kubenswrapper[4958]: I1008 06:52:38.257911 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:52:39 crc kubenswrapper[4958]: I1008 06:52:39.713489 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hj87r" event={"ID":"2e6b4ff7-3777-4a13-bbf1-11105eae62de","Type":"ContainerStarted","Data":"5ac0c0d9c6a789604245fcea706bdcea4c122528641faf5d1ca953ccc32728a2"} Oct 08 06:52:39 crc kubenswrapper[4958]: I1008 06:52:39.725553 4958 generic.go:334] "Generic (PLEG): container finished" podID="b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" containerID="cdaaf599d5cacd557cd203a2ab94cf3b122aa0c75b0aa9d2d73d26914f2202ad" exitCode=0 Oct 08 06:52:39 crc kubenswrapper[4958]: I1008 06:52:39.726030 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" event={"ID":"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d","Type":"ContainerDied","Data":"cdaaf599d5cacd557cd203a2ab94cf3b122aa0c75b0aa9d2d73d26914f2202ad"} Oct 08 06:52:39 crc kubenswrapper[4958]: I1008 06:52:39.731789 4958 generic.go:334] "Generic 
(PLEG): container finished" podID="fe6d6350-647e-435f-957e-39fe082462bd" containerID="342e6441b35aca563a9fcfaedd304b25cbeb6672585abeec3b26ba41a33a768d" exitCode=0 Oct 08 06:52:39 crc kubenswrapper[4958]: I1008 06:52:39.732131 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" event={"ID":"fe6d6350-647e-435f-957e-39fe082462bd","Type":"ContainerDied","Data":"342e6441b35aca563a9fcfaedd304b25cbeb6672585abeec3b26ba41a33a768d"} Oct 08 06:52:39 crc kubenswrapper[4958]: I1008 06:52:39.735795 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" event={"ID":"0c2291ff-e9af-4016-a519-1d8251ffa975","Type":"ContainerDied","Data":"4bb99bc17e1f37ff2784de371b08918d6df75b82540263c5361f4a7bb437a2f8"} Oct 08 06:52:39 crc kubenswrapper[4958]: I1008 06:52:39.735921 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bb99bc17e1f37ff2784de371b08918d6df75b82540263c5361f4a7bb437a2f8" Oct 08 06:52:39 crc kubenswrapper[4958]: I1008 06:52:39.737899 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hj87r" podStartSLOduration=4.737890941 podStartE2EDuration="4.737890941s" podCreationTimestamp="2025-10-08 06:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:52:39.734425197 +0000 UTC m=+1102.864117798" watchObservedRunningTime="2025-10-08 06:52:39.737890941 +0000 UTC m=+1102.867583542" Oct 08 06:52:39 crc kubenswrapper[4958]: I1008 06:52:39.762382 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mcqsd" event={"ID":"ba0793ca-c021-446b-914c-06d31ff87445","Type":"ContainerStarted","Data":"ca95eff0d7855e4b9391469f340662ffafa4e08cc2ef525ae33e4fa92980145a"} Oct 08 06:52:39 crc kubenswrapper[4958]: I1008 06:52:39.798566 4958 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/neutron-db-sync-mcqsd" podStartSLOduration=3.798549258 podStartE2EDuration="3.798549258s" podCreationTimestamp="2025-10-08 06:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:52:39.79602224 +0000 UTC m=+1102.925714841" watchObservedRunningTime="2025-10-08 06:52:39.798549258 +0000 UTC m=+1102.928241859" Oct 08 06:52:39 crc kubenswrapper[4958]: I1008 06:52:39.947247 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.099114 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfgn5\" (UniqueName: \"kubernetes.io/projected/0c2291ff-e9af-4016-a519-1d8251ffa975-kube-api-access-nfgn5\") pod \"0c2291ff-e9af-4016-a519-1d8251ffa975\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.099556 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-ovsdbserver-nb\") pod \"0c2291ff-e9af-4016-a519-1d8251ffa975\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.099596 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-dns-swift-storage-0\") pod \"0c2291ff-e9af-4016-a519-1d8251ffa975\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.099633 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-dns-svc\") pod 
\"0c2291ff-e9af-4016-a519-1d8251ffa975\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.099681 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-config\") pod \"0c2291ff-e9af-4016-a519-1d8251ffa975\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.099710 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-ovsdbserver-sb\") pod \"0c2291ff-e9af-4016-a519-1d8251ffa975\" (UID: \"0c2291ff-e9af-4016-a519-1d8251ffa975\") " Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.137293 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c2291ff-e9af-4016-a519-1d8251ffa975-kube-api-access-nfgn5" (OuterVolumeSpecName: "kube-api-access-nfgn5") pod "0c2291ff-e9af-4016-a519-1d8251ffa975" (UID: "0c2291ff-e9af-4016-a519-1d8251ffa975"). InnerVolumeSpecName "kube-api-access-nfgn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.152674 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.178063 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0c2291ff-e9af-4016-a519-1d8251ffa975" (UID: "0c2291ff-e9af-4016-a519-1d8251ffa975"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.189365 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-config" (OuterVolumeSpecName: "config") pod "0c2291ff-e9af-4016-a519-1d8251ffa975" (UID: "0c2291ff-e9af-4016-a519-1d8251ffa975"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.194426 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0c2291ff-e9af-4016-a519-1d8251ffa975" (UID: "0c2291ff-e9af-4016-a519-1d8251ffa975"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.196835 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0c2291ff-e9af-4016-a519-1d8251ffa975" (UID: "0c2291ff-e9af-4016-a519-1d8251ffa975"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.199375 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0c2291ff-e9af-4016-a519-1d8251ffa975" (UID: "0c2291ff-e9af-4016-a519-1d8251ffa975"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.201638 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.201668 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.201688 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.201698 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.201707 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfgn5\" (UniqueName: \"kubernetes.io/projected/0c2291ff-e9af-4016-a519-1d8251ffa975-kube-api-access-nfgn5\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.201717 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c2291ff-e9af-4016-a519-1d8251ffa975-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.302465 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-config\") pod \"fe6d6350-647e-435f-957e-39fe082462bd\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " Oct 
08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.302563 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-ovsdbserver-sb\") pod \"fe6d6350-647e-435f-957e-39fe082462bd\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.302638 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-dns-swift-storage-0\") pod \"fe6d6350-647e-435f-957e-39fe082462bd\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.302705 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-dns-svc\") pod \"fe6d6350-647e-435f-957e-39fe082462bd\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.302735 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt5vg\" (UniqueName: \"kubernetes.io/projected/fe6d6350-647e-435f-957e-39fe082462bd-kube-api-access-jt5vg\") pod \"fe6d6350-647e-435f-957e-39fe082462bd\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.302789 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-ovsdbserver-nb\") pod \"fe6d6350-647e-435f-957e-39fe082462bd\" (UID: \"fe6d6350-647e-435f-957e-39fe082462bd\") " Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.308678 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fe6d6350-647e-435f-957e-39fe082462bd-kube-api-access-jt5vg" (OuterVolumeSpecName: "kube-api-access-jt5vg") pod "fe6d6350-647e-435f-957e-39fe082462bd" (UID: "fe6d6350-647e-435f-957e-39fe082462bd"). InnerVolumeSpecName "kube-api-access-jt5vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.324565 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fe6d6350-647e-435f-957e-39fe082462bd" (UID: "fe6d6350-647e-435f-957e-39fe082462bd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.328565 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe6d6350-647e-435f-957e-39fe082462bd" (UID: "fe6d6350-647e-435f-957e-39fe082462bd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.331326 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-config" (OuterVolumeSpecName: "config") pod "fe6d6350-647e-435f-957e-39fe082462bd" (UID: "fe6d6350-647e-435f-957e-39fe082462bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.331497 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fe6d6350-647e-435f-957e-39fe082462bd" (UID: "fe6d6350-647e-435f-957e-39fe082462bd"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.349689 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fe6d6350-647e-435f-957e-39fe082462bd" (UID: "fe6d6350-647e-435f-957e-39fe082462bd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.404252 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.404286 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.404299 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.404308 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt5vg\" (UniqueName: \"kubernetes.io/projected/fe6d6350-647e-435f-957e-39fe082462bd-kube-api-access-jt5vg\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.404319 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.404327 4958 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe6d6350-647e-435f-957e-39fe082462bd-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.781080 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" event={"ID":"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d","Type":"ContainerStarted","Data":"c41549c80a76c9f1b1d9c4124b11256d2d2f7baf382b8bc027d8bab6e82b075d"} Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.781393 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.784220 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" event={"ID":"fe6d6350-647e-435f-957e-39fe082462bd","Type":"ContainerDied","Data":"24b145dd60fb44cc9b41992484f29bf80f1d9c39bb404468dfc143ad3b9dd53e"} Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.784289 4958 scope.go:117] "RemoveContainer" containerID="342e6441b35aca563a9fcfaedd304b25cbeb6672585abeec3b26ba41a33a768d" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.784479 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6877b6c9cc-q5zh2" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.784513 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-564965cbfc-8fq9b" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.805817 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" podStartSLOduration=4.805798653 podStartE2EDuration="4.805798653s" podCreationTimestamp="2025-10-08 06:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:52:40.796399379 +0000 UTC m=+1103.926091980" watchObservedRunningTime="2025-10-08 06:52:40.805798653 +0000 UTC m=+1103.935491254" Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.829694 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-8fq9b"] Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.835714 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-564965cbfc-8fq9b"] Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.903063 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6877b6c9cc-q5zh2"] Oct 08 06:52:40 crc kubenswrapper[4958]: I1008 06:52:40.905077 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6877b6c9cc-q5zh2"] Oct 08 06:52:41 crc kubenswrapper[4958]: I1008 06:52:41.586818 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2291ff-e9af-4016-a519-1d8251ffa975" path="/var/lib/kubelet/pods/0c2291ff-e9af-4016-a519-1d8251ffa975/volumes" Oct 08 06:52:41 crc kubenswrapper[4958]: I1008 06:52:41.587431 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe6d6350-647e-435f-957e-39fe082462bd" path="/var/lib/kubelet/pods/fe6d6350-647e-435f-957e-39fe082462bd/volumes" Oct 08 06:52:41 crc kubenswrapper[4958]: I1008 06:52:41.797245 4958 generic.go:334] "Generic (PLEG): container finished" podID="f98db36e-5db6-40ec-a540-3b50b4ae0749" 
containerID="feabc94a85f79793c9a91938bc1f0e46c44fc952c2688f0252f2eb939bc20cd7" exitCode=0 Oct 08 06:52:41 crc kubenswrapper[4958]: I1008 06:52:41.797292 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-pjrq5" event={"ID":"f98db36e-5db6-40ec-a540-3b50b4ae0749","Type":"ContainerDied","Data":"feabc94a85f79793c9a91938bc1f0e46c44fc952c2688f0252f2eb939bc20cd7"} Oct 08 06:52:42 crc kubenswrapper[4958]: I1008 06:52:42.809417 4958 generic.go:334] "Generic (PLEG): container finished" podID="2e6b4ff7-3777-4a13-bbf1-11105eae62de" containerID="5ac0c0d9c6a789604245fcea706bdcea4c122528641faf5d1ca953ccc32728a2" exitCode=0 Oct 08 06:52:42 crc kubenswrapper[4958]: I1008 06:52:42.809874 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hj87r" event={"ID":"2e6b4ff7-3777-4a13-bbf1-11105eae62de","Type":"ContainerDied","Data":"5ac0c0d9c6a789604245fcea706bdcea4c122528641faf5d1ca953ccc32728a2"} Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.316689 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.422023 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-config-data\") pod \"f98db36e-5db6-40ec-a540-3b50b4ae0749\" (UID: \"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.422095 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxrl7\" (UniqueName: \"kubernetes.io/projected/f98db36e-5db6-40ec-a540-3b50b4ae0749-kube-api-access-sxrl7\") pod \"f98db36e-5db6-40ec-a540-3b50b4ae0749\" (UID: \"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.422136 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-combined-ca-bundle\") pod \"f98db36e-5db6-40ec-a540-3b50b4ae0749\" (UID: \"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.422282 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-db-sync-config-data\") pod \"f98db36e-5db6-40ec-a540-3b50b4ae0749\" (UID: \"f98db36e-5db6-40ec-a540-3b50b4ae0749\") " Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.429584 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98db36e-5db6-40ec-a540-3b50b4ae0749-kube-api-access-sxrl7" (OuterVolumeSpecName: "kube-api-access-sxrl7") pod "f98db36e-5db6-40ec-a540-3b50b4ae0749" (UID: "f98db36e-5db6-40ec-a540-3b50b4ae0749"). InnerVolumeSpecName "kube-api-access-sxrl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.438092 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f98db36e-5db6-40ec-a540-3b50b4ae0749" (UID: "f98db36e-5db6-40ec-a540-3b50b4ae0749"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.452095 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f98db36e-5db6-40ec-a540-3b50b4ae0749" (UID: "f98db36e-5db6-40ec-a540-3b50b4ae0749"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.483271 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-config-data" (OuterVolumeSpecName: "config-data") pod "f98db36e-5db6-40ec-a540-3b50b4ae0749" (UID: "f98db36e-5db6-40ec-a540-3b50b4ae0749"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.524378 4958 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.524408 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.524417 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxrl7\" (UniqueName: \"kubernetes.io/projected/f98db36e-5db6-40ec-a540-3b50b4ae0749-kube-api-access-sxrl7\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.524427 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98db36e-5db6-40ec-a540-3b50b4ae0749-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.820504 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-pjrq5" Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.820547 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-pjrq5" event={"ID":"f98db36e-5db6-40ec-a540-3b50b4ae0749","Type":"ContainerDied","Data":"8ea5d25eeb8872d001621ff2b19c79a1378df28ebe72545b042e57874346a946"} Oct 08 06:52:43 crc kubenswrapper[4958]: I1008 06:52:43.820573 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ea5d25eeb8872d001621ff2b19c79a1378df28ebe72545b042e57874346a946" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.235630 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d96c67b5-vhhg5"] Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.236609 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" podUID="b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" containerName="dnsmasq-dns" containerID="cri-o://c41549c80a76c9f1b1d9c4124b11256d2d2f7baf382b8bc027d8bab6e82b075d" gracePeriod=10 Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.291481 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-2kqhl"] Oct 08 06:52:44 crc kubenswrapper[4958]: E1008 06:52:44.291791 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2291ff-e9af-4016-a519-1d8251ffa975" containerName="init" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.291804 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2291ff-e9af-4016-a519-1d8251ffa975" containerName="init" Oct 08 06:52:44 crc kubenswrapper[4958]: E1008 06:52:44.291814 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe6d6350-647e-435f-957e-39fe082462bd" containerName="init" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.291820 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe6d6350-647e-435f-957e-39fe082462bd" 
containerName="init" Oct 08 06:52:44 crc kubenswrapper[4958]: E1008 06:52:44.291837 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98db36e-5db6-40ec-a540-3b50b4ae0749" containerName="glance-db-sync" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.291843 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98db36e-5db6-40ec-a540-3b50b4ae0749" containerName="glance-db-sync" Oct 08 06:52:44 crc kubenswrapper[4958]: E1008 06:52:44.291855 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2291ff-e9af-4016-a519-1d8251ffa975" containerName="dnsmasq-dns" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.291861 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2291ff-e9af-4016-a519-1d8251ffa975" containerName="dnsmasq-dns" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.292039 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c2291ff-e9af-4016-a519-1d8251ffa975" containerName="dnsmasq-dns" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.292068 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98db36e-5db6-40ec-a540-3b50b4ae0749" containerName="glance-db-sync" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.292081 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe6d6350-647e-435f-957e-39fe082462bd" containerName="init" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.292854 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.338253 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-2kqhl"] Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.352795 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhnj6\" (UniqueName: \"kubernetes.io/projected/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-kube-api-access-lhnj6\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.352889 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.352929 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.352979 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-config\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.353010 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.353033 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-dns-svc\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.455621 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-config\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.455675 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.455704 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-dns-svc\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.455738 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhnj6\" (UniqueName: 
\"kubernetes.io/projected/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-kube-api-access-lhnj6\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.455807 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.455849 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.456722 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.457247 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-config\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.457731 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.458239 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-dns-svc\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.458708 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.496885 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhnj6\" (UniqueName: \"kubernetes.io/projected/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-kube-api-access-lhnj6\") pod \"dnsmasq-dns-5dc68bd5-2kqhl\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.628018 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.834788 4958 generic.go:334] "Generic (PLEG): container finished" podID="b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" containerID="c41549c80a76c9f1b1d9c4124b11256d2d2f7baf382b8bc027d8bab6e82b075d" exitCode=0 Oct 08 06:52:44 crc kubenswrapper[4958]: I1008 06:52:44.834829 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" event={"ID":"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d","Type":"ContainerDied","Data":"c41549c80a76c9f1b1d9c4124b11256d2d2f7baf382b8bc027d8bab6e82b075d"} Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.115603 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.116847 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.118663 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cqgjb" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.119672 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.120794 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.126371 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.168459 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd5vf\" (UniqueName: \"kubernetes.io/projected/49e0437c-88d8-4fd7-8349-c49fc431071b-kube-api-access-vd5vf\") pod \"glance-default-external-api-0\" (UID: 
\"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.168502 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49e0437c-88d8-4fd7-8349-c49fc431071b-logs\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.168522 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49e0437c-88d8-4fd7-8349-c49fc431071b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.168551 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-scripts\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.169338 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-config-data\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.169441 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.169537 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.270727 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.270775 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd5vf\" (UniqueName: \"kubernetes.io/projected/49e0437c-88d8-4fd7-8349-c49fc431071b-kube-api-access-vd5vf\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.270796 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49e0437c-88d8-4fd7-8349-c49fc431071b-logs\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.270812 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49e0437c-88d8-4fd7-8349-c49fc431071b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" 
Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.270836 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-scripts\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.270921 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-config-data\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.270978 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.271233 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.271312 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49e0437c-88d8-4fd7-8349-c49fc431071b-logs\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.272246 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49e0437c-88d8-4fd7-8349-c49fc431071b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.276394 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.282363 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-scripts\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.285673 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-config-data\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.293601 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd5vf\" (UniqueName: \"kubernetes.io/projected/49e0437c-88d8-4fd7-8349-c49fc431071b-kube-api-access-vd5vf\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.310168 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.441311 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.512074 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.525165 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.525310 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.537203 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.575459 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj6gd\" (UniqueName: \"kubernetes.io/projected/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-kube-api-access-bj6gd\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.575633 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.575722 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-logs\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.575841 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.575884 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-scripts\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.575939 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-config-data\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.576296 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.677414 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bj6gd\" (UniqueName: \"kubernetes.io/projected/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-kube-api-access-bj6gd\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.677491 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.677791 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-logs\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.677856 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.677885 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-scripts\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.677911 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-config-data\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.677929 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.678192 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.678268 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-logs\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.678601 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.681978 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.683321 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-config-data\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.683344 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.695862 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj6gd\" (UniqueName: \"kubernetes.io/projected/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-kube-api-access-bj6gd\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.701167 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:52:45 crc kubenswrapper[4958]: I1008 06:52:45.853254 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 06:52:46 crc kubenswrapper[4958]: I1008 06:52:46.659614 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" podUID="b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: connect: connection refused" Oct 08 06:52:46 crc kubenswrapper[4958]: I1008 06:52:46.752532 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:52:46 crc kubenswrapper[4958]: I1008 06:52:46.818357 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.427907 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.527797 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-scripts\") pod \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.527874 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-combined-ca-bundle\") pod \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.527904 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-credential-keys\") pod \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " Oct 08 06:52:48 crc 
kubenswrapper[4958]: I1008 06:52:48.527943 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-fernet-keys\") pod \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.528037 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-config-data\") pod \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.528980 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4hbd\" (UniqueName: \"kubernetes.io/projected/2e6b4ff7-3777-4a13-bbf1-11105eae62de-kube-api-access-t4hbd\") pod \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\" (UID: \"2e6b4ff7-3777-4a13-bbf1-11105eae62de\") " Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.534650 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2e6b4ff7-3777-4a13-bbf1-11105eae62de" (UID: "2e6b4ff7-3777-4a13-bbf1-11105eae62de"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.534721 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-scripts" (OuterVolumeSpecName: "scripts") pod "2e6b4ff7-3777-4a13-bbf1-11105eae62de" (UID: "2e6b4ff7-3777-4a13-bbf1-11105eae62de"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.535114 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2e6b4ff7-3777-4a13-bbf1-11105eae62de" (UID: "2e6b4ff7-3777-4a13-bbf1-11105eae62de"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.536264 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e6b4ff7-3777-4a13-bbf1-11105eae62de-kube-api-access-t4hbd" (OuterVolumeSpecName: "kube-api-access-t4hbd") pod "2e6b4ff7-3777-4a13-bbf1-11105eae62de" (UID: "2e6b4ff7-3777-4a13-bbf1-11105eae62de"). InnerVolumeSpecName "kube-api-access-t4hbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.562253 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-config-data" (OuterVolumeSpecName: "config-data") pod "2e6b4ff7-3777-4a13-bbf1-11105eae62de" (UID: "2e6b4ff7-3777-4a13-bbf1-11105eae62de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.572663 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e6b4ff7-3777-4a13-bbf1-11105eae62de" (UID: "2e6b4ff7-3777-4a13-bbf1-11105eae62de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.631296 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.631327 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.631336 4958 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.631344 4958 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.631355 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6b4ff7-3777-4a13-bbf1-11105eae62de-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.631365 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4hbd\" (UniqueName: \"kubernetes.io/projected/2e6b4ff7-3777-4a13-bbf1-11105eae62de-kube-api-access-t4hbd\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.880483 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hj87r" event={"ID":"2e6b4ff7-3777-4a13-bbf1-11105eae62de","Type":"ContainerDied","Data":"07a18d282f85626c3911779c8f6e35e26fe09f7bd91d607bff631f863030acae"} Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 
06:52:48.880523 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07a18d282f85626c3911779c8f6e35e26fe09f7bd91d607bff631f863030acae" Oct 08 06:52:48 crc kubenswrapper[4958]: I1008 06:52:48.880561 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hj87r" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.535663 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hj87r"] Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.548509 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hj87r"] Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.593556 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e6b4ff7-3777-4a13-bbf1-11105eae62de" path="/var/lib/kubelet/pods/2e6b4ff7-3777-4a13-bbf1-11105eae62de/volumes" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.634917 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-szq25"] Oct 08 06:52:49 crc kubenswrapper[4958]: E1008 06:52:49.635666 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6b4ff7-3777-4a13-bbf1-11105eae62de" containerName="keystone-bootstrap" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.635685 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6b4ff7-3777-4a13-bbf1-11105eae62de" containerName="keystone-bootstrap" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.635910 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6b4ff7-3777-4a13-bbf1-11105eae62de" containerName="keystone-bootstrap" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.636570 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.639148 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.639551 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.639925 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fhwhd" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.640776 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.643893 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-szq25"] Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.770492 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-credential-keys\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.770559 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-fernet-keys\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.770671 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqwpx\" (UniqueName: \"kubernetes.io/projected/ae41a8dc-cd5b-4b78-a03f-557d76949983-kube-api-access-lqwpx\") pod \"keystone-bootstrap-szq25\" 
(UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.770722 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-config-data\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.770771 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-scripts\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.770787 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-combined-ca-bundle\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.872637 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-combined-ca-bundle\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.872697 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-credential-keys\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " 
pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.872732 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-fernet-keys\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.872849 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqwpx\" (UniqueName: \"kubernetes.io/projected/ae41a8dc-cd5b-4b78-a03f-557d76949983-kube-api-access-lqwpx\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.872882 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-config-data\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.872937 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-scripts\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.879025 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-combined-ca-bundle\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.880726 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-credential-keys\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.881344 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-fernet-keys\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.891256 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-config-data\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.894495 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-scripts\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.899220 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqwpx\" (UniqueName: \"kubernetes.io/projected/ae41a8dc-cd5b-4b78-a03f-557d76949983-kube-api-access-lqwpx\") pod \"keystone-bootstrap-szq25\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:49 crc kubenswrapper[4958]: I1008 06:52:49.960985 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-szq25" Oct 08 06:52:56 crc kubenswrapper[4958]: I1008 06:52:56.659342 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" podUID="b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Oct 08 06:52:56 crc kubenswrapper[4958]: I1008 06:52:56.960844 4958 generic.go:334] "Generic (PLEG): container finished" podID="ba0793ca-c021-446b-914c-06d31ff87445" containerID="ca95eff0d7855e4b9391469f340662ffafa4e08cc2ef525ae33e4fa92980145a" exitCode=0 Oct 08 06:52:56 crc kubenswrapper[4958]: I1008 06:52:56.960933 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mcqsd" event={"ID":"ba0793ca-c021-446b-914c-06d31ff87445","Type":"ContainerDied","Data":"ca95eff0d7855e4b9391469f340662ffafa4e08cc2ef525ae33e4fa92980145a"} Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.580235 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.588854 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mcqsd" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.666326 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvwg6\" (UniqueName: \"kubernetes.io/projected/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-kube-api-access-bvwg6\") pod \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.666379 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-dns-swift-storage-0\") pod \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.666458 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba0793ca-c021-446b-914c-06d31ff87445-config\") pod \"ba0793ca-c021-446b-914c-06d31ff87445\" (UID: \"ba0793ca-c021-446b-914c-06d31ff87445\") " Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.666483 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw7pb\" (UniqueName: \"kubernetes.io/projected/ba0793ca-c021-446b-914c-06d31ff87445-kube-api-access-jw7pb\") pod \"ba0793ca-c021-446b-914c-06d31ff87445\" (UID: \"ba0793ca-c021-446b-914c-06d31ff87445\") " Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.666540 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-dns-svc\") pod \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.666592 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-ovsdbserver-nb\") pod \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.666669 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba0793ca-c021-446b-914c-06d31ff87445-combined-ca-bundle\") pod \"ba0793ca-c021-446b-914c-06d31ff87445\" (UID: \"ba0793ca-c021-446b-914c-06d31ff87445\") " Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.666690 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-config\") pod \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.666711 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-ovsdbserver-sb\") pod \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\" (UID: \"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d\") " Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.690613 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-kube-api-access-bvwg6" (OuterVolumeSpecName: "kube-api-access-bvwg6") pod "b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" (UID: "b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d"). InnerVolumeSpecName "kube-api-access-bvwg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.690747 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba0793ca-c021-446b-914c-06d31ff87445-kube-api-access-jw7pb" (OuterVolumeSpecName: "kube-api-access-jw7pb") pod "ba0793ca-c021-446b-914c-06d31ff87445" (UID: "ba0793ca-c021-446b-914c-06d31ff87445"). InnerVolumeSpecName "kube-api-access-jw7pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.694207 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba0793ca-c021-446b-914c-06d31ff87445-config" (OuterVolumeSpecName: "config") pod "ba0793ca-c021-446b-914c-06d31ff87445" (UID: "ba0793ca-c021-446b-914c-06d31ff87445"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.709830 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba0793ca-c021-446b-914c-06d31ff87445-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba0793ca-c021-446b-914c-06d31ff87445" (UID: "ba0793ca-c021-446b-914c-06d31ff87445"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.713259 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-config" (OuterVolumeSpecName: "config") pod "b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" (UID: "b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.720130 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" (UID: "b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.727753 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" (UID: "b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.739424 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" (UID: "b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.756700 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" (UID: "b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.769235 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.769263 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba0793ca-c021-446b-914c-06d31ff87445-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.769273 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.769284 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.769292 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvwg6\" (UniqueName: \"kubernetes.io/projected/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-kube-api-access-bvwg6\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.769302 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.769310 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ba0793ca-c021-446b-914c-06d31ff87445-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.769319 4958 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw7pb\" (UniqueName: \"kubernetes.io/projected/ba0793ca-c021-446b-914c-06d31ff87445-kube-api-access-jw7pb\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.769328 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.994002 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mcqsd" Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.994002 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mcqsd" event={"ID":"ba0793ca-c021-446b-914c-06d31ff87445","Type":"ContainerDied","Data":"e5334fb729508a7bbf114649e04894d7c29028a49276bad3ac18370aa8c57781"} Oct 08 06:52:58 crc kubenswrapper[4958]: I1008 06:52:58.994150 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5334fb729508a7bbf114649e04894d7c29028a49276bad3ac18370aa8c57781" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.021395 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" event={"ID":"b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d","Type":"ContainerDied","Data":"771ac5c0cd080aaf775bcc7fb75113c095feb2f86437ce674a80d8a6e99c9825"} Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.021442 4958 scope.go:117] "RemoveContainer" containerID="c41549c80a76c9f1b1d9c4124b11256d2d2f7baf382b8bc027d8bab6e82b075d" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.021573 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.086476 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d96c67b5-vhhg5"] Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.093235 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d96c67b5-vhhg5"] Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.255877 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-2kqhl"] Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.296319 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b55c5465-b72j7"] Oct 08 06:52:59 crc kubenswrapper[4958]: E1008 06:52:59.296663 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba0793ca-c021-446b-914c-06d31ff87445" containerName="neutron-db-sync" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.296682 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba0793ca-c021-446b-914c-06d31ff87445" containerName="neutron-db-sync" Oct 08 06:52:59 crc kubenswrapper[4958]: E1008 06:52:59.296703 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" containerName="dnsmasq-dns" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.296708 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" containerName="dnsmasq-dns" Oct 08 06:52:59 crc kubenswrapper[4958]: E1008 06:52:59.296719 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" containerName="init" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.296725 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" containerName="init" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.296896 4958 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" containerName="dnsmasq-dns" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.296906 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba0793ca-c021-446b-914c-06d31ff87445" containerName="neutron-db-sync" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.297771 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.305997 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b55c5465-b72j7"] Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.381504 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-dns-svc\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.381568 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-dns-swift-storage-0\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.381658 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-ovsdbserver-nb\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.381691 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-trl7r\" (UniqueName: \"kubernetes.io/projected/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-kube-api-access-trl7r\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.381722 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-ovsdbserver-sb\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.381754 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-config\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.382431 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59fbcb7b56-6l4nd"] Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.383828 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.386435 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.386640 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.390608 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xzt2f" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.390612 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.393677 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59fbcb7b56-6l4nd"] Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.482732 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-dns-svc\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.482787 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-ovndb-tls-certs\") pod \"neutron-59fbcb7b56-6l4nd\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.482816 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-dns-swift-storage-0\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: 
\"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.482849 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-config\") pod \"neutron-59fbcb7b56-6l4nd\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.482928 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-ovsdbserver-nb\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.482980 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-httpd-config\") pod \"neutron-59fbcb7b56-6l4nd\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.483024 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trl7r\" (UniqueName: \"kubernetes.io/projected/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-kube-api-access-trl7r\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.483071 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zz7c\" (UniqueName: \"kubernetes.io/projected/db18904a-8657-440d-ad39-60d3bb7907c3-kube-api-access-9zz7c\") pod \"neutron-59fbcb7b56-6l4nd\" (UID: 
\"db18904a-8657-440d-ad39-60d3bb7907c3\") " pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.483088 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-ovsdbserver-sb\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.483115 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-combined-ca-bundle\") pod \"neutron-59fbcb7b56-6l4nd\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.483149 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-config\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.483731 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-dns-svc\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.483863 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-config\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 
crc kubenswrapper[4958]: I1008 06:52:59.484277 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-dns-swift-storage-0\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.484415 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-ovsdbserver-nb\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.484894 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-ovsdbserver-sb\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.504752 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trl7r\" (UniqueName: \"kubernetes.io/projected/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-kube-api-access-trl7r\") pod \"dnsmasq-dns-67b55c5465-b72j7\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.584373 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-ovndb-tls-certs\") pod \"neutron-59fbcb7b56-6l4nd\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.584451 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-config\") pod \"neutron-59fbcb7b56-6l4nd\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.584515 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-httpd-config\") pod \"neutron-59fbcb7b56-6l4nd\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.584573 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zz7c\" (UniqueName: \"kubernetes.io/projected/db18904a-8657-440d-ad39-60d3bb7907c3-kube-api-access-9zz7c\") pod \"neutron-59fbcb7b56-6l4nd\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.584607 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-combined-ca-bundle\") pod \"neutron-59fbcb7b56-6l4nd\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.596758 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-config\") pod \"neutron-59fbcb7b56-6l4nd\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.597233 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" 
path="/var/lib/kubelet/pods/b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d/volumes" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.598380 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-httpd-config\") pod \"neutron-59fbcb7b56-6l4nd\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.600082 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-combined-ca-bundle\") pod \"neutron-59fbcb7b56-6l4nd\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.601576 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-ovndb-tls-certs\") pod \"neutron-59fbcb7b56-6l4nd\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.623804 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zz7c\" (UniqueName: \"kubernetes.io/projected/db18904a-8657-440d-ad39-60d3bb7907c3-kube-api-access-9zz7c\") pod \"neutron-59fbcb7b56-6l4nd\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.626544 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:52:59 crc kubenswrapper[4958]: I1008 06:52:59.703998 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:53:00 crc kubenswrapper[4958]: E1008 06:53:00.275362 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:85c75d60e1bd2f8a9ea0a2bb21a8df64c0a6f7b504cc1a05a355981d4b90e92f" Oct 08 06:53:00 crc kubenswrapper[4958]: E1008 06:53:00.275787 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:85c75d60e1bd2f8a9ea0a2bb21a8df64c0a6f7b504cc1a05a355981d4b90e92f,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMoun
t{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67q2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-rd4bk_openstack(4f07f416-2847-46dd-b003-1cb2f1a9dda9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 08 06:53:00 crc kubenswrapper[4958]: E1008 06:53:00.277568 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-rd4bk" podUID="4f07f416-2847-46dd-b003-1cb2f1a9dda9" Oct 08 06:53:00 crc kubenswrapper[4958]: I1008 06:53:00.283193 4958 scope.go:117] "RemoveContainer" containerID="cdaaf599d5cacd557cd203a2ab94cf3b122aa0c75b0aa9d2d73d26914f2202ad" Oct 08 06:53:00 crc kubenswrapper[4958]: I1008 06:53:00.784463 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-2kqhl"] Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.040348 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-plzfw" 
event={"ID":"e19fed95-a8e9-4e26-b7b8-d6d2f105578a","Type":"ContainerStarted","Data":"4312838554efe193aa4b07d7c2a6ac19aed00f4fb393b9586861c3822e7c76fc"} Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.054062 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h7mqh" event={"ID":"530f954e-7750-4f47-896a-7568c626c8ac","Type":"ContainerStarted","Data":"f34eb08efa67b475f0df3b642b635f5ede09883090a37f940dd4419c81dbeb41"} Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.064209 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32a2642e-02a6-45f7-aa64-a98d0fc84c01","Type":"ContainerStarted","Data":"fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74"} Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.065595 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" event={"ID":"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31","Type":"ContainerStarted","Data":"0f439824422790078be9de7effd14a2db12233c53cd420b816e17dac82424b4a"} Oct 08 06:53:01 crc kubenswrapper[4958]: E1008 06:53:01.066928 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:85c75d60e1bd2f8a9ea0a2bb21a8df64c0a6f7b504cc1a05a355981d4b90e92f\\\"\"" pod="openstack/cinder-db-sync-rd4bk" podUID="4f07f416-2847-46dd-b003-1cb2f1a9dda9" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.072546 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-plzfw" podStartSLOduration=3.880416758 podStartE2EDuration="25.072530402s" podCreationTimestamp="2025-10-08 06:52:36 +0000 UTC" firstStartedPulling="2025-10-08 06:52:37.270020196 +0000 UTC m=+1100.399712797" lastFinishedPulling="2025-10-08 06:52:58.46213383 +0000 UTC m=+1121.591826441" 
observedRunningTime="2025-10-08 06:53:01.065286396 +0000 UTC m=+1124.194978997" watchObservedRunningTime="2025-10-08 06:53:01.072530402 +0000 UTC m=+1124.202223003" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.113401 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-h7mqh" podStartSLOduration=2.163946583 podStartE2EDuration="25.113385995s" podCreationTimestamp="2025-10-08 06:52:36 +0000 UTC" firstStartedPulling="2025-10-08 06:52:37.283772097 +0000 UTC m=+1100.413464698" lastFinishedPulling="2025-10-08 06:53:00.233211509 +0000 UTC m=+1123.362904110" observedRunningTime="2025-10-08 06:53:01.110576669 +0000 UTC m=+1124.240269270" watchObservedRunningTime="2025-10-08 06:53:01.113385995 +0000 UTC m=+1124.243078596" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.123289 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b55c5465-b72j7"] Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.144760 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-szq25"] Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.165344 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:53:01 crc kubenswrapper[4958]: W1008 06:53:01.170089 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49e0437c_88d8_4fd7_8349_c49fc431071b.slice/crio-e3de532d51270b12507ddaa9c3d23d3eaa7451af6dbfe6aca7858ffd0dda1a3f WatchSource:0}: Error finding container e3de532d51270b12507ddaa9c3d23d3eaa7451af6dbfe6aca7858ffd0dda1a3f: Status 404 returned error can't find the container with id e3de532d51270b12507ddaa9c3d23d3eaa7451af6dbfe6aca7858ffd0dda1a3f Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.187075 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-845b57c9c7-mn8f6"] Oct 08 06:53:01 crc 
kubenswrapper[4958]: I1008 06:53:01.188788 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.193603 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-845b57c9c7-mn8f6"] Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.198864 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.199099 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.239851 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59fbcb7b56-6l4nd"] Oct 08 06:53:01 crc kubenswrapper[4958]: W1008 06:53:01.330766 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb18904a_8657_440d_ad39_60d3bb7907c3.slice/crio-2fee8e0914ab36e7c347281f5b6c1bc65920db980326ca75145908beaf0cd4e8 WatchSource:0}: Error finding container 2fee8e0914ab36e7c347281f5b6c1bc65920db980326ca75145908beaf0cd4e8: Status 404 returned error can't find the container with id 2fee8e0914ab36e7c347281f5b6c1bc65920db980326ca75145908beaf0cd4e8 Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.341473 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-combined-ca-bundle\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.341519 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-internal-tls-certs\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.341550 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-httpd-config\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.341594 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-ovndb-tls-certs\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.341643 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-config\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.341662 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-public-tls-certs\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.341693 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4xxf\" (UniqueName: 
\"kubernetes.io/projected/a720c647-fed0-4c66-83ed-ab4c03fc68ba-kube-api-access-v4xxf\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.443430 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-internal-tls-certs\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.443786 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-httpd-config\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.443839 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-ovndb-tls-certs\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.443886 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-config\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.443909 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-public-tls-certs\") pod 
\"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.443930 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4xxf\" (UniqueName: \"kubernetes.io/projected/a720c647-fed0-4c66-83ed-ab4c03fc68ba-kube-api-access-v4xxf\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.447283 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-combined-ca-bundle\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.448374 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-internal-tls-certs\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.454578 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-public-tls-certs\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.461496 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-config\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " 
pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.461543 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-ovndb-tls-certs\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.462409 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-httpd-config\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.465031 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4xxf\" (UniqueName: \"kubernetes.io/projected/a720c647-fed0-4c66-83ed-ab4c03fc68ba-kube-api-access-v4xxf\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.476804 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-combined-ca-bundle\") pod \"neutron-845b57c9c7-mn8f6\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.527912 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.656731 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:53:01 crc kubenswrapper[4958]: I1008 06:53:01.660365 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7d96c67b5-vhhg5" podUID="b7afab53-9ded-43aa-b3ee-cbd88cfb8c1d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.107458 4958 generic.go:334] "Generic (PLEG): container finished" podID="484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31" containerID="32865b682afefc99acaf2a94aaeb014f9a76945d17900abfa265c8b8473f5083" exitCode=0 Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.107942 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" event={"ID":"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31","Type":"ContainerDied","Data":"32865b682afefc99acaf2a94aaeb014f9a76945d17900abfa265c8b8473f5083"} Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.108427 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-845b57c9c7-mn8f6"] Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.112542 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"49e0437c-88d8-4fd7-8349-c49fc431071b","Type":"ContainerStarted","Data":"c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93"} Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.112580 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"49e0437c-88d8-4fd7-8349-c49fc431071b","Type":"ContainerStarted","Data":"e3de532d51270b12507ddaa9c3d23d3eaa7451af6dbfe6aca7858ffd0dda1a3f"} Oct 08 06:53:02 crc kubenswrapper[4958]: W1008 06:53:02.113542 4958 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda720c647_fed0_4c66_83ed_ab4c03fc68ba.slice/crio-1e3cc1e6b8f302d7abda1b12847225d2aaa807e0e2e744d0b4ccb0d2f3675385 WatchSource:0}: Error finding container 1e3cc1e6b8f302d7abda1b12847225d2aaa807e0e2e744d0b4ccb0d2f3675385: Status 404 returned error can't find the container with id 1e3cc1e6b8f302d7abda1b12847225d2aaa807e0e2e744d0b4ccb0d2f3675385 Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.114874 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69","Type":"ContainerStarted","Data":"bb0470707e2574aae057d35b508fc392d26ab9f0833629c5606f151a013c0ed4"} Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.123261 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59fbcb7b56-6l4nd" event={"ID":"db18904a-8657-440d-ad39-60d3bb7907c3","Type":"ContainerStarted","Data":"7ae853c3c14d48e5ea5ea57fa011c04cf97badf862157908056c61281c0f50a9"} Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.123314 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59fbcb7b56-6l4nd" event={"ID":"db18904a-8657-440d-ad39-60d3bb7907c3","Type":"ContainerStarted","Data":"d0cc28d4790c4c0860f14e72ecd3d5a93c02b076a73ffb7554b804dd7bcb1b95"} Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.123324 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59fbcb7b56-6l4nd" event={"ID":"db18904a-8657-440d-ad39-60d3bb7907c3","Type":"ContainerStarted","Data":"2fee8e0914ab36e7c347281f5b6c1bc65920db980326ca75145908beaf0cd4e8"} Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.129031 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.132487 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-szq25" event={"ID":"ae41a8dc-cd5b-4b78-a03f-557d76949983","Type":"ContainerStarted","Data":"e59c531bcd48e98457037c11f2f6e70713cbace1cbc0c76772bbc6e06ba70b54"} Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.132505 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-szq25" event={"ID":"ae41a8dc-cd5b-4b78-a03f-557d76949983","Type":"ContainerStarted","Data":"5bfa1ae32123f737e7e1f947c1e26730b784aaf43824c047dd4899305d399dc5"} Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.141995 4958 generic.go:334] "Generic (PLEG): container finished" podID="189707bd-c3af-4922-bbf0-75c3bc5f4ad0" containerID="84b3daba799753ac2900d6b1645f78027c3f9577bd3e76d99a554ef289d16f27" exitCode=0 Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.146789 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b55c5465-b72j7" event={"ID":"189707bd-c3af-4922-bbf0-75c3bc5f4ad0","Type":"ContainerDied","Data":"84b3daba799753ac2900d6b1645f78027c3f9577bd3e76d99a554ef289d16f27"} Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.146842 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b55c5465-b72j7" event={"ID":"189707bd-c3af-4922-bbf0-75c3bc5f4ad0","Type":"ContainerStarted","Data":"92edb415bd6f035b16961c3d5773f0dd855ebb95b5487cfaee0796025274eb99"} Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 06:53:02.153906 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59fbcb7b56-6l4nd" podStartSLOduration=3.153882327 podStartE2EDuration="3.153882327s" podCreationTimestamp="2025-10-08 06:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:02.146543469 +0000 UTC m=+1125.276236070" watchObservedRunningTime="2025-10-08 06:53:02.153882327 +0000 UTC m=+1125.283574928" Oct 08 06:53:02 crc kubenswrapper[4958]: I1008 
06:53:02.189752 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-szq25" podStartSLOduration=13.189735634 podStartE2EDuration="13.189735634s" podCreationTimestamp="2025-10-08 06:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:02.186432295 +0000 UTC m=+1125.316124896" watchObservedRunningTime="2025-10-08 06:53:02.189735634 +0000 UTC m=+1125.319428235" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.153392 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"49e0437c-88d8-4fd7-8349-c49fc431071b","Type":"ContainerStarted","Data":"0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d"} Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.154068 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="49e0437c-88d8-4fd7-8349-c49fc431071b" containerName="glance-log" containerID="cri-o://c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93" gracePeriod=30 Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.154495 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="49e0437c-88d8-4fd7-8349-c49fc431071b" containerName="glance-httpd" containerID="cri-o://0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d" gracePeriod=30 Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.163939 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-845b57c9c7-mn8f6" event={"ID":"a720c647-fed0-4c66-83ed-ab4c03fc68ba","Type":"ContainerStarted","Data":"84cc28a7bcc9b6378dbaf0a2893e4b30dd9ff13ea289f971aaa533e40205724c"} Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.163988 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-845b57c9c7-mn8f6" event={"ID":"a720c647-fed0-4c66-83ed-ab4c03fc68ba","Type":"ContainerStarted","Data":"1e3cc1e6b8f302d7abda1b12847225d2aaa807e0e2e744d0b4ccb0d2f3675385"} Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.185723 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69","Type":"ContainerStarted","Data":"73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9"} Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.195199 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b55c5465-b72j7" event={"ID":"189707bd-c3af-4922-bbf0-75c3bc5f4ad0","Type":"ContainerStarted","Data":"59c51bd39db41e729e27ea3ea7f82e9cdce3d659d33fd86f33a81a185da47cd0"} Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.195334 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.198022 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" event={"ID":"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31","Type":"ContainerDied","Data":"0f439824422790078be9de7effd14a2db12233c53cd420b816e17dac82424b4a"} Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.198057 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f439824422790078be9de7effd14a2db12233c53cd420b816e17dac82424b4a" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.226862 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=19.226836295 podStartE2EDuration="19.226836295s" podCreationTimestamp="2025-10-08 06:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:03.185013486 +0000 UTC 
m=+1126.314706087" watchObservedRunningTime="2025-10-08 06:53:03.226836295 +0000 UTC m=+1126.356528896" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.232974 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b55c5465-b72j7" podStartSLOduration=4.23293126 podStartE2EDuration="4.23293126s" podCreationTimestamp="2025-10-08 06:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:03.219358363 +0000 UTC m=+1126.349050964" watchObservedRunningTime="2025-10-08 06:53:03.23293126 +0000 UTC m=+1126.362623861" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.261711 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.390332 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-ovsdbserver-sb\") pod \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.390441 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-config\") pod \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.390491 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-dns-swift-storage-0\") pod \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.390527 4958 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-dns-svc\") pod \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.390558 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhnj6\" (UniqueName: \"kubernetes.io/projected/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-kube-api-access-lhnj6\") pod \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.390644 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-ovsdbserver-nb\") pod \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\" (UID: \"484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31\") " Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.404115 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-kube-api-access-lhnj6" (OuterVolumeSpecName: "kube-api-access-lhnj6") pod "484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31" (UID: "484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31"). InnerVolumeSpecName "kube-api-access-lhnj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.418487 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-config" (OuterVolumeSpecName: "config") pod "484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31" (UID: "484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.420388 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31" (UID: "484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.421529 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31" (UID: "484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.421735 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31" (UID: "484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.429431 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31" (UID: "484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.492777 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.492804 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.492814 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.492822 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhnj6\" (UniqueName: \"kubernetes.io/projected/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-kube-api-access-lhnj6\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.492831 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.492839 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.831410 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.903419 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-scripts\") pod \"49e0437c-88d8-4fd7-8349-c49fc431071b\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.903495 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"49e0437c-88d8-4fd7-8349-c49fc431071b\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.903541 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd5vf\" (UniqueName: \"kubernetes.io/projected/49e0437c-88d8-4fd7-8349-c49fc431071b-kube-api-access-vd5vf\") pod \"49e0437c-88d8-4fd7-8349-c49fc431071b\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.903751 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-config-data\") pod \"49e0437c-88d8-4fd7-8349-c49fc431071b\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.903782 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49e0437c-88d8-4fd7-8349-c49fc431071b-logs\") pod \"49e0437c-88d8-4fd7-8349-c49fc431071b\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.903822 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-combined-ca-bundle\") pod \"49e0437c-88d8-4fd7-8349-c49fc431071b\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.903853 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49e0437c-88d8-4fd7-8349-c49fc431071b-httpd-run\") pod \"49e0437c-88d8-4fd7-8349-c49fc431071b\" (UID: \"49e0437c-88d8-4fd7-8349-c49fc431071b\") " Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.904771 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49e0437c-88d8-4fd7-8349-c49fc431071b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "49e0437c-88d8-4fd7-8349-c49fc431071b" (UID: "49e0437c-88d8-4fd7-8349-c49fc431071b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.905526 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49e0437c-88d8-4fd7-8349-c49fc431071b-logs" (OuterVolumeSpecName: "logs") pod "49e0437c-88d8-4fd7-8349-c49fc431071b" (UID: "49e0437c-88d8-4fd7-8349-c49fc431071b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.912118 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49e0437c-88d8-4fd7-8349-c49fc431071b-kube-api-access-vd5vf" (OuterVolumeSpecName: "kube-api-access-vd5vf") pod "49e0437c-88d8-4fd7-8349-c49fc431071b" (UID: "49e0437c-88d8-4fd7-8349-c49fc431071b"). InnerVolumeSpecName "kube-api-access-vd5vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.924063 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-scripts" (OuterVolumeSpecName: "scripts") pod "49e0437c-88d8-4fd7-8349-c49fc431071b" (UID: "49e0437c-88d8-4fd7-8349-c49fc431071b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.928079 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "49e0437c-88d8-4fd7-8349-c49fc431071b" (UID: "49e0437c-88d8-4fd7-8349-c49fc431071b"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.945624 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49e0437c-88d8-4fd7-8349-c49fc431071b" (UID: "49e0437c-88d8-4fd7-8349-c49fc431071b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:03 crc kubenswrapper[4958]: I1008 06:53:03.952394 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-config-data" (OuterVolumeSpecName: "config-data") pod "49e0437c-88d8-4fd7-8349-c49fc431071b" (UID: "49e0437c-88d8-4fd7-8349-c49fc431071b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.006912 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.006939 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd5vf\" (UniqueName: \"kubernetes.io/projected/49e0437c-88d8-4fd7-8349-c49fc431071b-kube-api-access-vd5vf\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.006966 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.006976 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49e0437c-88d8-4fd7-8349-c49fc431071b-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.007347 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.008258 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49e0437c-88d8-4fd7-8349-c49fc431071b-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.008274 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49e0437c-88d8-4fd7-8349-c49fc431071b-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.025512 4958 operation_generator.go:917] UnmountDevice succeeded 
for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.109585 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.206957 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69","Type":"ContainerStarted","Data":"080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05"} Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.207100 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" containerName="glance-log" containerID="cri-o://73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9" gracePeriod=30 Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.207550 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" containerName="glance-httpd" containerID="cri-o://080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05" gracePeriod=30 Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.211065 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-845b57c9c7-mn8f6" event={"ID":"a720c647-fed0-4c66-83ed-ab4c03fc68ba","Type":"ContainerStarted","Data":"e4bcd5f4d6a4cf4432ebda09962729c94754a433d4babeeddaccaced2bd1b419"} Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.211349 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.212812 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"32a2642e-02a6-45f7-aa64-a98d0fc84c01","Type":"ContainerStarted","Data":"1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c"} Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.214281 4958 generic.go:334] "Generic (PLEG): container finished" podID="e19fed95-a8e9-4e26-b7b8-d6d2f105578a" containerID="4312838554efe193aa4b07d7c2a6ac19aed00f4fb393b9586861c3822e7c76fc" exitCode=0 Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.214401 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-plzfw" event={"ID":"e19fed95-a8e9-4e26-b7b8-d6d2f105578a","Type":"ContainerDied","Data":"4312838554efe193aa4b07d7c2a6ac19aed00f4fb393b9586861c3822e7c76fc"} Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.227833 4958 generic.go:334] "Generic (PLEG): container finished" podID="49e0437c-88d8-4fd7-8349-c49fc431071b" containerID="0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d" exitCode=0 Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.228086 4958 generic.go:334] "Generic (PLEG): container finished" podID="49e0437c-88d8-4fd7-8349-c49fc431071b" containerID="c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93" exitCode=143 Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.228148 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc68bd5-2kqhl" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.228000 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.228004 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"49e0437c-88d8-4fd7-8349-c49fc431071b","Type":"ContainerDied","Data":"0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d"} Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.228868 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"49e0437c-88d8-4fd7-8349-c49fc431071b","Type":"ContainerDied","Data":"c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93"} Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.228892 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"49e0437c-88d8-4fd7-8349-c49fc431071b","Type":"ContainerDied","Data":"e3de532d51270b12507ddaa9c3d23d3eaa7451af6dbfe6aca7858ffd0dda1a3f"} Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.228914 4958 scope.go:117] "RemoveContainer" containerID="0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.283928 4958 scope.go:117] "RemoveContainer" containerID="c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.308343 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=20.308322463 podStartE2EDuration="20.308322463s" podCreationTimestamp="2025-10-08 06:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:04.2504364 +0000 UTC m=+1127.380129001" watchObservedRunningTime="2025-10-08 06:53:04.308322463 +0000 UTC m=+1127.438015064" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 
06:53:04.321105 4958 scope.go:117] "RemoveContainer" containerID="0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d" Oct 08 06:53:04 crc kubenswrapper[4958]: E1008 06:53:04.321672 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d\": container with ID starting with 0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d not found: ID does not exist" containerID="0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.321717 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d"} err="failed to get container status \"0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d\": rpc error: code = NotFound desc = could not find container \"0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d\": container with ID starting with 0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d not found: ID does not exist" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.321741 4958 scope.go:117] "RemoveContainer" containerID="c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93" Oct 08 06:53:04 crc kubenswrapper[4958]: E1008 06:53:04.322019 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93\": container with ID starting with c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93 not found: ID does not exist" containerID="c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.322043 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93"} err="failed to get container status \"c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93\": rpc error: code = NotFound desc = could not find container \"c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93\": container with ID starting with c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93 not found: ID does not exist" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.322056 4958 scope.go:117] "RemoveContainer" containerID="0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.322226 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d"} err="failed to get container status \"0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d\": rpc error: code = NotFound desc = could not find container \"0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d\": container with ID starting with 0eb39479c4e91fb04e62aad11a55e433f63fb7dba297c85cc66de1b4eb04ba4d not found: ID does not exist" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.322240 4958 scope.go:117] "RemoveContainer" containerID="c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.325514 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93"} err="failed to get container status \"c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93\": rpc error: code = NotFound desc = could not find container \"c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93\": container with ID starting with c4399a2e5a64eb95b05489c220a1bf8976e87321e580753f50e3c28459c9ec93 not found: ID does not 
exist" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.366742 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.379970 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.393434 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:53:04 crc kubenswrapper[4958]: E1008 06:53:04.393845 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e0437c-88d8-4fd7-8349-c49fc431071b" containerName="glance-httpd" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.393865 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e0437c-88d8-4fd7-8349-c49fc431071b" containerName="glance-httpd" Oct 08 06:53:04 crc kubenswrapper[4958]: E1008 06:53:04.393881 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e0437c-88d8-4fd7-8349-c49fc431071b" containerName="glance-log" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.393887 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e0437c-88d8-4fd7-8349-c49fc431071b" containerName="glance-log" Oct 08 06:53:04 crc kubenswrapper[4958]: E1008 06:53:04.393901 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31" containerName="init" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.393907 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31" containerName="init" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.394101 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="49e0437c-88d8-4fd7-8349-c49fc431071b" containerName="glance-httpd" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.394117 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="49e0437c-88d8-4fd7-8349-c49fc431071b" containerName="glance-log" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.394133 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31" containerName="init" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.395028 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.395636 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.396237 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-845b57c9c7-mn8f6" podStartSLOduration=3.396220815 podStartE2EDuration="3.396220815s" podCreationTimestamp="2025-10-08 06:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:04.322027883 +0000 UTC m=+1127.451720494" watchObservedRunningTime="2025-10-08 06:53:04.396220815 +0000 UTC m=+1127.525913406" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.396926 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.398323 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.429113 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-2kqhl"] Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.435903 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc68bd5-2kqhl"] Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.520446 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwt6v\" (UniqueName: \"kubernetes.io/projected/f10a7792-91b1-4de9-83de-17620cd80909-kube-api-access-rwt6v\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.520493 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-config-data\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.520572 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.520603 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.520634 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-scripts\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 
06:53:04.520710 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f10a7792-91b1-4de9-83de-17620cd80909-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.520742 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10a7792-91b1-4de9-83de-17620cd80909-logs\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.520759 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.621700 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.621753 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.621779 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-scripts\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.622158 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f10a7792-91b1-4de9-83de-17620cd80909-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.622575 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f10a7792-91b1-4de9-83de-17620cd80909-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.622695 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.622721 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10a7792-91b1-4de9-83de-17620cd80909-logs\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.623052 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.623189 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwt6v\" (UniqueName: \"kubernetes.io/projected/f10a7792-91b1-4de9-83de-17620cd80909-kube-api-access-rwt6v\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.623200 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10a7792-91b1-4de9-83de-17620cd80909-logs\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.623236 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-config-data\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.627092 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.627144 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.633187 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.635732 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-config-data\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.640583 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwt6v\" (UniqueName: \"kubernetes.io/projected/f10a7792-91b1-4de9-83de-17620cd80909-kube-api-access-rwt6v\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.658099 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " pod="openstack/glance-default-external-api-0" Oct 08 06:53:04 crc kubenswrapper[4958]: I1008 06:53:04.708751 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.198618 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.241162 4958 generic.go:334] "Generic (PLEG): container finished" podID="530f954e-7750-4f47-896a-7568c626c8ac" containerID="f34eb08efa67b475f0df3b642b635f5ede09883090a37f940dd4419c81dbeb41" exitCode=0 Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.241226 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h7mqh" event={"ID":"530f954e-7750-4f47-896a-7568c626c8ac","Type":"ContainerDied","Data":"f34eb08efa67b475f0df3b642b635f5ede09883090a37f940dd4419c81dbeb41"} Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.263053 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.265957 4958 generic.go:334] "Generic (PLEG): container finished" podID="11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" containerID="080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05" exitCode=0 Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.265987 4958 generic.go:334] "Generic (PLEG): container finished" podID="11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" containerID="73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9" exitCode=143 Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.266018 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.266059 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69","Type":"ContainerDied","Data":"080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05"} Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.266175 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69","Type":"ContainerDied","Data":"73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9"} Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.266227 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69","Type":"ContainerDied","Data":"bb0470707e2574aae057d35b508fc392d26ab9f0833629c5606f151a013c0ed4"} Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.266253 4958 scope.go:117] "RemoveContainer" containerID="080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.267804 4958 generic.go:334] "Generic (PLEG): container finished" podID="ae41a8dc-cd5b-4b78-a03f-557d76949983" containerID="e59c531bcd48e98457037c11f2f6e70713cbace1cbc0c76772bbc6e06ba70b54" exitCode=0 Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.268845 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-szq25" event={"ID":"ae41a8dc-cd5b-4b78-a03f-557d76949983","Type":"ContainerDied","Data":"e59c531bcd48e98457037c11f2f6e70713cbace1cbc0c76772bbc6e06ba70b54"} Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.297404 4958 scope.go:117] "RemoveContainer" containerID="73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.338472 4958 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-scripts\") pod \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.338561 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-httpd-run\") pod \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.338623 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-combined-ca-bundle\") pod \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.338649 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj6gd\" (UniqueName: \"kubernetes.io/projected/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-kube-api-access-bj6gd\") pod \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.338921 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.338992 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-logs\") pod \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " Oct 08 
06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.339010 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-config-data\") pod \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\" (UID: \"11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69\") " Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.339314 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" (UID: "11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.339846 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.340911 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-logs" (OuterVolumeSpecName: "logs") pod "11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" (UID: "11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.344916 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-kube-api-access-bj6gd" (OuterVolumeSpecName: "kube-api-access-bj6gd") pod "11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" (UID: "11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69"). InnerVolumeSpecName "kube-api-access-bj6gd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.345121 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-scripts" (OuterVolumeSpecName: "scripts") pod "11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" (UID: "11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.353025 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" (UID: "11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.353078 4958 scope.go:117] "RemoveContainer" containerID="080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05" Oct 08 06:53:05 crc kubenswrapper[4958]: E1008 06:53:05.353827 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05\": container with ID starting with 080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05 not found: ID does not exist" containerID="080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.353864 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05"} err="failed to get container status \"080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05\": rpc error: code = NotFound desc = could not find container \"080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05\": 
container with ID starting with 080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05 not found: ID does not exist" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.353889 4958 scope.go:117] "RemoveContainer" containerID="73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9" Oct 08 06:53:05 crc kubenswrapper[4958]: E1008 06:53:05.354347 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9\": container with ID starting with 73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9 not found: ID does not exist" containerID="73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.354377 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9"} err="failed to get container status \"73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9\": rpc error: code = NotFound desc = could not find container \"73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9\": container with ID starting with 73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9 not found: ID does not exist" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.354403 4958 scope.go:117] "RemoveContainer" containerID="080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.354637 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05"} err="failed to get container status \"080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05\": rpc error: code = NotFound desc = could not find container 
\"080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05\": container with ID starting with 080a18989678b87c35d144f02b71f2bd5a30864794b8662e48c003f969ab6f05 not found: ID does not exist" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.354650 4958 scope.go:117] "RemoveContainer" containerID="73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.354806 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9"} err="failed to get container status \"73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9\": rpc error: code = NotFound desc = could not find container \"73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9\": container with ID starting with 73e58fc8f3c440bd2f06650186f103ec1d1a4f6cbcd6cbd0e9dc6bbfd352f4f9 not found: ID does not exist" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.373601 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" (UID: "11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.407841 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-config-data" (OuterVolumeSpecName: "config-data") pod "11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" (UID: "11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.442269 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.442309 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.442323 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj6gd\" (UniqueName: \"kubernetes.io/projected/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-kube-api-access-bj6gd\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.442350 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.442360 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.442373 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.460559 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.543515 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.592430 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31" path="/var/lib/kubelet/pods/484a3bc5-5d0f-4bc9-86d4-32fbe5b15a31/volumes" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.601265 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49e0437c-88d8-4fd7-8349-c49fc431071b" path="/var/lib/kubelet/pods/49e0437c-88d8-4fd7-8349-c49fc431071b/volumes" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.642310 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-plzfw" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.676648 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.719835 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.737083 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:53:05 crc kubenswrapper[4958]: E1008 06:53:05.737496 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" containerName="glance-httpd" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.737516 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" containerName="glance-httpd" Oct 08 06:53:05 crc kubenswrapper[4958]: E1008 06:53:05.737532 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e19fed95-a8e9-4e26-b7b8-d6d2f105578a" containerName="placement-db-sync" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.737538 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e19fed95-a8e9-4e26-b7b8-d6d2f105578a" containerName="placement-db-sync" Oct 08 06:53:05 crc kubenswrapper[4958]: E1008 06:53:05.737560 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" containerName="glance-log" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.737566 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" containerName="glance-log" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.737724 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e19fed95-a8e9-4e26-b7b8-d6d2f105578a" containerName="placement-db-sync" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.737751 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" containerName="glance-httpd" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.737763 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" containerName="glance-log" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.738893 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.741744 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.741821 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.745667 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-logs\") pod \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.745706 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-combined-ca-bundle\") pod \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.745834 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gf72\" (UniqueName: \"kubernetes.io/projected/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-kube-api-access-6gf72\") pod \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.745869 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-config-data\") pod \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.745893 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-scripts\") pod \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\" (UID: \"e19fed95-a8e9-4e26-b7b8-d6d2f105578a\") " Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.748638 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-logs" (OuterVolumeSpecName: "logs") pod "e19fed95-a8e9-4e26-b7b8-d6d2f105578a" (UID: "e19fed95-a8e9-4e26-b7b8-d6d2f105578a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.757874 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-scripts" (OuterVolumeSpecName: "scripts") pod "e19fed95-a8e9-4e26-b7b8-d6d2f105578a" (UID: "e19fed95-a8e9-4e26-b7b8-d6d2f105578a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.763998 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.770989 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-config-data" (OuterVolumeSpecName: "config-data") pod "e19fed95-a8e9-4e26-b7b8-d6d2f105578a" (UID: "e19fed95-a8e9-4e26-b7b8-d6d2f105578a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.772213 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-kube-api-access-6gf72" (OuterVolumeSpecName: "kube-api-access-6gf72") pod "e19fed95-a8e9-4e26-b7b8-d6d2f105578a" (UID: "e19fed95-a8e9-4e26-b7b8-d6d2f105578a"). 
InnerVolumeSpecName "kube-api-access-6gf72". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.782006 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e19fed95-a8e9-4e26-b7b8-d6d2f105578a" (UID: "e19fed95-a8e9-4e26-b7b8-d6d2f105578a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.847657 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3134ca44-2587-4cc5-8931-fbd4f9129411-logs\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.847693 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.847713 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xznwb\" (UniqueName: \"kubernetes.io/projected/3134ca44-2587-4cc5-8931-fbd4f9129411-kube-api-access-xznwb\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.847827 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.847997 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3134ca44-2587-4cc5-8931-fbd4f9129411-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.848076 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.848251 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.848305 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.848473 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gf72\" (UniqueName: 
\"kubernetes.io/projected/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-kube-api-access-6gf72\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.848492 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.848503 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.848512 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.848523 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19fed95-a8e9-4e26-b7b8-d6d2f105578a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.950349 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3134ca44-2587-4cc5-8931-fbd4f9129411-logs\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.950385 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.950407 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xznwb\" (UniqueName: \"kubernetes.io/projected/3134ca44-2587-4cc5-8931-fbd4f9129411-kube-api-access-xznwb\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.950439 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.950471 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3134ca44-2587-4cc5-8931-fbd4f9129411-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.950516 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.950565 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.950583 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.951204 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.951350 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3134ca44-2587-4cc5-8931-fbd4f9129411-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.952018 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3134ca44-2587-4cc5-8931-fbd4f9129411-logs\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.957874 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.958689 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.958725 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.959628 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.967660 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xznwb\" (UniqueName: \"kubernetes.io/projected/3134ca44-2587-4cc5-8931-fbd4f9129411-kube-api-access-xznwb\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:05 crc kubenswrapper[4958]: I1008 06:53:05.982531 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.131701 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.292372 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-plzfw" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.292370 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-plzfw" event={"ID":"e19fed95-a8e9-4e26-b7b8-d6d2f105578a","Type":"ContainerDied","Data":"d52c8715f692ee0837ff6fad4b3c27f6af2b65a053dbf02dbfd29de4db5a237f"} Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.295688 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d52c8715f692ee0837ff6fad4b3c27f6af2b65a053dbf02dbfd29de4db5a237f" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.318497 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f10a7792-91b1-4de9-83de-17620cd80909","Type":"ContainerStarted","Data":"eadade12ebdf006b64d12a511327e82f850ce005e67ef0d84378dd67275dfcf1"} Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.318531 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f10a7792-91b1-4de9-83de-17620cd80909","Type":"ContainerStarted","Data":"2781cdb8909020985403459f3bcb61d17d2be7c2704754fa891857633e804e50"} Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.444737 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-687454697b-jdsn4"] Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.446565 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.454978 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.455129 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-pt74v" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.455276 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.455474 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.455593 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.472864 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-687454697b-jdsn4"] Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.561063 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-config-data\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.561375 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-internal-tls-certs\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.561404 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-combined-ca-bundle\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.561447 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e48003-108c-4de3-be7e-81946556e25e-logs\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.561481 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-public-tls-certs\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.561505 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkmnx\" (UniqueName: \"kubernetes.io/projected/70e48003-108c-4de3-be7e-81946556e25e-kube-api-access-jkmnx\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.561590 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-scripts\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.662694 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e48003-108c-4de3-be7e-81946556e25e-logs\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.662901 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-public-tls-certs\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.662997 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkmnx\" (UniqueName: \"kubernetes.io/projected/70e48003-108c-4de3-be7e-81946556e25e-kube-api-access-jkmnx\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.663027 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e48003-108c-4de3-be7e-81946556e25e-logs\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.663247 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-scripts\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.663371 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-config-data\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.663394 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-internal-tls-certs\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.663418 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-combined-ca-bundle\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.670558 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-internal-tls-certs\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.684419 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-public-tls-certs\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.686121 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkmnx\" (UniqueName: 
\"kubernetes.io/projected/70e48003-108c-4de3-be7e-81946556e25e-kube-api-access-jkmnx\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.686570 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-combined-ca-bundle\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.689806 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-config-data\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.690415 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.691418 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-scripts\") pod \"placement-687454697b-jdsn4\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:06 crc kubenswrapper[4958]: I1008 06:53:06.827099 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:07 crc kubenswrapper[4958]: I1008 06:53:07.328998 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f10a7792-91b1-4de9-83de-17620cd80909","Type":"ContainerStarted","Data":"a5aef7462a0702fb6dc978eeb4e6bf4bd9c3adc532801d1fc0376544330017df"} Oct 08 06:53:07 crc kubenswrapper[4958]: I1008 06:53:07.360356 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.360332703 podStartE2EDuration="3.360332703s" podCreationTimestamp="2025-10-08 06:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:07.350894408 +0000 UTC m=+1130.480587029" watchObservedRunningTime="2025-10-08 06:53:07.360332703 +0000 UTC m=+1130.490025304" Oct 08 06:53:07 crc kubenswrapper[4958]: I1008 06:53:07.604364 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69" path="/var/lib/kubelet/pods/11fc6c2e-2c0b-4a74-9ffe-70d2617d8c69/volumes" Oct 08 06:53:08 crc kubenswrapper[4958]: W1008 06:53:08.049894 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3134ca44_2587_4cc5_8931_fbd4f9129411.slice/crio-492b3765871fd7f6fb4a76c63146710aa5c533e8ba1c90f6d73eda15a73670f3 WatchSource:0}: Error finding container 492b3765871fd7f6fb4a76c63146710aa5c533e8ba1c90f6d73eda15a73670f3: Status 404 returned error can't find the container with id 492b3765871fd7f6fb4a76c63146710aa5c533e8ba1c90f6d73eda15a73670f3 Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.151622 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-szq25" Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.189377 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqwpx\" (UniqueName: \"kubernetes.io/projected/ae41a8dc-cd5b-4b78-a03f-557d76949983-kube-api-access-lqwpx\") pod \"ae41a8dc-cd5b-4b78-a03f-557d76949983\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.189637 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-scripts\") pod \"ae41a8dc-cd5b-4b78-a03f-557d76949983\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.189701 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-credential-keys\") pod \"ae41a8dc-cd5b-4b78-a03f-557d76949983\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.189722 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-fernet-keys\") pod \"ae41a8dc-cd5b-4b78-a03f-557d76949983\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.189738 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-config-data\") pod \"ae41a8dc-cd5b-4b78-a03f-557d76949983\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.189766 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-combined-ca-bundle\") pod \"ae41a8dc-cd5b-4b78-a03f-557d76949983\" (UID: \"ae41a8dc-cd5b-4b78-a03f-557d76949983\") " Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.197440 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ae41a8dc-cd5b-4b78-a03f-557d76949983" (UID: "ae41a8dc-cd5b-4b78-a03f-557d76949983"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.197484 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ae41a8dc-cd5b-4b78-a03f-557d76949983" (UID: "ae41a8dc-cd5b-4b78-a03f-557d76949983"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.197511 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae41a8dc-cd5b-4b78-a03f-557d76949983-kube-api-access-lqwpx" (OuterVolumeSpecName: "kube-api-access-lqwpx") pod "ae41a8dc-cd5b-4b78-a03f-557d76949983" (UID: "ae41a8dc-cd5b-4b78-a03f-557d76949983"). InnerVolumeSpecName "kube-api-access-lqwpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.203290 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-scripts" (OuterVolumeSpecName: "scripts") pod "ae41a8dc-cd5b-4b78-a03f-557d76949983" (UID: "ae41a8dc-cd5b-4b78-a03f-557d76949983"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.220120 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae41a8dc-cd5b-4b78-a03f-557d76949983" (UID: "ae41a8dc-cd5b-4b78-a03f-557d76949983"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.229088 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-config-data" (OuterVolumeSpecName: "config-data") pod "ae41a8dc-cd5b-4b78-a03f-557d76949983" (UID: "ae41a8dc-cd5b-4b78-a03f-557d76949983"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.292055 4958 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.292111 4958 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.292131 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.292150 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 
06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.292170 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqwpx\" (UniqueName: \"kubernetes.io/projected/ae41a8dc-cd5b-4b78-a03f-557d76949983-kube-api-access-lqwpx\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.292192 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae41a8dc-cd5b-4b78-a03f-557d76949983-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.341498 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-szq25" Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.343210 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-szq25" event={"ID":"ae41a8dc-cd5b-4b78-a03f-557d76949983","Type":"ContainerDied","Data":"5bfa1ae32123f737e7e1f947c1e26730b784aaf43824c047dd4899305d399dc5"} Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.343253 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bfa1ae32123f737e7e1f947c1e26730b784aaf43824c047dd4899305d399dc5" Oct 08 06:53:08 crc kubenswrapper[4958]: I1008 06:53:08.346065 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3134ca44-2587-4cc5-8931-fbd4f9129411","Type":"ContainerStarted","Data":"492b3765871fd7f6fb4a76c63146710aa5c533e8ba1c90f6d73eda15a73670f3"} Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.247047 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-56b6f8956c-65c6t"] Oct 08 06:53:09 crc kubenswrapper[4958]: E1008 06:53:09.247380 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae41a8dc-cd5b-4b78-a03f-557d76949983" containerName="keystone-bootstrap" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.247393 4958 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ae41a8dc-cd5b-4b78-a03f-557d76949983" containerName="keystone-bootstrap" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.247572 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae41a8dc-cd5b-4b78-a03f-557d76949983" containerName="keystone-bootstrap" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.248141 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.250789 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.251294 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fhwhd" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.251488 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.251609 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.251747 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.251765 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.267537 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56b6f8956c-65c6t"] Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.309573 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-scripts\") pod \"keystone-56b6f8956c-65c6t\" (UID: 
\"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.309741 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-fernet-keys\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.309849 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-public-tls-certs\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.309893 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74gmw\" (UniqueName: \"kubernetes.io/projected/46f633ea-236e-46e7-a780-a9912bbd2c91-kube-api-access-74gmw\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.309941 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-internal-tls-certs\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.309994 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-config-data\") pod 
\"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.310129 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-credential-keys\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.310225 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-combined-ca-bundle\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.411619 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74gmw\" (UniqueName: \"kubernetes.io/projected/46f633ea-236e-46e7-a780-a9912bbd2c91-kube-api-access-74gmw\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.411664 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-public-tls-certs\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.411691 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-internal-tls-certs\") pod 
\"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.411708 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-config-data\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.411739 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-credential-keys\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.411776 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-combined-ca-bundle\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.411802 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-scripts\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.411878 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-fernet-keys\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " 
pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.418026 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-public-tls-certs\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.419123 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-scripts\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.422204 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-combined-ca-bundle\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.422617 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-internal-tls-certs\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.429697 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74gmw\" (UniqueName: \"kubernetes.io/projected/46f633ea-236e-46e7-a780-a9912bbd2c91-kube-api-access-74gmw\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.431216 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-credential-keys\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.431518 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-fernet-keys\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.444852 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-config-data\") pod \"keystone-56b6f8956c-65c6t\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.564395 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.628737 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.687840 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-c45dc"] Oct 08 06:53:09 crc kubenswrapper[4958]: I1008 06:53:09.688068 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" podUID="aecc5d72-3109-4787-aa1e-522e1bb7dda9" containerName="dnsmasq-dns" containerID="cri-o://47c22f0c3aadfa42bf6f78218f5d4c8eb40a2ebb79bdf78880418e2c33cb5304" gracePeriod=10 Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.275833 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h7mqh" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.339782 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/530f954e-7750-4f47-896a-7568c626c8ac-combined-ca-bundle\") pod \"530f954e-7750-4f47-896a-7568c626c8ac\" (UID: \"530f954e-7750-4f47-896a-7568c626c8ac\") " Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.339835 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4gjp\" (UniqueName: \"kubernetes.io/projected/530f954e-7750-4f47-896a-7568c626c8ac-kube-api-access-z4gjp\") pod \"530f954e-7750-4f47-896a-7568c626c8ac\" (UID: \"530f954e-7750-4f47-896a-7568c626c8ac\") " Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.339913 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/530f954e-7750-4f47-896a-7568c626c8ac-db-sync-config-data\") pod \"530f954e-7750-4f47-896a-7568c626c8ac\" (UID: 
\"530f954e-7750-4f47-896a-7568c626c8ac\") " Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.344042 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/530f954e-7750-4f47-896a-7568c626c8ac-kube-api-access-z4gjp" (OuterVolumeSpecName: "kube-api-access-z4gjp") pod "530f954e-7750-4f47-896a-7568c626c8ac" (UID: "530f954e-7750-4f47-896a-7568c626c8ac"). InnerVolumeSpecName "kube-api-access-z4gjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.355801 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/530f954e-7750-4f47-896a-7568c626c8ac-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "530f954e-7750-4f47-896a-7568c626c8ac" (UID: "530f954e-7750-4f47-896a-7568c626c8ac"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.383462 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/530f954e-7750-4f47-896a-7568c626c8ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "530f954e-7750-4f47-896a-7568c626c8ac" (UID: "530f954e-7750-4f47-896a-7568c626c8ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.401982 4958 generic.go:334] "Generic (PLEG): container finished" podID="aecc5d72-3109-4787-aa1e-522e1bb7dda9" containerID="47c22f0c3aadfa42bf6f78218f5d4c8eb40a2ebb79bdf78880418e2c33cb5304" exitCode=0 Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.402076 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" event={"ID":"aecc5d72-3109-4787-aa1e-522e1bb7dda9","Type":"ContainerDied","Data":"47c22f0c3aadfa42bf6f78218f5d4c8eb40a2ebb79bdf78880418e2c33cb5304"} Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.406447 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h7mqh" event={"ID":"530f954e-7750-4f47-896a-7568c626c8ac","Type":"ContainerDied","Data":"de6655ff6585800600358b4a9f5799002bb16abc99870ba0f80dfe235a74b52a"} Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.406468 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de6655ff6585800600358b4a9f5799002bb16abc99870ba0f80dfe235a74b52a" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.406516 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h7mqh" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.444670 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/530f954e-7750-4f47-896a-7568c626c8ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.444708 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4gjp\" (UniqueName: \"kubernetes.io/projected/530f954e-7750-4f47-896a-7568c626c8ac-kube-api-access-z4gjp\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.444720 4958 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/530f954e-7750-4f47-896a-7568c626c8ac-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.608728 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.647928 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-config\") pod \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.647996 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdk8t\" (UniqueName: \"kubernetes.io/projected/aecc5d72-3109-4787-aa1e-522e1bb7dda9-kube-api-access-rdk8t\") pod \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.648257 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-ovsdbserver-sb\") pod \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.648300 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-dns-svc\") pod \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.648403 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-ovsdbserver-nb\") pod \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\" (UID: \"aecc5d72-3109-4787-aa1e-522e1bb7dda9\") " Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.660198 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/aecc5d72-3109-4787-aa1e-522e1bb7dda9-kube-api-access-rdk8t" (OuterVolumeSpecName: "kube-api-access-rdk8t") pod "aecc5d72-3109-4787-aa1e-522e1bb7dda9" (UID: "aecc5d72-3109-4787-aa1e-522e1bb7dda9"). InnerVolumeSpecName "kube-api-access-rdk8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:53:10 crc kubenswrapper[4958]: W1008 06:53:10.672281 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70e48003_108c_4de3_be7e_81946556e25e.slice/crio-10568575f0b58fa5f34d60aeb42834af3e40b9b4a8a3c21387d8e4e4a712b1c9 WatchSource:0}: Error finding container 10568575f0b58fa5f34d60aeb42834af3e40b9b4a8a3c21387d8e4e4a712b1c9: Status 404 returned error can't find the container with id 10568575f0b58fa5f34d60aeb42834af3e40b9b4a8a3c21387d8e4e4a712b1c9 Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.673710 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-687454697b-jdsn4"] Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.703843 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-config" (OuterVolumeSpecName: "config") pod "aecc5d72-3109-4787-aa1e-522e1bb7dda9" (UID: "aecc5d72-3109-4787-aa1e-522e1bb7dda9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.713668 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aecc5d72-3109-4787-aa1e-522e1bb7dda9" (UID: "aecc5d72-3109-4787-aa1e-522e1bb7dda9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.715501 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aecc5d72-3109-4787-aa1e-522e1bb7dda9" (UID: "aecc5d72-3109-4787-aa1e-522e1bb7dda9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.733822 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aecc5d72-3109-4787-aa1e-522e1bb7dda9" (UID: "aecc5d72-3109-4787-aa1e-522e1bb7dda9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.750577 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.750605 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.750617 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.750626 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdk8t\" (UniqueName: \"kubernetes.io/projected/aecc5d72-3109-4787-aa1e-522e1bb7dda9-kube-api-access-rdk8t\") on node \"crc\" DevicePath \"\"" Oct 
08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.750635 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aecc5d72-3109-4787-aa1e-522e1bb7dda9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:10 crc kubenswrapper[4958]: I1008 06:53:10.831973 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56b6f8956c-65c6t"] Oct 08 06:53:10 crc kubenswrapper[4958]: W1008 06:53:10.835161 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46f633ea_236e_46e7_a780_a9912bbd2c91.slice/crio-e4268453f23c6216ebb3f49f10b87ff12038deb906ff2bdc77d5ea5e919c1934 WatchSource:0}: Error finding container e4268453f23c6216ebb3f49f10b87ff12038deb906ff2bdc77d5ea5e919c1934: Status 404 returned error can't find the container with id e4268453f23c6216ebb3f49f10b87ff12038deb906ff2bdc77d5ea5e919c1934 Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.415594 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3134ca44-2587-4cc5-8931-fbd4f9129411","Type":"ContainerStarted","Data":"fd3912b976dde4e29cd1eece2508c364d85978bb23a44acd39ed7f7f6248fbf0"} Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.417186 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-687454697b-jdsn4" event={"ID":"70e48003-108c-4de3-be7e-81946556e25e","Type":"ContainerStarted","Data":"5b4a59a5f7991c7aec4f2d6b2e9211ada14c55e3e910aa8c4dafe5d00668f2bc"} Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.417228 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-687454697b-jdsn4" event={"ID":"70e48003-108c-4de3-be7e-81946556e25e","Type":"ContainerStarted","Data":"d326bcdc417c5ccf5500a143fe2cd2f36595d7ee749c8b67463e12eac441d722"} Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.417237 4958 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/placement-687454697b-jdsn4" event={"ID":"70e48003-108c-4de3-be7e-81946556e25e","Type":"ContainerStarted","Data":"10568575f0b58fa5f34d60aeb42834af3e40b9b4a8a3c21387d8e4e4a712b1c9"} Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.418188 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.418214 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.423177 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32a2642e-02a6-45f7-aa64-a98d0fc84c01","Type":"ContainerStarted","Data":"13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e"} Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.428498 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" event={"ID":"aecc5d72-3109-4787-aa1e-522e1bb7dda9","Type":"ContainerDied","Data":"b187ce3ef36aef8989346d45c817936c6ef5f6c5647d2a4d44b6b0e662843c03"} Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.428552 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.428581 4958 scope.go:117] "RemoveContainer" containerID="47c22f0c3aadfa42bf6f78218f5d4c8eb40a2ebb79bdf78880418e2c33cb5304" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.443551 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56b6f8956c-65c6t" event={"ID":"46f633ea-236e-46e7-a780-a9912bbd2c91","Type":"ContainerStarted","Data":"86278530dcbe6b12af4202029dcd2705840a1d5866deacfbe958e071ac723024"} Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.443598 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56b6f8956c-65c6t" event={"ID":"46f633ea-236e-46e7-a780-a9912bbd2c91","Type":"ContainerStarted","Data":"e4268453f23c6216ebb3f49f10b87ff12038deb906ff2bdc77d5ea5e919c1934"} Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.444420 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.462040 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-687454697b-jdsn4" podStartSLOduration=5.462021804 podStartE2EDuration="5.462021804s" podCreationTimestamp="2025-10-08 06:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:11.451801138 +0000 UTC m=+1134.581493739" watchObservedRunningTime="2025-10-08 06:53:11.462021804 +0000 UTC m=+1134.591714405" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.536260 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-56b6f8956c-65c6t" podStartSLOduration=2.5362399570000003 podStartE2EDuration="2.536239957s" podCreationTimestamp="2025-10-08 06:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:11.514001717 +0000 UTC m=+1134.643694318" watchObservedRunningTime="2025-10-08 06:53:11.536239957 +0000 UTC m=+1134.665932558" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.536524 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-c45dc"] Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.569366 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b587f8db7-c45dc"] Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.582977 4958 scope.go:117] "RemoveContainer" containerID="b71aeb1391f5793f19265177d9172dab21dae4f05d16682e7701c6efb915d5d6" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.638880 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aecc5d72-3109-4787-aa1e-522e1bb7dda9" path="/var/lib/kubelet/pods/aecc5d72-3109-4787-aa1e-522e1bb7dda9/volumes" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.639696 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6cdccb56ff-v4lgm"] Oct 08 06:53:11 crc kubenswrapper[4958]: E1008 06:53:11.640000 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecc5d72-3109-4787-aa1e-522e1bb7dda9" containerName="dnsmasq-dns" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.640016 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecc5d72-3109-4787-aa1e-522e1bb7dda9" containerName="dnsmasq-dns" Oct 08 06:53:11 crc kubenswrapper[4958]: E1008 06:53:11.640037 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecc5d72-3109-4787-aa1e-522e1bb7dda9" containerName="init" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.640044 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecc5d72-3109-4787-aa1e-522e1bb7dda9" containerName="init" Oct 08 06:53:11 crc kubenswrapper[4958]: E1008 06:53:11.640062 4958 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="530f954e-7750-4f47-896a-7568c626c8ac" containerName="barbican-db-sync" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.640068 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="530f954e-7750-4f47-896a-7568c626c8ac" containerName="barbican-db-sync" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.640223 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="aecc5d72-3109-4787-aa1e-522e1bb7dda9" containerName="dnsmasq-dns" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.640247 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="530f954e-7750-4f47-896a-7568c626c8ac" containerName="barbican-db-sync" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.645225 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-c765759f8-jn9mj"] Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.646406 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.648556 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.649788 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6cdccb56ff-v4lgm"] Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.657295 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cppfl" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.657645 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.657740 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.657826 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.674798 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c765759f8-jn9mj"] Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.701726 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-n44pc"] Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.703095 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.768699 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-n44pc"] Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.779494 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-config-data-custom\") pod \"barbican-keystone-listener-c765759f8-jn9mj\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.779556 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-combined-ca-bundle\") pod \"barbican-keystone-listener-c765759f8-jn9mj\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.779592 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxpsp\" (UniqueName: \"kubernetes.io/projected/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-kube-api-access-nxpsp\") pod \"barbican-keystone-listener-c765759f8-jn9mj\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.779618 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-logs\") pod \"barbican-worker-6cdccb56ff-v4lgm\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc 
kubenswrapper[4958]: I1008 06:53:11.779638 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6xnv\" (UniqueName: \"kubernetes.io/projected/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-kube-api-access-w6xnv\") pod \"barbican-worker-6cdccb56ff-v4lgm\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.779662 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-logs\") pod \"barbican-keystone-listener-c765759f8-jn9mj\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.779682 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-config-data\") pod \"barbican-keystone-listener-c765759f8-jn9mj\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.779706 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.780041 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-combined-ca-bundle\") pod \"barbican-worker-6cdccb56ff-v4lgm\" (UID: 
\"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.780194 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-config\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.780292 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.780363 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.780446 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-config-data\") pod \"barbican-worker-6cdccb56ff-v4lgm\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.780524 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-config-data-custom\") pod 
\"barbican-worker-6cdccb56ff-v4lgm\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.780643 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-dns-svc\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.780668 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d7fb\" (UniqueName: \"kubernetes.io/projected/607fcdde-da07-446c-875b-946ab8ec617e-kube-api-access-4d7fb\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.858742 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-67c44c8db8-5fbxd"] Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.860143 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.862031 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882030 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-combined-ca-bundle\") pod \"barbican-worker-6cdccb56ff-v4lgm\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882080 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-config\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882106 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882128 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882153 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-config-data\") pod \"barbican-worker-6cdccb56ff-v4lgm\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882174 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-config-data-custom\") pod \"barbican-worker-6cdccb56ff-v4lgm\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882198 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-dns-svc\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882214 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d7fb\" (UniqueName: \"kubernetes.io/projected/607fcdde-da07-446c-875b-946ab8ec617e-kube-api-access-4d7fb\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882241 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-config-data-custom\") pod \"barbican-keystone-listener-c765759f8-jn9mj\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882262 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-combined-ca-bundle\") pod \"barbican-keystone-listener-c765759f8-jn9mj\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882291 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxpsp\" (UniqueName: \"kubernetes.io/projected/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-kube-api-access-nxpsp\") pod \"barbican-keystone-listener-c765759f8-jn9mj\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882316 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-logs\") pod \"barbican-worker-6cdccb56ff-v4lgm\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882331 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6xnv\" (UniqueName: \"kubernetes.io/projected/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-kube-api-access-w6xnv\") pod \"barbican-worker-6cdccb56ff-v4lgm\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882352 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-logs\") pod \"barbican-keystone-listener-c765759f8-jn9mj\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882370 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-config-data\") pod \"barbican-keystone-listener-c765759f8-jn9mj\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882386 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.882491 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-67c44c8db8-5fbxd"] Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.883192 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.884617 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.885248 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-config\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc 
kubenswrapper[4958]: I1008 06:53:11.887501 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.890043 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-logs\") pod \"barbican-keystone-listener-c765759f8-jn9mj\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.891384 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-config-data-custom\") pod \"barbican-keystone-listener-c765759f8-jn9mj\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.891576 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-combined-ca-bundle\") pod \"barbican-worker-6cdccb56ff-v4lgm\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.893698 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-combined-ca-bundle\") pod \"barbican-keystone-listener-c765759f8-jn9mj\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc 
kubenswrapper[4958]: I1008 06:53:11.894611 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-config-data\") pod \"barbican-keystone-listener-c765759f8-jn9mj\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.898397 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-logs\") pod \"barbican-worker-6cdccb56ff-v4lgm\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.898916 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-dns-svc\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.911172 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d7fb\" (UniqueName: \"kubernetes.io/projected/607fcdde-da07-446c-875b-946ab8ec617e-kube-api-access-4d7fb\") pod \"dnsmasq-dns-5c78787df7-n44pc\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.912394 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-config-data-custom\") pod \"barbican-worker-6cdccb56ff-v4lgm\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.912428 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-config-data\") pod \"barbican-worker-6cdccb56ff-v4lgm\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.915437 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6xnv\" (UniqueName: \"kubernetes.io/projected/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-kube-api-access-w6xnv\") pod \"barbican-worker-6cdccb56ff-v4lgm\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.915679 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxpsp\" (UniqueName: \"kubernetes.io/projected/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-kube-api-access-nxpsp\") pod \"barbican-keystone-listener-c765759f8-jn9mj\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.984261 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba5740e5-447f-4104-a07d-bc0e7092d962-logs\") pod \"barbican-api-67c44c8db8-5fbxd\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.984309 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-config-data\") pod \"barbican-api-67c44c8db8-5fbxd\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.984333 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-config-data-custom\") pod \"barbican-api-67c44c8db8-5fbxd\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.984383 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-combined-ca-bundle\") pod \"barbican-api-67c44c8db8-5fbxd\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.984407 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkvw4\" (UniqueName: \"kubernetes.io/projected/ba5740e5-447f-4104-a07d-bc0e7092d962-kube-api-access-hkvw4\") pod \"barbican-api-67c44c8db8-5fbxd\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:11 crc kubenswrapper[4958]: I1008 06:53:11.991458 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.001781 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.054142 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.086849 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-combined-ca-bundle\") pod \"barbican-api-67c44c8db8-5fbxd\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.086915 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkvw4\" (UniqueName: \"kubernetes.io/projected/ba5740e5-447f-4104-a07d-bc0e7092d962-kube-api-access-hkvw4\") pod \"barbican-api-67c44c8db8-5fbxd\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.087132 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba5740e5-447f-4104-a07d-bc0e7092d962-logs\") pod \"barbican-api-67c44c8db8-5fbxd\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.087160 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-config-data\") pod \"barbican-api-67c44c8db8-5fbxd\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.087199 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-config-data-custom\") pod \"barbican-api-67c44c8db8-5fbxd\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " 
pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.088359 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba5740e5-447f-4104-a07d-bc0e7092d962-logs\") pod \"barbican-api-67c44c8db8-5fbxd\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.090628 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-config-data-custom\") pod \"barbican-api-67c44c8db8-5fbxd\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.091491 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-config-data\") pod \"barbican-api-67c44c8db8-5fbxd\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.099957 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-combined-ca-bundle\") pod \"barbican-api-67c44c8db8-5fbxd\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.112959 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkvw4\" (UniqueName: \"kubernetes.io/projected/ba5740e5-447f-4104-a07d-bc0e7092d962-kube-api-access-hkvw4\") pod \"barbican-api-67c44c8db8-5fbxd\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:12 crc kubenswrapper[4958]: 
I1008 06:53:12.185410 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-67c44c8db8-5fbxd"
Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.467849 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3134ca44-2587-4cc5-8931-fbd4f9129411","Type":"ContainerStarted","Data":"b008e02909330fc4187b510083262dbb386eb71d958a9a495fb0a824a82cba77"}
Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.494190 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.494167111 podStartE2EDuration="7.494167111s" podCreationTimestamp="2025-10-08 06:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:12.484381087 +0000 UTC m=+1135.614073688" watchObservedRunningTime="2025-10-08 06:53:12.494167111 +0000 UTC m=+1135.623859712"
Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.523495 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c765759f8-jn9mj"]
Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.658814 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6cdccb56ff-v4lgm"]
Oct 08 06:53:12 crc kubenswrapper[4958]: W1008 06:53:12.664966 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b4974ca_181d_4e2e_b4c4_0c425f86f0ef.slice/crio-f24ff3efb02a956b2b0b80775acde8aebc8914bd462b7c1ac7311ddbb3d6b15f WatchSource:0}: Error finding container f24ff3efb02a956b2b0b80775acde8aebc8914bd462b7c1ac7311ddbb3d6b15f: Status 404 returned error can't find the container with id f24ff3efb02a956b2b0b80775acde8aebc8914bd462b7c1ac7311ddbb3d6b15f
Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.669337 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-n44pc"]
Oct 08 06:53:12 crc kubenswrapper[4958]: W1008 06:53:12.669495 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod607fcdde_da07_446c_875b_946ab8ec617e.slice/crio-61820470c263ecf953bb27165da269631474929b13863df64d73782fe3bfb862 WatchSource:0}: Error finding container 61820470c263ecf953bb27165da269631474929b13863df64d73782fe3bfb862: Status 404 returned error can't find the container with id 61820470c263ecf953bb27165da269631474929b13863df64d73782fe3bfb862
Oct 08 06:53:12 crc kubenswrapper[4958]: I1008 06:53:12.796196 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-67c44c8db8-5fbxd"]
Oct 08 06:53:13 crc kubenswrapper[4958]: I1008 06:53:13.486784 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67c44c8db8-5fbxd" event={"ID":"ba5740e5-447f-4104-a07d-bc0e7092d962","Type":"ContainerStarted","Data":"b6c7d3a811f444cf71be03e845aacb115c373eec9e132255314c4364810bcbd9"}
Oct 08 06:53:13 crc kubenswrapper[4958]: I1008 06:53:13.487138 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67c44c8db8-5fbxd" event={"ID":"ba5740e5-447f-4104-a07d-bc0e7092d962","Type":"ContainerStarted","Data":"3fa90dad8a7721798b2697fcc6a430121bab915e5b47d28f33472097cb2d2201"}
Oct 08 06:53:13 crc kubenswrapper[4958]: I1008 06:53:13.487153 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67c44c8db8-5fbxd" event={"ID":"ba5740e5-447f-4104-a07d-bc0e7092d962","Type":"ContainerStarted","Data":"69efbabc1dfd267c0d7119fcfe4bd6725af9ecc613b4a6aa42e217099d0d1823"}
Oct 08 06:53:13 crc kubenswrapper[4958]: I1008 06:53:13.487167 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-67c44c8db8-5fbxd"
Oct 08 06:53:13 crc kubenswrapper[4958]: I1008 06:53:13.487177 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-67c44c8db8-5fbxd"
Oct 08 06:53:13 crc kubenswrapper[4958]: I1008 06:53:13.489269 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" event={"ID":"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac","Type":"ContainerStarted","Data":"6ec6f885aec1d149f7260ebd55345ceb13bb6824b8f1a6673777f4e84a2f5fa1"}
Oct 08 06:53:13 crc kubenswrapper[4958]: I1008 06:53:13.490969 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cdccb56ff-v4lgm" event={"ID":"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef","Type":"ContainerStarted","Data":"f24ff3efb02a956b2b0b80775acde8aebc8914bd462b7c1ac7311ddbb3d6b15f"}
Oct 08 06:53:13 crc kubenswrapper[4958]: I1008 06:53:13.492497 4958 generic.go:334] "Generic (PLEG): container finished" podID="607fcdde-da07-446c-875b-946ab8ec617e" containerID="2dda4ec602a66c837e0ea09c0e3f61d48133fb86251aa643c29f18244f03beb2" exitCode=0
Oct 08 06:53:13 crc kubenswrapper[4958]: I1008 06:53:13.492549 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c78787df7-n44pc" event={"ID":"607fcdde-da07-446c-875b-946ab8ec617e","Type":"ContainerDied","Data":"2dda4ec602a66c837e0ea09c0e3f61d48133fb86251aa643c29f18244f03beb2"}
Oct 08 06:53:13 crc kubenswrapper[4958]: I1008 06:53:13.492574 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c78787df7-n44pc" event={"ID":"607fcdde-da07-446c-875b-946ab8ec617e","Type":"ContainerStarted","Data":"61820470c263ecf953bb27165da269631474929b13863df64d73782fe3bfb862"}
Oct 08 06:53:13 crc kubenswrapper[4958]: I1008 06:53:13.509792 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-67c44c8db8-5fbxd" podStartSLOduration=2.509771291 podStartE2EDuration="2.509771291s" podCreationTimestamp="2025-10-08 06:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:13.507276054 +0000 UTC m=+1136.636968655" watchObservedRunningTime="2025-10-08 06:53:13.509771291 +0000 UTC m=+1136.639463892"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.490061 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5cdf478498-ptdth"]
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.491885 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.493810 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.494650 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.502808 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cdf478498-ptdth"]
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.532532 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c78787df7-n44pc" event={"ID":"607fcdde-da07-446c-875b-946ab8ec617e","Type":"ContainerStarted","Data":"73006da58f9127ee328a44c224b98063089605f5570b8c21d85268a25a690db7"}
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.532606 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c78787df7-n44pc"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.534753 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rd4bk" event={"ID":"4f07f416-2847-46dd-b003-1cb2f1a9dda9","Type":"ContainerStarted","Data":"d527a630ea4d61d6eb0f13c8d4c75714d192decdc6d656ce04fc555e290f1155"}
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.544907 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-combined-ca-bundle\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.544993 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-config-data\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.545065 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-config-data-custom\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.545095 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxhcn\" (UniqueName: \"kubernetes.io/projected/94872b33-329c-42ca-9d90-09c6950dfd83-kube-api-access-zxhcn\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.545139 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-public-tls-certs\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.545167 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-internal-tls-certs\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.545205 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94872b33-329c-42ca-9d90-09c6950dfd83-logs\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.558594 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c78787df7-n44pc" podStartSLOduration=3.5585756870000003 podStartE2EDuration="3.558575687s" podCreationTimestamp="2025-10-08 06:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:14.550084368 +0000 UTC m=+1137.679776969" watchObservedRunningTime="2025-10-08 06:53:14.558575687 +0000 UTC m=+1137.688268288"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.572516 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-rd4bk" podStartSLOduration=2.583285429 podStartE2EDuration="38.572503563s" podCreationTimestamp="2025-10-08 06:52:36 +0000 UTC" firstStartedPulling="2025-10-08 06:52:37.015754924 +0000 UTC m=+1100.145447525" lastFinishedPulling="2025-10-08 06:53:13.004973058 +0000 UTC m=+1136.134665659" observedRunningTime="2025-10-08 06:53:14.563221232 +0000 UTC m=+1137.692913833" watchObservedRunningTime="2025-10-08 06:53:14.572503563 +0000 UTC m=+1137.702196164"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.647120 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-config-data\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.647233 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-config-data-custom\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.647266 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxhcn\" (UniqueName: \"kubernetes.io/projected/94872b33-329c-42ca-9d90-09c6950dfd83-kube-api-access-zxhcn\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.647325 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-public-tls-certs\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.647357 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-internal-tls-certs\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.647410 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94872b33-329c-42ca-9d90-09c6950dfd83-logs\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.648819 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-combined-ca-bundle\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.648963 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94872b33-329c-42ca-9d90-09c6950dfd83-logs\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.654170 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-config-data\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.656182 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-combined-ca-bundle\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.656184 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-public-tls-certs\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.656793 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-config-data-custom\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.669463 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-internal-tls-certs\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.674491 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxhcn\" (UniqueName: \"kubernetes.io/projected/94872b33-329c-42ca-9d90-09c6950dfd83-kube-api-access-zxhcn\") pod \"barbican-api-5cdf478498-ptdth\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.709444 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.709645 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.750201 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.755274 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Oct 08 06:53:14 crc kubenswrapper[4958]: I1008 06:53:14.826428 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cdf478498-ptdth"
Oct 08 06:53:15 crc kubenswrapper[4958]: I1008 06:53:15.340255 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b587f8db7-c45dc" podUID="aecc5d72-3109-4787-aa1e-522e1bb7dda9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout"
Oct 08 06:53:15 crc kubenswrapper[4958]: I1008 06:53:15.545506 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Oct 08 06:53:15 crc kubenswrapper[4958]: I1008 06:53:15.547272 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Oct 08 06:53:16 crc kubenswrapper[4958]: I1008 06:53:16.132491 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 08 06:53:16 crc kubenswrapper[4958]: I1008 06:53:16.133348 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 08 06:53:16 crc kubenswrapper[4958]: I1008 06:53:16.166919 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Oct 08 06:53:16 crc kubenswrapper[4958]: I1008 06:53:16.173574 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Oct 08 06:53:16 crc kubenswrapper[4958]: I1008 06:53:16.557129 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Oct 08 06:53:16 crc kubenswrapper[4958]: I1008 06:53:16.557218 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Oct 08 06:53:17 crc kubenswrapper[4958]: I1008 06:53:17.359700 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Oct 08 06:53:17 crc kubenswrapper[4958]: I1008 06:53:17.468483 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Oct 08 06:53:18 crc kubenswrapper[4958]: I1008 06:53:18.464470 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Oct 08 06:53:18 crc kubenswrapper[4958]: I1008 06:53:18.545794 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Oct 08 06:53:18 crc kubenswrapper[4958]: I1008 06:53:18.588854 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cdccb56ff-v4lgm" event={"ID":"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef","Type":"ContainerStarted","Data":"87bccf6b8df0d8973019540e9aeef4bc4a3f9ababf913b986323b73b1f2f4a3b"}
Oct 08 06:53:18 crc kubenswrapper[4958]: I1008 06:53:18.862868 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-67c44c8db8-5fbxd"
Oct 08 06:53:20 crc kubenswrapper[4958]: I1008 06:53:20.355122 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-67c44c8db8-5fbxd"
Oct 08 06:53:20 crc kubenswrapper[4958]: I1008 06:53:20.567876 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cdf478498-ptdth"]
Oct 08 06:53:20 crc kubenswrapper[4958]: W1008 06:53:20.574336 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94872b33_329c_42ca_9d90_09c6950dfd83.slice/crio-687f160eae8e8bb0dfb2b6f75afd851bc934639225e56673ec6bd79295110d10 WatchSource:0}: Error finding container 687f160eae8e8bb0dfb2b6f75afd851bc934639225e56673ec6bd79295110d10: Status 404 returned error can't find the container with id 687f160eae8e8bb0dfb2b6f75afd851bc934639225e56673ec6bd79295110d10
Oct 08 06:53:20 crc kubenswrapper[4958]: I1008 06:53:20.604387 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cdf478498-ptdth" event={"ID":"94872b33-329c-42ca-9d90-09c6950dfd83","Type":"ContainerStarted","Data":"687f160eae8e8bb0dfb2b6f75afd851bc934639225e56673ec6bd79295110d10"}
Oct 08 06:53:20 crc kubenswrapper[4958]: I1008 06:53:20.607226 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" event={"ID":"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac","Type":"ContainerStarted","Data":"1350e4ea881e68f6fe98606268360098bdeb78fb0b42909468424428303eff5c"}
Oct 08 06:53:20 crc kubenswrapper[4958]: I1008 06:53:20.611018 4958 generic.go:334] "Generic (PLEG): container finished" podID="4f07f416-2847-46dd-b003-1cb2f1a9dda9" containerID="d527a630ea4d61d6eb0f13c8d4c75714d192decdc6d656ce04fc555e290f1155" exitCode=0
Oct 08 06:53:20 crc kubenswrapper[4958]: I1008 06:53:20.611105 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rd4bk" event={"ID":"4f07f416-2847-46dd-b003-1cb2f1a9dda9","Type":"ContainerDied","Data":"d527a630ea4d61d6eb0f13c8d4c75714d192decdc6d656ce04fc555e290f1155"}
Oct 08 06:53:20 crc kubenswrapper[4958]: I1008 06:53:20.624248 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32a2642e-02a6-45f7-aa64-a98d0fc84c01","Type":"ContainerStarted","Data":"d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621"}
Oct 08 06:53:20 crc kubenswrapper[4958]: I1008 06:53:20.624421 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="ceilometer-central-agent" containerID="cri-o://fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74" gracePeriod=30
Oct 08 06:53:20 crc kubenswrapper[4958]: I1008 06:53:20.625317 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 08 06:53:20 crc kubenswrapper[4958]: I1008 06:53:20.625465 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="sg-core" containerID="cri-o://13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e" gracePeriod=30
Oct 08 06:53:20 crc kubenswrapper[4958]: I1008 06:53:20.625502 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="ceilometer-notification-agent" containerID="cri-o://1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c" gracePeriod=30
Oct 08 06:53:20 crc kubenswrapper[4958]: I1008 06:53:20.625607 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="proxy-httpd" containerID="cri-o://d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621" gracePeriod=30
Oct 08 06:53:20 crc kubenswrapper[4958]: I1008 06:53:20.642841 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cdccb56ff-v4lgm" event={"ID":"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef","Type":"ContainerStarted","Data":"2d749d543c294b1a2dd8f01a3b77b14cba82efc73fc6efa7214dec5fb9278949"}
Oct 08 06:53:20 crc kubenswrapper[4958]: I1008 06:53:20.680898 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.621126522 podStartE2EDuration="44.680879943s" podCreationTimestamp="2025-10-08 06:52:36 +0000 UTC" firstStartedPulling="2025-10-08 06:52:37.17791786 +0000 UTC m=+1100.307610461" lastFinishedPulling="2025-10-08 06:53:20.237671281 +0000 UTC m=+1143.367363882" observedRunningTime="2025-10-08 06:53:20.673329029 +0000 UTC m=+1143.803021630" watchObservedRunningTime="2025-10-08 06:53:20.680879943 +0000 UTC m=+1143.810572544"
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.499077 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.526719 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6cdccb56ff-v4lgm" podStartSLOduration=9.004279742 podStartE2EDuration="10.52669569s" podCreationTimestamp="2025-10-08 06:53:11 +0000 UTC" firstStartedPulling="2025-10-08 06:53:12.671046945 +0000 UTC m=+1135.800739546" lastFinishedPulling="2025-10-08 06:53:14.193462903 +0000 UTC m=+1137.323155494" observedRunningTime="2025-10-08 06:53:20.693277327 +0000 UTC m=+1143.822969938" watchObservedRunningTime="2025-10-08 06:53:21.52669569 +0000 UTC m=+1144.656388291"
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.601677 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-sg-core-conf-yaml\") pod \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") "
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.601750 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32a2642e-02a6-45f7-aa64-a98d0fc84c01-log-httpd\") pod \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") "
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.601838 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-scripts\") pod \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") "
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.601866 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s72r\" (UniqueName: \"kubernetes.io/projected/32a2642e-02a6-45f7-aa64-a98d0fc84c01-kube-api-access-4s72r\") pod \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") "
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.601908 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-config-data\") pod \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") "
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.602005 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-combined-ca-bundle\") pod \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") "
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.602062 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32a2642e-02a6-45f7-aa64-a98d0fc84c01-run-httpd\") pod \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\" (UID: \"32a2642e-02a6-45f7-aa64-a98d0fc84c01\") "
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.602300 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32a2642e-02a6-45f7-aa64-a98d0fc84c01-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "32a2642e-02a6-45f7-aa64-a98d0fc84c01" (UID: "32a2642e-02a6-45f7-aa64-a98d0fc84c01"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.602591 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32a2642e-02a6-45f7-aa64-a98d0fc84c01-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.602987 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32a2642e-02a6-45f7-aa64-a98d0fc84c01-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "32a2642e-02a6-45f7-aa64-a98d0fc84c01" (UID: "32a2642e-02a6-45f7-aa64-a98d0fc84c01"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.609105 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-scripts" (OuterVolumeSpecName: "scripts") pod "32a2642e-02a6-45f7-aa64-a98d0fc84c01" (UID: "32a2642e-02a6-45f7-aa64-a98d0fc84c01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.609131 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a2642e-02a6-45f7-aa64-a98d0fc84c01-kube-api-access-4s72r" (OuterVolumeSpecName: "kube-api-access-4s72r") pod "32a2642e-02a6-45f7-aa64-a98d0fc84c01" (UID: "32a2642e-02a6-45f7-aa64-a98d0fc84c01"). InnerVolumeSpecName "kube-api-access-4s72r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.656864 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" event={"ID":"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac","Type":"ContainerStarted","Data":"7226ee00569bafdcf12786ebdd5af9f03b77506f57b10ab6b33eedc12be1ca77"}
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.659375 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "32a2642e-02a6-45f7-aa64-a98d0fc84c01" (UID: "32a2642e-02a6-45f7-aa64-a98d0fc84c01"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.665728 4958 generic.go:334] "Generic (PLEG): container finished" podID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerID="d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621" exitCode=0
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.665771 4958 generic.go:334] "Generic (PLEG): container finished" podID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerID="13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e" exitCode=2
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.665781 4958 generic.go:334] "Generic (PLEG): container finished" podID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerID="1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c" exitCode=0
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.665790 4958 generic.go:334] "Generic (PLEG): container finished" podID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerID="fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74" exitCode=0
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.665867 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32a2642e-02a6-45f7-aa64-a98d0fc84c01","Type":"ContainerDied","Data":"d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621"}
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.665911 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32a2642e-02a6-45f7-aa64-a98d0fc84c01","Type":"ContainerDied","Data":"13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e"}
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.665926 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32a2642e-02a6-45f7-aa64-a98d0fc84c01","Type":"ContainerDied","Data":"1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c"}
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.665939 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32a2642e-02a6-45f7-aa64-a98d0fc84c01","Type":"ContainerDied","Data":"fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74"}
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.665998 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32a2642e-02a6-45f7-aa64-a98d0fc84c01","Type":"ContainerDied","Data":"eb766a2792a999d2c1364ed80804cb655ed7056c20e580edd255f16a63cc1902"}
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.666017 4958 scope.go:117] "RemoveContainer" containerID="d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621"
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.666163 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.680280 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cdf478498-ptdth" event={"ID":"94872b33-329c-42ca-9d90-09c6950dfd83","Type":"ContainerStarted","Data":"9c1c2c4508d314f052c9515beb0473192e5c3aa71f3ed2c51ea5f440955efe26"}
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.680537 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cdf478498-ptdth" event={"ID":"94872b33-329c-42ca-9d90-09c6950dfd83","Type":"ContainerStarted","Data":"8706ba2579f385903f00ba5a57a567212e79efde0f18b7f2408885add9cf0427"}
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.690384 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" podStartSLOduration=9.023631854 podStartE2EDuration="10.690366207s" podCreationTimestamp="2025-10-08 06:53:11 +0000 UTC" firstStartedPulling="2025-10-08 06:53:12.532000952 +0000 UTC m=+1135.661693543" lastFinishedPulling="2025-10-08 06:53:14.198735285 +0000 UTC m=+1137.328427896" observedRunningTime="2025-10-08 06:53:21.686689258 +0000 UTC m=+1144.816381859" watchObservedRunningTime="2025-10-08 06:53:21.690366207 +0000 UTC m=+1144.820058808"
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.705369 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32a2642e-02a6-45f7-aa64-a98d0fc84c01-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.705430 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.705442 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.705453 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s72r\" (UniqueName: \"kubernetes.io/projected/32a2642e-02a6-45f7-aa64-a98d0fc84c01-kube-api-access-4s72r\") on node \"crc\" DevicePath \"\""
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.737073 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32a2642e-02a6-45f7-aa64-a98d0fc84c01" (UID: "32a2642e-02a6-45f7-aa64-a98d0fc84c01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.749805 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-config-data" (OuterVolumeSpecName: "config-data") pod "32a2642e-02a6-45f7-aa64-a98d0fc84c01" (UID: "32a2642e-02a6-45f7-aa64-a98d0fc84c01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.807604 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.807633 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32a2642e-02a6-45f7-aa64-a98d0fc84c01-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.861885 4958 scope.go:117] "RemoveContainer" containerID="13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e"
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.894297 4958 scope.go:117] "RemoveContainer" containerID="1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c"
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.923859 4958 scope.go:117] "RemoveContainer" containerID="fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74"
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.950180 4958 scope.go:117] "RemoveContainer" containerID="d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621"
Oct 08 06:53:21 crc kubenswrapper[4958]: E1008 06:53:21.950780 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621\": container with ID starting with d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621 not found: ID does not exist" containerID="d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621"
Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.950831 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621"}
err="failed to get container status \"d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621\": rpc error: code = NotFound desc = could not find container \"d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621\": container with ID starting with d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621 not found: ID does not exist" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.950863 4958 scope.go:117] "RemoveContainer" containerID="13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e" Oct 08 06:53:21 crc kubenswrapper[4958]: E1008 06:53:21.951253 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e\": container with ID starting with 13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e not found: ID does not exist" containerID="13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.951324 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e"} err="failed to get container status \"13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e\": rpc error: code = NotFound desc = could not find container \"13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e\": container with ID starting with 13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e not found: ID does not exist" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.951372 4958 scope.go:117] "RemoveContainer" containerID="1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c" Oct 08 06:53:21 crc kubenswrapper[4958]: E1008 06:53:21.951650 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c\": container with ID starting with 1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c not found: ID does not exist" containerID="1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.951678 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c"} err="failed to get container status \"1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c\": rpc error: code = NotFound desc = could not find container \"1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c\": container with ID starting with 1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c not found: ID does not exist" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.951698 4958 scope.go:117] "RemoveContainer" containerID="fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74" Oct 08 06:53:21 crc kubenswrapper[4958]: E1008 06:53:21.951937 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74\": container with ID starting with fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74 not found: ID does not exist" containerID="fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.952099 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74"} err="failed to get container status \"fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74\": rpc error: code = NotFound desc = could not find container \"fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74\": container with ID 
starting with fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74 not found: ID does not exist" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.952116 4958 scope.go:117] "RemoveContainer" containerID="d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.952375 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621"} err="failed to get container status \"d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621\": rpc error: code = NotFound desc = could not find container \"d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621\": container with ID starting with d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621 not found: ID does not exist" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.952408 4958 scope.go:117] "RemoveContainer" containerID="13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.953013 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e"} err="failed to get container status \"13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e\": rpc error: code = NotFound desc = could not find container \"13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e\": container with ID starting with 13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e not found: ID does not exist" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.953039 4958 scope.go:117] "RemoveContainer" containerID="1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.953358 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c"} err="failed to get container status \"1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c\": rpc error: code = NotFound desc = could not find container \"1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c\": container with ID starting with 1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c not found: ID does not exist" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.953375 4958 scope.go:117] "RemoveContainer" containerID="fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.954334 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74"} err="failed to get container status \"fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74\": rpc error: code = NotFound desc = could not find container \"fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74\": container with ID starting with fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74 not found: ID does not exist" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.956213 4958 scope.go:117] "RemoveContainer" containerID="d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.962852 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621"} err="failed to get container status \"d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621\": rpc error: code = NotFound desc = could not find container \"d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621\": container with ID starting with d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621 not found: ID does not 
exist" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.963029 4958 scope.go:117] "RemoveContainer" containerID="13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.963610 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e"} err="failed to get container status \"13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e\": rpc error: code = NotFound desc = could not find container \"13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e\": container with ID starting with 13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e not found: ID does not exist" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.963630 4958 scope.go:117] "RemoveContainer" containerID="1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.963931 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c"} err="failed to get container status \"1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c\": rpc error: code = NotFound desc = could not find container \"1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c\": container with ID starting with 1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c not found: ID does not exist" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.963992 4958 scope.go:117] "RemoveContainer" containerID="fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.964201 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74"} err="failed to get container status 
\"fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74\": rpc error: code = NotFound desc = could not find container \"fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74\": container with ID starting with fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74 not found: ID does not exist" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.964216 4958 scope.go:117] "RemoveContainer" containerID="d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.964388 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621"} err="failed to get container status \"d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621\": rpc error: code = NotFound desc = could not find container \"d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621\": container with ID starting with d7e6d54bb2ed12e3e47754f46280010f309924a82b5c3ee592886379ad60c621 not found: ID does not exist" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.964404 4958 scope.go:117] "RemoveContainer" containerID="13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.964582 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e"} err="failed to get container status \"13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e\": rpc error: code = NotFound desc = could not find container \"13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e\": container with ID starting with 13ab91cf6547eecc7d4af69c6a4ba3068f388e94a731ac60a3218b405f35804e not found: ID does not exist" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.964607 4958 scope.go:117] "RemoveContainer" 
containerID="1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.964801 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c"} err="failed to get container status \"1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c\": rpc error: code = NotFound desc = could not find container \"1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c\": container with ID starting with 1c4c6c1f4cf7e15e6d31668773e4c6e8bdeccdf55d2962138873f6bbd325099c not found: ID does not exist" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.964816 4958 scope.go:117] "RemoveContainer" containerID="fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74" Oct 08 06:53:21 crc kubenswrapper[4958]: I1008 06:53:21.965060 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74"} err="failed to get container status \"fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74\": rpc error: code = NotFound desc = could not find container \"fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74\": container with ID starting with fd176e12dbe0867ef41c8a91928ff88d3792e84f312117d31aa427275e171c74 not found: ID does not exist" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.009118 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5cdf478498-ptdth" podStartSLOduration=8.009098709 podStartE2EDuration="8.009098709s" podCreationTimestamp="2025-10-08 06:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:21.71899998 +0000 UTC m=+1144.848692581" watchObservedRunningTime="2025-10-08 06:53:22.009098709 +0000 UTC 
m=+1145.138791310" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.017102 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.032986 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.051004 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:53:22 crc kubenswrapper[4958]: E1008 06:53:22.051645 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="sg-core" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.051731 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="sg-core" Oct 08 06:53:22 crc kubenswrapper[4958]: E1008 06:53:22.051845 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="ceilometer-notification-agent" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.051975 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="ceilometer-notification-agent" Oct 08 06:53:22 crc kubenswrapper[4958]: E1008 06:53:22.052078 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="ceilometer-central-agent" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.052146 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="ceilometer-central-agent" Oct 08 06:53:22 crc kubenswrapper[4958]: E1008 06:53:22.052215 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="proxy-httpd" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.052270 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="proxy-httpd" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.052478 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="sg-core" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.052537 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="ceilometer-central-agent" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.052602 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="proxy-httpd" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.052662 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" containerName="ceilometer-notification-agent" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.054302 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.056031 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.061348 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.061459 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.069346 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.116132 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c88f30e-9d32-47da-b268-9bd7321f197d-log-httpd\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.116168 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.116230 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.116252 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-config-data\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.116295 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-scripts\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.116314 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vxp8\" (UniqueName: \"kubernetes.io/projected/9c88f30e-9d32-47da-b268-9bd7321f197d-kube-api-access-8vxp8\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.116350 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c88f30e-9d32-47da-b268-9bd7321f197d-run-httpd\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.147650 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b55c5465-b72j7"] Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.148154 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b55c5465-b72j7" podUID="189707bd-c3af-4922-bbf0-75c3bc5f4ad0" containerName="dnsmasq-dns" containerID="cri-o://59c51bd39db41e729e27ea3ea7f82e9cdce3d659d33fd86f33a81a185da47cd0" gracePeriod=10 Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.180086 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.217288 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f07f416-2847-46dd-b003-1cb2f1a9dda9-etc-machine-id\") pod \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.217537 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f07f416-2847-46dd-b003-1cb2f1a9dda9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4f07f416-2847-46dd-b003-1cb2f1a9dda9" (UID: "4f07f416-2847-46dd-b003-1cb2f1a9dda9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.217655 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-combined-ca-bundle\") pod \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.217767 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-config-data\") pod \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.217863 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-scripts\") pod \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.218461 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-db-sync-config-data\") pod \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.218618 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67q2b\" (UniqueName: \"kubernetes.io/projected/4f07f416-2847-46dd-b003-1cb2f1a9dda9-kube-api-access-67q2b\") pod \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\" (UID: \"4f07f416-2847-46dd-b003-1cb2f1a9dda9\") " Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.218850 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.218930 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c88f30e-9d32-47da-b268-9bd7321f197d-log-httpd\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.219053 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.219131 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-config-data\") pod \"ceilometer-0\" (UID: 
\"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.219235 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-scripts\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.219397 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vxp8\" (UniqueName: \"kubernetes.io/projected/9c88f30e-9d32-47da-b268-9bd7321f197d-kube-api-access-8vxp8\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.219680 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c88f30e-9d32-47da-b268-9bd7321f197d-log-httpd\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.219941 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c88f30e-9d32-47da-b268-9bd7321f197d-run-httpd\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.220296 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c88f30e-9d32-47da-b268-9bd7321f197d-run-httpd\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.220510 4958 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/4f07f416-2847-46dd-b003-1cb2f1a9dda9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.227566 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f07f416-2847-46dd-b003-1cb2f1a9dda9-kube-api-access-67q2b" (OuterVolumeSpecName: "kube-api-access-67q2b") pod "4f07f416-2847-46dd-b003-1cb2f1a9dda9" (UID: "4f07f416-2847-46dd-b003-1cb2f1a9dda9"). InnerVolumeSpecName "kube-api-access-67q2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.227568 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4f07f416-2847-46dd-b003-1cb2f1a9dda9" (UID: "4f07f416-2847-46dd-b003-1cb2f1a9dda9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.227875 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-scripts" (OuterVolumeSpecName: "scripts") pod "4f07f416-2847-46dd-b003-1cb2f1a9dda9" (UID: "4f07f416-2847-46dd-b003-1cb2f1a9dda9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.228117 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.228278 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-config-data\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.228483 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.230498 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-scripts\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.243679 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vxp8\" (UniqueName: \"kubernetes.io/projected/9c88f30e-9d32-47da-b268-9bd7321f197d-kube-api-access-8vxp8\") pod \"ceilometer-0\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.251063 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f07f416-2847-46dd-b003-1cb2f1a9dda9" (UID: "4f07f416-2847-46dd-b003-1cb2f1a9dda9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.273678 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-config-data" (OuterVolumeSpecName: "config-data") pod "4f07f416-2847-46dd-b003-1cb2f1a9dda9" (UID: "4f07f416-2847-46dd-b003-1cb2f1a9dda9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.324179 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.324215 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.324227 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.324237 4958 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f07f416-2847-46dd-b003-1cb2f1a9dda9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.324248 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67q2b\" (UniqueName: 
\"kubernetes.io/projected/4f07f416-2847-46dd-b003-1cb2f1a9dda9-kube-api-access-67q2b\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.470673 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.587018 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.693494 4958 generic.go:334] "Generic (PLEG): container finished" podID="189707bd-c3af-4922-bbf0-75c3bc5f4ad0" containerID="59c51bd39db41e729e27ea3ea7f82e9cdce3d659d33fd86f33a81a185da47cd0" exitCode=0 Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.693738 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b55c5465-b72j7" event={"ID":"189707bd-c3af-4922-bbf0-75c3bc5f4ad0","Type":"ContainerDied","Data":"59c51bd39db41e729e27ea3ea7f82e9cdce3d659d33fd86f33a81a185da47cd0"} Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.693760 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b55c5465-b72j7" event={"ID":"189707bd-c3af-4922-bbf0-75c3bc5f4ad0","Type":"ContainerDied","Data":"92edb415bd6f035b16961c3d5773f0dd855ebb95b5487cfaee0796025274eb99"} Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.693776 4958 scope.go:117] "RemoveContainer" containerID="59c51bd39db41e729e27ea3ea7f82e9cdce3d659d33fd86f33a81a185da47cd0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.693860 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b55c5465-b72j7" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.699804 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rd4bk" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.705834 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rd4bk" event={"ID":"4f07f416-2847-46dd-b003-1cb2f1a9dda9","Type":"ContainerDied","Data":"89d8eae2baca6fb08cf89c3b33641f263e1d69ee1823eb5691e15a183b13b809"} Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.706062 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89d8eae2baca6fb08cf89c3b33641f263e1d69ee1823eb5691e15a183b13b809" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.706642 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cdf478498-ptdth" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.706732 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cdf478498-ptdth" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.733289 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-config\") pod \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.733330 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-dns-swift-storage-0\") pod \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.733398 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-ovsdbserver-sb\") pod \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " Oct 
08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.733436 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trl7r\" (UniqueName: \"kubernetes.io/projected/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-kube-api-access-trl7r\") pod \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.733462 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-dns-svc\") pod \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.733515 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-ovsdbserver-nb\") pod \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\" (UID: \"189707bd-c3af-4922-bbf0-75c3bc5f4ad0\") " Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.745061 4958 scope.go:117] "RemoveContainer" containerID="84b3daba799753ac2900d6b1645f78027c3f9577bd3e76d99a554ef289d16f27" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.747346 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-kube-api-access-trl7r" (OuterVolumeSpecName: "kube-api-access-trl7r") pod "189707bd-c3af-4922-bbf0-75c3bc5f4ad0" (UID: "189707bd-c3af-4922-bbf0-75c3bc5f4ad0"). InnerVolumeSpecName "kube-api-access-trl7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.837606 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trl7r\" (UniqueName: \"kubernetes.io/projected/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-kube-api-access-trl7r\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.859936 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 06:53:22 crc kubenswrapper[4958]: E1008 06:53:22.860402 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f07f416-2847-46dd-b003-1cb2f1a9dda9" containerName="cinder-db-sync" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.860417 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f07f416-2847-46dd-b003-1cb2f1a9dda9" containerName="cinder-db-sync" Oct 08 06:53:22 crc kubenswrapper[4958]: E1008 06:53:22.860435 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189707bd-c3af-4922-bbf0-75c3bc5f4ad0" containerName="init" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.860442 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="189707bd-c3af-4922-bbf0-75c3bc5f4ad0" containerName="init" Oct 08 06:53:22 crc kubenswrapper[4958]: E1008 06:53:22.860475 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189707bd-c3af-4922-bbf0-75c3bc5f4ad0" containerName="dnsmasq-dns" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.860482 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="189707bd-c3af-4922-bbf0-75c3bc5f4ad0" containerName="dnsmasq-dns" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.860676 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f07f416-2847-46dd-b003-1cb2f1a9dda9" containerName="cinder-db-sync" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.860712 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="189707bd-c3af-4922-bbf0-75c3bc5f4ad0" containerName="dnsmasq-dns" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.861867 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.863860 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "189707bd-c3af-4922-bbf0-75c3bc5f4ad0" (UID: "189707bd-c3af-4922-bbf0-75c3bc5f4ad0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.864199 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "189707bd-c3af-4922-bbf0-75c3bc5f4ad0" (UID: "189707bd-c3af-4922-bbf0-75c3bc5f4ad0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.864638 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-htpjt" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.864869 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.866488 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.871218 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "189707bd-c3af-4922-bbf0-75c3bc5f4ad0" (UID: "189707bd-c3af-4922-bbf0-75c3bc5f4ad0"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.893933 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.902379 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.921640 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "189707bd-c3af-4922-bbf0-75c3bc5f4ad0" (UID: "189707bd-c3af-4922-bbf0-75c3bc5f4ad0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.926119 4958 scope.go:117] "RemoveContainer" containerID="59c51bd39db41e729e27ea3ea7f82e9cdce3d659d33fd86f33a81a185da47cd0" Oct 08 06:53:22 crc kubenswrapper[4958]: E1008 06:53:22.930128 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59c51bd39db41e729e27ea3ea7f82e9cdce3d659d33fd86f33a81a185da47cd0\": container with ID starting with 59c51bd39db41e729e27ea3ea7f82e9cdce3d659d33fd86f33a81a185da47cd0 not found: ID does not exist" containerID="59c51bd39db41e729e27ea3ea7f82e9cdce3d659d33fd86f33a81a185da47cd0" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.930175 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59c51bd39db41e729e27ea3ea7f82e9cdce3d659d33fd86f33a81a185da47cd0"} err="failed to get container status \"59c51bd39db41e729e27ea3ea7f82e9cdce3d659d33fd86f33a81a185da47cd0\": rpc error: code = NotFound desc = could not find container \"59c51bd39db41e729e27ea3ea7f82e9cdce3d659d33fd86f33a81a185da47cd0\": container with ID starting 
with 59c51bd39db41e729e27ea3ea7f82e9cdce3d659d33fd86f33a81a185da47cd0 not found: ID does not exist" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.930199 4958 scope.go:117] "RemoveContainer" containerID="84b3daba799753ac2900d6b1645f78027c3f9577bd3e76d99a554ef289d16f27" Oct 08 06:53:22 crc kubenswrapper[4958]: E1008 06:53:22.934396 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b3daba799753ac2900d6b1645f78027c3f9577bd3e76d99a554ef289d16f27\": container with ID starting with 84b3daba799753ac2900d6b1645f78027c3f9577bd3e76d99a554ef289d16f27 not found: ID does not exist" containerID="84b3daba799753ac2900d6b1645f78027c3f9577bd3e76d99a554ef289d16f27" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.934445 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b3daba799753ac2900d6b1645f78027c3f9577bd3e76d99a554ef289d16f27"} err="failed to get container status \"84b3daba799753ac2900d6b1645f78027c3f9577bd3e76d99a554ef289d16f27\": rpc error: code = NotFound desc = could not find container \"84b3daba799753ac2900d6b1645f78027c3f9577bd3e76d99a554ef289d16f27\": container with ID starting with 84b3daba799753ac2900d6b1645f78027c3f9577bd3e76d99a554ef289d16f27 not found: ID does not exist" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.939683 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.939760 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.939829 4958 reconciler_common.go:293] "Volume detached for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.939888 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:22 crc kubenswrapper[4958]: I1008 06:53:22.975087 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-config" (OuterVolumeSpecName: "config") pod "189707bd-c3af-4922-bbf0-75c3bc5f4ad0" (UID: "189707bd-c3af-4922-bbf0-75c3bc5f4ad0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.026956 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-h5t57"] Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.032722 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.042748 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f6f8889-0d97-4303-a8b7-e0845c79fecf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.043006 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-scripts\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.043190 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.043435 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.043614 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88w6n\" (UniqueName: \"kubernetes.io/projected/6f6f8889-0d97-4303-a8b7-e0845c79fecf-kube-api-access-88w6n\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " 
pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.043798 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-config-data\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.043982 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189707bd-c3af-4922-bbf0-75c3bc5f4ad0-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.072686 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-h5t57"] Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.102313 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.117167 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b55c5465-b72j7"] Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.124782 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b55c5465-b72j7"] Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.135074 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.136629 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.139732 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.149426 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.149477 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.149498 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88w6n\" (UniqueName: \"kubernetes.io/projected/6f6f8889-0d97-4303-a8b7-e0845c79fecf-kube-api-access-88w6n\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.149555 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-ovsdbserver-sb\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.149605 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-config-data\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.149655 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-dns-svc\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.149671 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-ovsdbserver-nb\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.149690 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-config\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.149716 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f6f8889-0d97-4303-a8b7-e0845c79fecf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.149753 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-scripts\") pod 
\"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.150282 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f6f8889-0d97-4303-a8b7-e0845c79fecf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.150349 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjjld\" (UniqueName: \"kubernetes.io/projected/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-kube-api-access-hjjld\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.150383 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-dns-swift-storage-0\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.165409 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.165502 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.165800 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.166418 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-config-data\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.166452 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88w6n\" (UniqueName: \"kubernetes.io/projected/6f6f8889-0d97-4303-a8b7-e0845c79fecf-kube-api-access-88w6n\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.170341 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-scripts\") pod \"cinder-scheduler-0\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.232174 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.252049 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.252135 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-ovsdbserver-sb\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.252170 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4477d45d-e69a-4c74-b676-4a1568a1a6db-logs\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.252200 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4477d45d-e69a-4c74-b676-4a1568a1a6db-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.252215 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-config-data-custom\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 
06:53:23.252237 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-dns-svc\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.252252 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-ovsdbserver-nb\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.252266 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-config\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.252296 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-config-data\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.252313 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjjld\" (UniqueName: \"kubernetes.io/projected/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-kube-api-access-hjjld\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.252337 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-dns-swift-storage-0\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.252365 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5pff\" (UniqueName: \"kubernetes.io/projected/4477d45d-e69a-4c74-b676-4a1568a1a6db-kube-api-access-v5pff\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.252411 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-scripts\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.253920 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-ovsdbserver-sb\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.254055 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-ovsdbserver-nb\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.254452 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-config\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.254881 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-dns-swift-storage-0\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.255446 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-dns-svc\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.269379 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjjld\" (UniqueName: \"kubernetes.io/projected/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-kube-api-access-hjjld\") pod \"dnsmasq-dns-84bd785c49-h5t57\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.355365 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-scripts\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.355413 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.355456 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4477d45d-e69a-4c74-b676-4a1568a1a6db-logs\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.355485 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4477d45d-e69a-4c74-b676-4a1568a1a6db-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.355500 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-config-data-custom\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.355533 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-config-data\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.355572 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5pff\" (UniqueName: \"kubernetes.io/projected/4477d45d-e69a-4c74-b676-4a1568a1a6db-kube-api-access-v5pff\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.356040 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/4477d45d-e69a-4c74-b676-4a1568a1a6db-logs\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.356065 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4477d45d-e69a-4c74-b676-4a1568a1a6db-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.359355 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-scripts\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.359822 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.366829 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-config-data-custom\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.368126 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-config-data\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.375705 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5pff\" (UniqueName: \"kubernetes.io/projected/4477d45d-e69a-4c74-b676-4a1568a1a6db-kube-api-access-v5pff\") pod \"cinder-api-0\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.383982 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.455786 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.590861 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="189707bd-c3af-4922-bbf0-75c3bc5f4ad0" path="/var/lib/kubelet/pods/189707bd-c3af-4922-bbf0-75c3bc5f4ad0/volumes" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.592331 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32a2642e-02a6-45f7-aa64-a98d0fc84c01" path="/var/lib/kubelet/pods/32a2642e-02a6-45f7-aa64-a98d0fc84c01/volumes" Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.711706 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c88f30e-9d32-47da-b268-9bd7321f197d","Type":"ContainerStarted","Data":"ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654"} Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.712011 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c88f30e-9d32-47da-b268-9bd7321f197d","Type":"ContainerStarted","Data":"1d6019f8fe60c99cc670b46cde580e94548bce4678020fed39b265acbba5be47"} Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.749231 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.857831 4958 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 08 06:53:23 crc kubenswrapper[4958]: I1008 06:53:23.903216 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-h5t57"] Oct 08 06:53:23 crc kubenswrapper[4958]: W1008 06:53:23.909872 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d6dbb43_cc4d_47e5_97c6_76d1638d50e0.slice/crio-ad0ee06a20737348d29935fac69ad248e8b2d92456459ab81f9f23750c85af32 WatchSource:0}: Error finding container ad0ee06a20737348d29935fac69ad248e8b2d92456459ab81f9f23750c85af32: Status 404 returned error can't find the container with id ad0ee06a20737348d29935fac69ad248e8b2d92456459ab81f9f23750c85af32 Oct 08 06:53:24 crc kubenswrapper[4958]: I1008 06:53:24.748105 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c88f30e-9d32-47da-b268-9bd7321f197d","Type":"ContainerStarted","Data":"4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642"} Oct 08 06:53:24 crc kubenswrapper[4958]: I1008 06:53:24.752442 4958 generic.go:334] "Generic (PLEG): container finished" podID="2d6dbb43-cc4d-47e5-97c6-76d1638d50e0" containerID="7252a20e8b89a3faceec9505d6ce02b27827cd7944c4322b31f28639ca45374a" exitCode=0 Oct 08 06:53:24 crc kubenswrapper[4958]: I1008 06:53:24.752557 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd785c49-h5t57" event={"ID":"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0","Type":"ContainerDied","Data":"7252a20e8b89a3faceec9505d6ce02b27827cd7944c4322b31f28639ca45374a"} Oct 08 06:53:24 crc kubenswrapper[4958]: I1008 06:53:24.752607 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd785c49-h5t57" event={"ID":"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0","Type":"ContainerStarted","Data":"ad0ee06a20737348d29935fac69ad248e8b2d92456459ab81f9f23750c85af32"} Oct 08 06:53:24 crc kubenswrapper[4958]: I1008 
06:53:24.754696 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f6f8889-0d97-4303-a8b7-e0845c79fecf","Type":"ContainerStarted","Data":"9cbb2d969f219b45d318365a8f19e05c843e76c3c3767deb6d52f53869d4b631"} Oct 08 06:53:24 crc kubenswrapper[4958]: I1008 06:53:24.765922 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4477d45d-e69a-4c74-b676-4a1568a1a6db","Type":"ContainerStarted","Data":"9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730"} Oct 08 06:53:24 crc kubenswrapper[4958]: I1008 06:53:24.765996 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4477d45d-e69a-4c74-b676-4a1568a1a6db","Type":"ContainerStarted","Data":"39816e54f19257b9d9300f264f69e2dff0526101f7e50f4714c669a0dda45b41"} Oct 08 06:53:25 crc kubenswrapper[4958]: I1008 06:53:25.261443 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 08 06:53:25 crc kubenswrapper[4958]: I1008 06:53:25.777101 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f6f8889-0d97-4303-a8b7-e0845c79fecf","Type":"ContainerStarted","Data":"afd00549d126f0d6dee7ba385e0de50d86cf163b0595797f031e1e3bfd1f07bc"} Oct 08 06:53:25 crc kubenswrapper[4958]: I1008 06:53:25.777162 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f6f8889-0d97-4303-a8b7-e0845c79fecf","Type":"ContainerStarted","Data":"c6de18de91080d77e1563f975e44c048346b45a87542957a1e2a041293194f87"} Oct 08 06:53:25 crc kubenswrapper[4958]: I1008 06:53:25.781155 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4477d45d-e69a-4c74-b676-4a1568a1a6db","Type":"ContainerStarted","Data":"94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89"} Oct 08 06:53:25 crc kubenswrapper[4958]: I1008 06:53:25.781356 4958 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 08 06:53:25 crc kubenswrapper[4958]: I1008 06:53:25.786592 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c88f30e-9d32-47da-b268-9bd7321f197d","Type":"ContainerStarted","Data":"d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326"} Oct 08 06:53:25 crc kubenswrapper[4958]: I1008 06:53:25.788171 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd785c49-h5t57" event={"ID":"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0","Type":"ContainerStarted","Data":"029ca8abfaf31b17ac5aed49a82e63c7ea88669308d49b70385f7b6663bd95b1"} Oct 08 06:53:25 crc kubenswrapper[4958]: I1008 06:53:25.788894 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:25 crc kubenswrapper[4958]: I1008 06:53:25.795839 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.103429482 podStartE2EDuration="3.795820389s" podCreationTimestamp="2025-10-08 06:53:22 +0000 UTC" firstStartedPulling="2025-10-08 06:53:23.771158146 +0000 UTC m=+1146.900850747" lastFinishedPulling="2025-10-08 06:53:24.463549053 +0000 UTC m=+1147.593241654" observedRunningTime="2025-10-08 06:53:25.793051604 +0000 UTC m=+1148.922744205" watchObservedRunningTime="2025-10-08 06:53:25.795820389 +0000 UTC m=+1148.925512990" Oct 08 06:53:25 crc kubenswrapper[4958]: I1008 06:53:25.829832 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.829814906 podStartE2EDuration="2.829814906s" podCreationTimestamp="2025-10-08 06:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:25.82549109 +0000 UTC m=+1148.955183701" watchObservedRunningTime="2025-10-08 
06:53:25.829814906 +0000 UTC m=+1148.959507507" Oct 08 06:53:25 crc kubenswrapper[4958]: I1008 06:53:25.850259 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84bd785c49-h5t57" podStartSLOduration=3.850242558 podStartE2EDuration="3.850242558s" podCreationTimestamp="2025-10-08 06:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:25.84513544 +0000 UTC m=+1148.974828041" watchObservedRunningTime="2025-10-08 06:53:25.850242558 +0000 UTC m=+1148.979935159" Oct 08 06:53:26 crc kubenswrapper[4958]: I1008 06:53:26.804645 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c88f30e-9d32-47da-b268-9bd7321f197d","Type":"ContainerStarted","Data":"a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312"} Oct 08 06:53:26 crc kubenswrapper[4958]: I1008 06:53:26.805657 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="4477d45d-e69a-4c74-b676-4a1568a1a6db" containerName="cinder-api-log" containerID="cri-o://9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730" gracePeriod=30 Oct 08 06:53:26 crc kubenswrapper[4958]: I1008 06:53:26.805784 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="4477d45d-e69a-4c74-b676-4a1568a1a6db" containerName="cinder-api" containerID="cri-o://94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89" gracePeriod=30 Oct 08 06:53:26 crc kubenswrapper[4958]: I1008 06:53:26.837149 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7332605829999999 podStartE2EDuration="4.837134283s" podCreationTimestamp="2025-10-08 06:53:22 +0000 UTC" firstStartedPulling="2025-10-08 06:53:23.003180279 +0000 UTC m=+1146.132872880" 
lastFinishedPulling="2025-10-08 06:53:26.107053979 +0000 UTC m=+1149.236746580" observedRunningTime="2025-10-08 06:53:26.835489419 +0000 UTC m=+1149.965182020" watchObservedRunningTime="2025-10-08 06:53:26.837134283 +0000 UTC m=+1149.966826884" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.449018 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.544864 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4477d45d-e69a-4c74-b676-4a1568a1a6db-etc-machine-id\") pod \"4477d45d-e69a-4c74-b676-4a1568a1a6db\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.544914 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4477d45d-e69a-4c74-b676-4a1568a1a6db-logs\") pod \"4477d45d-e69a-4c74-b676-4a1568a1a6db\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.544981 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5pff\" (UniqueName: \"kubernetes.io/projected/4477d45d-e69a-4c74-b676-4a1568a1a6db-kube-api-access-v5pff\") pod \"4477d45d-e69a-4c74-b676-4a1568a1a6db\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.545019 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-config-data-custom\") pod \"4477d45d-e69a-4c74-b676-4a1568a1a6db\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.545108 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-scripts\") pod \"4477d45d-e69a-4c74-b676-4a1568a1a6db\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.545122 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-combined-ca-bundle\") pod \"4477d45d-e69a-4c74-b676-4a1568a1a6db\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.545224 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-config-data\") pod \"4477d45d-e69a-4c74-b676-4a1568a1a6db\" (UID: \"4477d45d-e69a-4c74-b676-4a1568a1a6db\") " Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.545550 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4477d45d-e69a-4c74-b676-4a1568a1a6db-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4477d45d-e69a-4c74-b676-4a1568a1a6db" (UID: "4477d45d-e69a-4c74-b676-4a1568a1a6db"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.545690 4958 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4477d45d-e69a-4c74-b676-4a1568a1a6db-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.546031 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4477d45d-e69a-4c74-b676-4a1568a1a6db-logs" (OuterVolumeSpecName: "logs") pod "4477d45d-e69a-4c74-b676-4a1568a1a6db" (UID: "4477d45d-e69a-4c74-b676-4a1568a1a6db"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.551426 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-scripts" (OuterVolumeSpecName: "scripts") pod "4477d45d-e69a-4c74-b676-4a1568a1a6db" (UID: "4477d45d-e69a-4c74-b676-4a1568a1a6db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.552616 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4477d45d-e69a-4c74-b676-4a1568a1a6db-kube-api-access-v5pff" (OuterVolumeSpecName: "kube-api-access-v5pff") pod "4477d45d-e69a-4c74-b676-4a1568a1a6db" (UID: "4477d45d-e69a-4c74-b676-4a1568a1a6db"). InnerVolumeSpecName "kube-api-access-v5pff". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.553073 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4477d45d-e69a-4c74-b676-4a1568a1a6db" (UID: "4477d45d-e69a-4c74-b676-4a1568a1a6db"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.571626 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4477d45d-e69a-4c74-b676-4a1568a1a6db" (UID: "4477d45d-e69a-4c74-b676-4a1568a1a6db"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.606881 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-config-data" (OuterVolumeSpecName: "config-data") pod "4477d45d-e69a-4c74-b676-4a1568a1a6db" (UID: "4477d45d-e69a-4c74-b676-4a1568a1a6db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.648036 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.648073 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.648087 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.648102 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4477d45d-e69a-4c74-b676-4a1568a1a6db-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.648114 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5pff\" (UniqueName: \"kubernetes.io/projected/4477d45d-e69a-4c74-b676-4a1568a1a6db-kube-api-access-v5pff\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.648128 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4477d45d-e69a-4c74-b676-4a1568a1a6db-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.817942 4958 generic.go:334] "Generic (PLEG): container finished" podID="4477d45d-e69a-4c74-b676-4a1568a1a6db" containerID="94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89" exitCode=0 Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.818006 4958 generic.go:334] "Generic (PLEG): container finished" podID="4477d45d-e69a-4c74-b676-4a1568a1a6db" containerID="9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730" exitCode=143 Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.818055 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4477d45d-e69a-4c74-b676-4a1568a1a6db","Type":"ContainerDied","Data":"94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89"} Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.818098 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4477d45d-e69a-4c74-b676-4a1568a1a6db","Type":"ContainerDied","Data":"9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730"} Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.818112 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4477d45d-e69a-4c74-b676-4a1568a1a6db","Type":"ContainerDied","Data":"39816e54f19257b9d9300f264f69e2dff0526101f7e50f4714c669a0dda45b41"} Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.818132 4958 scope.go:117] "RemoveContainer" containerID="94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.819129 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.819186 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.866686 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.872762 4958 scope.go:117] "RemoveContainer" containerID="9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.891928 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.924191 4958 scope.go:117] "RemoveContainer" containerID="94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89" Oct 08 06:53:27 crc kubenswrapper[4958]: E1008 06:53:27.925871 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89\": container with ID starting with 94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89 not found: ID does not exist" containerID="94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.925951 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89"} err="failed to get container status \"94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89\": rpc error: code = NotFound desc = could not find container \"94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89\": container with ID starting with 94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89 not found: ID does not exist" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.925998 4958 scope.go:117] "RemoveContainer" containerID="9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730" Oct 08 06:53:27 crc kubenswrapper[4958]: E1008 
06:53:27.926324 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730\": container with ID starting with 9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730 not found: ID does not exist" containerID="9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.926361 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730"} err="failed to get container status \"9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730\": rpc error: code = NotFound desc = could not find container \"9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730\": container with ID starting with 9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730 not found: ID does not exist" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.926384 4958 scope.go:117] "RemoveContainer" containerID="94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.926636 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89"} err="failed to get container status \"94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89\": rpc error: code = NotFound desc = could not find container \"94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89\": container with ID starting with 94d4d0408c27ab4f710dd4c35502a9e8750ea398a4770e84e070630a381e1b89 not found: ID does not exist" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.926674 4958 scope.go:117] "RemoveContainer" containerID="9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730" Oct 08 06:53:27 crc 
kubenswrapper[4958]: I1008 06:53:27.927014 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730"} err="failed to get container status \"9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730\": rpc error: code = NotFound desc = could not find container \"9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730\": container with ID starting with 9180ed04324a19bd6dfaf64aed5ac50dfef5481ba59830ba61c7e5b9dc462730 not found: ID does not exist" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.927096 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 08 06:53:27 crc kubenswrapper[4958]: E1008 06:53:27.927779 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4477d45d-e69a-4c74-b676-4a1568a1a6db" containerName="cinder-api" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.927798 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4477d45d-e69a-4c74-b676-4a1568a1a6db" containerName="cinder-api" Oct 08 06:53:27 crc kubenswrapper[4958]: E1008 06:53:27.927857 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4477d45d-e69a-4c74-b676-4a1568a1a6db" containerName="cinder-api-log" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.927864 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4477d45d-e69a-4c74-b676-4a1568a1a6db" containerName="cinder-api-log" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.928255 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4477d45d-e69a-4c74-b676-4a1568a1a6db" containerName="cinder-api" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.928285 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4477d45d-e69a-4c74-b676-4a1568a1a6db" containerName="cinder-api-log" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.929855 4958 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.940918 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.941760 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.941932 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.942358 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.955566 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.955633 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.955685 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.955704 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2ac8efb-5e1d-4b4c-beba-7d287a699044-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.955730 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v496q\" (UniqueName: \"kubernetes.io/projected/b2ac8efb-5e1d-4b4c-beba-7d287a699044-kube-api-access-v496q\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.955749 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-config-data\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.955779 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-scripts\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.955798 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2ac8efb-5e1d-4b4c-beba-7d287a699044-logs\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:27 crc kubenswrapper[4958]: I1008 06:53:27.955822 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-config-data-custom\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.058457 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.058568 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.058673 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2ac8efb-5e1d-4b4c-beba-7d287a699044-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.058731 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v496q\" (UniqueName: \"kubernetes.io/projected/b2ac8efb-5e1d-4b4c-beba-7d287a699044-kube-api-access-v496q\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.058774 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-config-data\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc 
kubenswrapper[4958]: I1008 06:53:28.058849 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-scripts\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.058886 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2ac8efb-5e1d-4b4c-beba-7d287a699044-logs\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.058938 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-config-data-custom\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.059110 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.060382 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2ac8efb-5e1d-4b4c-beba-7d287a699044-logs\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.060835 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2ac8efb-5e1d-4b4c-beba-7d287a699044-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.066530 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.070422 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-config-data\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.071888 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-scripts\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.073761 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.075459 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-config-data-custom\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.077645 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.091236 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v496q\" (UniqueName: \"kubernetes.io/projected/b2ac8efb-5e1d-4b4c-beba-7d287a699044-kube-api-access-v496q\") pod \"cinder-api-0\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.232564 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.283515 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 08 06:53:28 crc kubenswrapper[4958]: W1008 06:53:28.808407 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2ac8efb_5e1d_4b4c_beba_7d287a699044.slice/crio-4bcc3d48d5eaf51115d6c76633d0e9f1325d758b2beef62c6bf04e4aa468c0cc WatchSource:0}: Error finding container 4bcc3d48d5eaf51115d6c76633d0e9f1325d758b2beef62c6bf04e4aa468c0cc: Status 404 returned error can't find the container with id 4bcc3d48d5eaf51115d6c76633d0e9f1325d758b2beef62c6bf04e4aa468c0cc Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.815742 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 08 06:53:28 crc kubenswrapper[4958]: I1008 06:53:28.830070 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2ac8efb-5e1d-4b4c-beba-7d287a699044","Type":"ContainerStarted","Data":"4bcc3d48d5eaf51115d6c76633d0e9f1325d758b2beef62c6bf04e4aa468c0cc"} Oct 08 06:53:29 crc kubenswrapper[4958]: I1008 06:53:29.590031 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="4477d45d-e69a-4c74-b676-4a1568a1a6db" path="/var/lib/kubelet/pods/4477d45d-e69a-4c74-b676-4a1568a1a6db/volumes" Oct 08 06:53:29 crc kubenswrapper[4958]: I1008 06:53:29.717435 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:53:29 crc kubenswrapper[4958]: I1008 06:53:29.843495 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2ac8efb-5e1d-4b4c-beba-7d287a699044","Type":"ContainerStarted","Data":"1ad480ebf1447ef4dce8f921ab07bf404169d88c3b6f87c0b44604976474fd46"} Oct 08 06:53:30 crc kubenswrapper[4958]: I1008 06:53:30.853451 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2ac8efb-5e1d-4b4c-beba-7d287a699044","Type":"ContainerStarted","Data":"ab70053ebb2c4509de5852c1600f0da9043c9b356dde235137ef97c1e51aebf6"} Oct 08 06:53:30 crc kubenswrapper[4958]: I1008 06:53:30.853809 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 08 06:53:30 crc kubenswrapper[4958]: I1008 06:53:30.880818 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.880799928 podStartE2EDuration="3.880799928s" podCreationTimestamp="2025-10-08 06:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:30.87644199 +0000 UTC m=+1154.006134591" watchObservedRunningTime="2025-10-08 06:53:30.880799928 +0000 UTC m=+1154.010492529" Oct 08 06:53:31 crc kubenswrapper[4958]: I1008 06:53:31.180131 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cdf478498-ptdth" Oct 08 06:53:31 crc kubenswrapper[4958]: I1008 06:53:31.317534 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cdf478498-ptdth" Oct 08 06:53:31 
crc kubenswrapper[4958]: I1008 06:53:31.374853 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-67c44c8db8-5fbxd"] Oct 08 06:53:31 crc kubenswrapper[4958]: I1008 06:53:31.376150 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-67c44c8db8-5fbxd" podUID="ba5740e5-447f-4104-a07d-bc0e7092d962" containerName="barbican-api-log" containerID="cri-o://3fa90dad8a7721798b2697fcc6a430121bab915e5b47d28f33472097cb2d2201" gracePeriod=30 Oct 08 06:53:31 crc kubenswrapper[4958]: I1008 06:53:31.376286 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-67c44c8db8-5fbxd" podUID="ba5740e5-447f-4104-a07d-bc0e7092d962" containerName="barbican-api" containerID="cri-o://b6c7d3a811f444cf71be03e845aacb115c373eec9e132255314c4364810bcbd9" gracePeriod=30 Oct 08 06:53:31 crc kubenswrapper[4958]: I1008 06:53:31.548028 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:53:31 crc kubenswrapper[4958]: I1008 06:53:31.613596 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59fbcb7b56-6l4nd"] Oct 08 06:53:31 crc kubenswrapper[4958]: I1008 06:53:31.613805 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59fbcb7b56-6l4nd" podUID="db18904a-8657-440d-ad39-60d3bb7907c3" containerName="neutron-api" containerID="cri-o://d0cc28d4790c4c0860f14e72ecd3d5a93c02b076a73ffb7554b804dd7bcb1b95" gracePeriod=30 Oct 08 06:53:31 crc kubenswrapper[4958]: I1008 06:53:31.614236 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59fbcb7b56-6l4nd" podUID="db18904a-8657-440d-ad39-60d3bb7907c3" containerName="neutron-httpd" containerID="cri-o://7ae853c3c14d48e5ea5ea57fa011c04cf97badf862157908056c61281c0f50a9" gracePeriod=30 Oct 08 06:53:31 crc kubenswrapper[4958]: I1008 06:53:31.865026 4958 
generic.go:334] "Generic (PLEG): container finished" podID="db18904a-8657-440d-ad39-60d3bb7907c3" containerID="7ae853c3c14d48e5ea5ea57fa011c04cf97badf862157908056c61281c0f50a9" exitCode=0 Oct 08 06:53:31 crc kubenswrapper[4958]: I1008 06:53:31.865088 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59fbcb7b56-6l4nd" event={"ID":"db18904a-8657-440d-ad39-60d3bb7907c3","Type":"ContainerDied","Data":"7ae853c3c14d48e5ea5ea57fa011c04cf97badf862157908056c61281c0f50a9"} Oct 08 06:53:31 crc kubenswrapper[4958]: I1008 06:53:31.871143 4958 generic.go:334] "Generic (PLEG): container finished" podID="ba5740e5-447f-4104-a07d-bc0e7092d962" containerID="3fa90dad8a7721798b2697fcc6a430121bab915e5b47d28f33472097cb2d2201" exitCode=143 Oct 08 06:53:31 crc kubenswrapper[4958]: I1008 06:53:31.871160 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67c44c8db8-5fbxd" event={"ID":"ba5740e5-447f-4104-a07d-bc0e7092d962","Type":"ContainerDied","Data":"3fa90dad8a7721798b2697fcc6a430121bab915e5b47d28f33472097cb2d2201"} Oct 08 06:53:33 crc kubenswrapper[4958]: I1008 06:53:33.386246 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:53:33 crc kubenswrapper[4958]: I1008 06:53:33.459874 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 08 06:53:33 crc kubenswrapper[4958]: I1008 06:53:33.528520 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-n44pc"] Oct 08 06:53:33 crc kubenswrapper[4958]: I1008 06:53:33.528920 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c78787df7-n44pc" podUID="607fcdde-da07-446c-875b-946ab8ec617e" containerName="dnsmasq-dns" containerID="cri-o://73006da58f9127ee328a44c224b98063089605f5570b8c21d85268a25a690db7" gracePeriod=10 Oct 08 06:53:33 crc kubenswrapper[4958]: I1008 
06:53:33.556372 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 06:53:33 crc kubenswrapper[4958]: I1008 06:53:33.891349 4958 generic.go:334] "Generic (PLEG): container finished" podID="607fcdde-da07-446c-875b-946ab8ec617e" containerID="73006da58f9127ee328a44c224b98063089605f5570b8c21d85268a25a690db7" exitCode=0 Oct 08 06:53:33 crc kubenswrapper[4958]: I1008 06:53:33.891400 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c78787df7-n44pc" event={"ID":"607fcdde-da07-446c-875b-946ab8ec617e","Type":"ContainerDied","Data":"73006da58f9127ee328a44c224b98063089605f5570b8c21d85268a25a690db7"} Oct 08 06:53:33 crc kubenswrapper[4958]: I1008 06:53:33.892938 4958 generic.go:334] "Generic (PLEG): container finished" podID="db18904a-8657-440d-ad39-60d3bb7907c3" containerID="d0cc28d4790c4c0860f14e72ecd3d5a93c02b076a73ffb7554b804dd7bcb1b95" exitCode=0 Oct 08 06:53:33 crc kubenswrapper[4958]: I1008 06:53:33.893005 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59fbcb7b56-6l4nd" event={"ID":"db18904a-8657-440d-ad39-60d3bb7907c3","Type":"ContainerDied","Data":"d0cc28d4790c4c0860f14e72ecd3d5a93c02b076a73ffb7554b804dd7bcb1b95"} Oct 08 06:53:33 crc kubenswrapper[4958]: I1008 06:53:33.893171 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6f6f8889-0d97-4303-a8b7-e0845c79fecf" containerName="cinder-scheduler" containerID="cri-o://c6de18de91080d77e1563f975e44c048346b45a87542957a1e2a041293194f87" gracePeriod=30 Oct 08 06:53:33 crc kubenswrapper[4958]: I1008 06:53:33.893248 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6f6f8889-0d97-4303-a8b7-e0845c79fecf" containerName="probe" containerID="cri-o://afd00549d126f0d6dee7ba385e0de50d86cf163b0595797f031e1e3bfd1f07bc" gracePeriod=30 Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.051991 
4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.064376 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.184713 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-config\") pod \"607fcdde-da07-446c-875b-946ab8ec617e\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.184767 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-ovndb-tls-certs\") pod \"db18904a-8657-440d-ad39-60d3bb7907c3\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.184845 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-dns-svc\") pod \"607fcdde-da07-446c-875b-946ab8ec617e\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.184921 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-combined-ca-bundle\") pod \"db18904a-8657-440d-ad39-60d3bb7907c3\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.184947 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-ovsdbserver-sb\") pod 
\"607fcdde-da07-446c-875b-946ab8ec617e\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.185020 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-ovsdbserver-nb\") pod \"607fcdde-da07-446c-875b-946ab8ec617e\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.185070 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d7fb\" (UniqueName: \"kubernetes.io/projected/607fcdde-da07-446c-875b-946ab8ec617e-kube-api-access-4d7fb\") pod \"607fcdde-da07-446c-875b-946ab8ec617e\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.185101 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-httpd-config\") pod \"db18904a-8657-440d-ad39-60d3bb7907c3\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.185151 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-config\") pod \"db18904a-8657-440d-ad39-60d3bb7907c3\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.185171 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zz7c\" (UniqueName: \"kubernetes.io/projected/db18904a-8657-440d-ad39-60d3bb7907c3-kube-api-access-9zz7c\") pod \"db18904a-8657-440d-ad39-60d3bb7907c3\" (UID: \"db18904a-8657-440d-ad39-60d3bb7907c3\") " Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.185199 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-dns-swift-storage-0\") pod \"607fcdde-da07-446c-875b-946ab8ec617e\" (UID: \"607fcdde-da07-446c-875b-946ab8ec617e\") " Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.191473 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/607fcdde-da07-446c-875b-946ab8ec617e-kube-api-access-4d7fb" (OuterVolumeSpecName: "kube-api-access-4d7fb") pod "607fcdde-da07-446c-875b-946ab8ec617e" (UID: "607fcdde-da07-446c-875b-946ab8ec617e"). InnerVolumeSpecName "kube-api-access-4d7fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.191733 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "db18904a-8657-440d-ad39-60d3bb7907c3" (UID: "db18904a-8657-440d-ad39-60d3bb7907c3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.200688 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db18904a-8657-440d-ad39-60d3bb7907c3-kube-api-access-9zz7c" (OuterVolumeSpecName: "kube-api-access-9zz7c") pod "db18904a-8657-440d-ad39-60d3bb7907c3" (UID: "db18904a-8657-440d-ad39-60d3bb7907c3"). InnerVolumeSpecName "kube-api-access-9zz7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.236317 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-config" (OuterVolumeSpecName: "config") pod "607fcdde-da07-446c-875b-946ab8ec617e" (UID: "607fcdde-da07-446c-875b-946ab8ec617e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.239302 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "607fcdde-da07-446c-875b-946ab8ec617e" (UID: "607fcdde-da07-446c-875b-946ab8ec617e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.241679 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "607fcdde-da07-446c-875b-946ab8ec617e" (UID: "607fcdde-da07-446c-875b-946ab8ec617e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.243473 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "607fcdde-da07-446c-875b-946ab8ec617e" (UID: "607fcdde-da07-446c-875b-946ab8ec617e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.249458 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "607fcdde-da07-446c-875b-946ab8ec617e" (UID: "607fcdde-da07-446c-875b-946ab8ec617e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.251401 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-config" (OuterVolumeSpecName: "config") pod "db18904a-8657-440d-ad39-60d3bb7907c3" (UID: "db18904a-8657-440d-ad39-60d3bb7907c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.262276 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db18904a-8657-440d-ad39-60d3bb7907c3" (UID: "db18904a-8657-440d-ad39-60d3bb7907c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.274297 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "db18904a-8657-440d-ad39-60d3bb7907c3" (UID: "db18904a-8657-440d-ad39-60d3bb7907c3"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.288093 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.288226 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d7fb\" (UniqueName: \"kubernetes.io/projected/607fcdde-da07-446c-875b-946ab8ec617e-kube-api-access-4d7fb\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.288335 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.288411 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.288519 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zz7c\" (UniqueName: \"kubernetes.io/projected/db18904a-8657-440d-ad39-60d3bb7907c3-kube-api-access-9zz7c\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.288578 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.288683 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.288762 4958 
reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.288822 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.288886 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db18904a-8657-440d-ad39-60d3bb7907c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.288989 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/607fcdde-da07-446c-875b-946ab8ec617e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.542417 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-67c44c8db8-5fbxd" podUID="ba5740e5-447f-4104-a07d-bc0e7092d962" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:36058->10.217.0.161:9311: read: connection reset by peer" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.542455 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-67c44c8db8-5fbxd" podUID="ba5740e5-447f-4104-a07d-bc0e7092d962" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:36072->10.217.0.161:9311: read: connection reset by peer" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.904111 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59fbcb7b56-6l4nd" 
event={"ID":"db18904a-8657-440d-ad39-60d3bb7907c3","Type":"ContainerDied","Data":"2fee8e0914ab36e7c347281f5b6c1bc65920db980326ca75145908beaf0cd4e8"} Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.904145 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59fbcb7b56-6l4nd" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.904375 4958 scope.go:117] "RemoveContainer" containerID="7ae853c3c14d48e5ea5ea57fa011c04cf97badf862157908056c61281c0f50a9" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.906831 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c78787df7-n44pc" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.907234 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c78787df7-n44pc" event={"ID":"607fcdde-da07-446c-875b-946ab8ec617e","Type":"ContainerDied","Data":"61820470c263ecf953bb27165da269631474929b13863df64d73782fe3bfb862"} Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.910111 4958 generic.go:334] "Generic (PLEG): container finished" podID="ba5740e5-447f-4104-a07d-bc0e7092d962" containerID="b6c7d3a811f444cf71be03e845aacb115c373eec9e132255314c4364810bcbd9" exitCode=0 Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.910177 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67c44c8db8-5fbxd" event={"ID":"ba5740e5-447f-4104-a07d-bc0e7092d962","Type":"ContainerDied","Data":"b6c7d3a811f444cf71be03e845aacb115c373eec9e132255314c4364810bcbd9"} Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.910202 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67c44c8db8-5fbxd" event={"ID":"ba5740e5-447f-4104-a07d-bc0e7092d962","Type":"ContainerDied","Data":"69efbabc1dfd267c0d7119fcfe4bd6725af9ecc613b4a6aa42e217099d0d1823"} Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.910213 4958 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="69efbabc1dfd267c0d7119fcfe4bd6725af9ecc613b4a6aa42e217099d0d1823" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.911928 4958 generic.go:334] "Generic (PLEG): container finished" podID="6f6f8889-0d97-4303-a8b7-e0845c79fecf" containerID="afd00549d126f0d6dee7ba385e0de50d86cf163b0595797f031e1e3bfd1f07bc" exitCode=0 Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.911991 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f6f8889-0d97-4303-a8b7-e0845c79fecf","Type":"ContainerDied","Data":"afd00549d126f0d6dee7ba385e0de50d86cf163b0595797f031e1e3bfd1f07bc"} Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.979372 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:34 crc kubenswrapper[4958]: I1008 06:53:34.997815 4958 scope.go:117] "RemoveContainer" containerID="d0cc28d4790c4c0860f14e72ecd3d5a93c02b076a73ffb7554b804dd7bcb1b95" Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.000347 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59fbcb7b56-6l4nd"] Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.016178 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59fbcb7b56-6l4nd"] Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.026709 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-n44pc"] Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.037099 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c78787df7-n44pc"] Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.038248 4958 scope.go:117] "RemoveContainer" containerID="73006da58f9127ee328a44c224b98063089605f5570b8c21d85268a25a690db7" Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.066393 4958 scope.go:117] "RemoveContainer" 
containerID="2dda4ec602a66c837e0ea09c0e3f61d48133fb86251aa643c29f18244f03beb2" Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.101766 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-config-data-custom\") pod \"ba5740e5-447f-4104-a07d-bc0e7092d962\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.101834 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-combined-ca-bundle\") pod \"ba5740e5-447f-4104-a07d-bc0e7092d962\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.101892 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkvw4\" (UniqueName: \"kubernetes.io/projected/ba5740e5-447f-4104-a07d-bc0e7092d962-kube-api-access-hkvw4\") pod \"ba5740e5-447f-4104-a07d-bc0e7092d962\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.102623 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba5740e5-447f-4104-a07d-bc0e7092d962-logs\") pod \"ba5740e5-447f-4104-a07d-bc0e7092d962\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.103009 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba5740e5-447f-4104-a07d-bc0e7092d962-logs" (OuterVolumeSpecName: "logs") pod "ba5740e5-447f-4104-a07d-bc0e7092d962" (UID: "ba5740e5-447f-4104-a07d-bc0e7092d962"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.103079 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-config-data\") pod \"ba5740e5-447f-4104-a07d-bc0e7092d962\" (UID: \"ba5740e5-447f-4104-a07d-bc0e7092d962\") " Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.103766 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba5740e5-447f-4104-a07d-bc0e7092d962-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.107441 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ba5740e5-447f-4104-a07d-bc0e7092d962" (UID: "ba5740e5-447f-4104-a07d-bc0e7092d962"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.107447 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5740e5-447f-4104-a07d-bc0e7092d962-kube-api-access-hkvw4" (OuterVolumeSpecName: "kube-api-access-hkvw4") pod "ba5740e5-447f-4104-a07d-bc0e7092d962" (UID: "ba5740e5-447f-4104-a07d-bc0e7092d962"). InnerVolumeSpecName "kube-api-access-hkvw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.133128 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba5740e5-447f-4104-a07d-bc0e7092d962" (UID: "ba5740e5-447f-4104-a07d-bc0e7092d962"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.154128 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-config-data" (OuterVolumeSpecName: "config-data") pod "ba5740e5-447f-4104-a07d-bc0e7092d962" (UID: "ba5740e5-447f-4104-a07d-bc0e7092d962"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.205408 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.205458 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkvw4\" (UniqueName: \"kubernetes.io/projected/ba5740e5-447f-4104-a07d-bc0e7092d962-kube-api-access-hkvw4\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.205469 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.205477 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba5740e5-447f-4104-a07d-bc0e7092d962-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.588218 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="607fcdde-da07-446c-875b-946ab8ec617e" path="/var/lib/kubelet/pods/607fcdde-da07-446c-875b-946ab8ec617e/volumes" Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.588870 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db18904a-8657-440d-ad39-60d3bb7907c3" 
path="/var/lib/kubelet/pods/db18904a-8657-440d-ad39-60d3bb7907c3/volumes" Oct 08 06:53:35 crc kubenswrapper[4958]: I1008 06:53:35.931463 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-67c44c8db8-5fbxd" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.129363 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-67c44c8db8-5fbxd"] Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.146376 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-67c44c8db8-5fbxd"] Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.513315 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.632661 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88w6n\" (UniqueName: \"kubernetes.io/projected/6f6f8889-0d97-4303-a8b7-e0845c79fecf-kube-api-access-88w6n\") pod \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.632744 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-scripts\") pod \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.633755 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-combined-ca-bundle\") pod \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.633839 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f6f8889-0d97-4303-a8b7-e0845c79fecf-etc-machine-id\") pod \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.633920 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-config-data-custom\") pod \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.634006 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f6f8889-0d97-4303-a8b7-e0845c79fecf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6f6f8889-0d97-4303-a8b7-e0845c79fecf" (UID: "6f6f8889-0d97-4303-a8b7-e0845c79fecf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.634067 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-config-data\") pod \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\" (UID: \"6f6f8889-0d97-4303-a8b7-e0845c79fecf\") " Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.634778 4958 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f6f8889-0d97-4303-a8b7-e0845c79fecf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.639066 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6f6f8889-0d97-4303-a8b7-e0845c79fecf" (UID: "6f6f8889-0d97-4303-a8b7-e0845c79fecf"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.639467 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-scripts" (OuterVolumeSpecName: "scripts") pod "6f6f8889-0d97-4303-a8b7-e0845c79fecf" (UID: "6f6f8889-0d97-4303-a8b7-e0845c79fecf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.647283 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6f8889-0d97-4303-a8b7-e0845c79fecf-kube-api-access-88w6n" (OuterVolumeSpecName: "kube-api-access-88w6n") pod "6f6f8889-0d97-4303-a8b7-e0845c79fecf" (UID: "6f6f8889-0d97-4303-a8b7-e0845c79fecf"). InnerVolumeSpecName "kube-api-access-88w6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.704553 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f6f8889-0d97-4303-a8b7-e0845c79fecf" (UID: "6f6f8889-0d97-4303-a8b7-e0845c79fecf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.737196 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.737235 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88w6n\" (UniqueName: \"kubernetes.io/projected/6f6f8889-0d97-4303-a8b7-e0845c79fecf-kube-api-access-88w6n\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.737252 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.737264 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.772100 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-config-data" (OuterVolumeSpecName: "config-data") pod "6f6f8889-0d97-4303-a8b7-e0845c79fecf" (UID: "6f6f8889-0d97-4303-a8b7-e0845c79fecf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.839411 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6f8889-0d97-4303-a8b7-e0845c79fecf-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.939269 4958 generic.go:334] "Generic (PLEG): container finished" podID="6f6f8889-0d97-4303-a8b7-e0845c79fecf" containerID="c6de18de91080d77e1563f975e44c048346b45a87542957a1e2a041293194f87" exitCode=0 Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.939313 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f6f8889-0d97-4303-a8b7-e0845c79fecf","Type":"ContainerDied","Data":"c6de18de91080d77e1563f975e44c048346b45a87542957a1e2a041293194f87"} Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.939313 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.939340 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f6f8889-0d97-4303-a8b7-e0845c79fecf","Type":"ContainerDied","Data":"9cbb2d969f219b45d318365a8f19e05c843e76c3c3767deb6d52f53869d4b631"} Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.939357 4958 scope.go:117] "RemoveContainer" containerID="afd00549d126f0d6dee7ba385e0de50d86cf163b0595797f031e1e3bfd1f07bc" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.964747 4958 scope.go:117] "RemoveContainer" containerID="c6de18de91080d77e1563f975e44c048346b45a87542957a1e2a041293194f87" Oct 08 06:53:36 crc kubenswrapper[4958]: I1008 06:53:36.978372 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.006128 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-scheduler-0"] Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.027893 4958 scope.go:117] "RemoveContainer" containerID="afd00549d126f0d6dee7ba385e0de50d86cf163b0595797f031e1e3bfd1f07bc" Oct 08 06:53:37 crc kubenswrapper[4958]: E1008 06:53:37.032541 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afd00549d126f0d6dee7ba385e0de50d86cf163b0595797f031e1e3bfd1f07bc\": container with ID starting with afd00549d126f0d6dee7ba385e0de50d86cf163b0595797f031e1e3bfd1f07bc not found: ID does not exist" containerID="afd00549d126f0d6dee7ba385e0de50d86cf163b0595797f031e1e3bfd1f07bc" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.032596 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd00549d126f0d6dee7ba385e0de50d86cf163b0595797f031e1e3bfd1f07bc"} err="failed to get container status \"afd00549d126f0d6dee7ba385e0de50d86cf163b0595797f031e1e3bfd1f07bc\": rpc error: code = NotFound desc = could not find container \"afd00549d126f0d6dee7ba385e0de50d86cf163b0595797f031e1e3bfd1f07bc\": container with ID starting with afd00549d126f0d6dee7ba385e0de50d86cf163b0595797f031e1e3bfd1f07bc not found: ID does not exist" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.032629 4958 scope.go:117] "RemoveContainer" containerID="c6de18de91080d77e1563f975e44c048346b45a87542957a1e2a041293194f87" Oct 08 06:53:37 crc kubenswrapper[4958]: E1008 06:53:37.035627 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6de18de91080d77e1563f975e44c048346b45a87542957a1e2a041293194f87\": container with ID starting with c6de18de91080d77e1563f975e44c048346b45a87542957a1e2a041293194f87 not found: ID does not exist" containerID="c6de18de91080d77e1563f975e44c048346b45a87542957a1e2a041293194f87" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.035672 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6de18de91080d77e1563f975e44c048346b45a87542957a1e2a041293194f87"} err="failed to get container status \"c6de18de91080d77e1563f975e44c048346b45a87542957a1e2a041293194f87\": rpc error: code = NotFound desc = could not find container \"c6de18de91080d77e1563f975e44c048346b45a87542957a1e2a041293194f87\": container with ID starting with c6de18de91080d77e1563f975e44c048346b45a87542957a1e2a041293194f87 not found: ID does not exist" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.051582 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 06:53:37 crc kubenswrapper[4958]: E1008 06:53:37.052092 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5740e5-447f-4104-a07d-bc0e7092d962" containerName="barbican-api-log" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.052114 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5740e5-447f-4104-a07d-bc0e7092d962" containerName="barbican-api-log" Oct 08 06:53:37 crc kubenswrapper[4958]: E1008 06:53:37.052137 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db18904a-8657-440d-ad39-60d3bb7907c3" containerName="neutron-api" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.052146 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="db18904a-8657-440d-ad39-60d3bb7907c3" containerName="neutron-api" Oct 08 06:53:37 crc kubenswrapper[4958]: E1008 06:53:37.052159 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5740e5-447f-4104-a07d-bc0e7092d962" containerName="barbican-api" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.052165 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5740e5-447f-4104-a07d-bc0e7092d962" containerName="barbican-api" Oct 08 06:53:37 crc kubenswrapper[4958]: E1008 06:53:37.052174 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="607fcdde-da07-446c-875b-946ab8ec617e" containerName="init" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.052181 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="607fcdde-da07-446c-875b-946ab8ec617e" containerName="init" Oct 08 06:53:37 crc kubenswrapper[4958]: E1008 06:53:37.052201 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db18904a-8657-440d-ad39-60d3bb7907c3" containerName="neutron-httpd" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.052209 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="db18904a-8657-440d-ad39-60d3bb7907c3" containerName="neutron-httpd" Oct 08 06:53:37 crc kubenswrapper[4958]: E1008 06:53:37.052223 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607fcdde-da07-446c-875b-946ab8ec617e" containerName="dnsmasq-dns" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.052231 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="607fcdde-da07-446c-875b-946ab8ec617e" containerName="dnsmasq-dns" Oct 08 06:53:37 crc kubenswrapper[4958]: E1008 06:53:37.052249 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6f8889-0d97-4303-a8b7-e0845c79fecf" containerName="probe" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.052257 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6f8889-0d97-4303-a8b7-e0845c79fecf" containerName="probe" Oct 08 06:53:37 crc kubenswrapper[4958]: E1008 06:53:37.052284 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6f8889-0d97-4303-a8b7-e0845c79fecf" containerName="cinder-scheduler" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.052292 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6f8889-0d97-4303-a8b7-e0845c79fecf" containerName="cinder-scheduler" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.052529 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="607fcdde-da07-446c-875b-946ab8ec617e" 
containerName="dnsmasq-dns" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.052540 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="db18904a-8657-440d-ad39-60d3bb7907c3" containerName="neutron-httpd" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.052563 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5740e5-447f-4104-a07d-bc0e7092d962" containerName="barbican-api" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.052577 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6f8889-0d97-4303-a8b7-e0845c79fecf" containerName="cinder-scheduler" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.052587 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6f8889-0d97-4303-a8b7-e0845c79fecf" containerName="probe" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.052603 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5740e5-447f-4104-a07d-bc0e7092d962" containerName="barbican-api-log" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.052612 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="db18904a-8657-440d-ad39-60d3bb7907c3" containerName="neutron-api" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.053836 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.055675 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.057648 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.245658 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.245740 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-scripts\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.245890 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq6rm\" (UniqueName: \"kubernetes.io/projected/23d77fbe-4e70-428d-92b3-926fb7f5547e-kube-api-access-mq6rm\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.246220 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-config-data\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.246277 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.246352 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23d77fbe-4e70-428d-92b3-926fb7f5547e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.348132 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.348224 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-scripts\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.348272 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq6rm\" (UniqueName: \"kubernetes.io/projected/23d77fbe-4e70-428d-92b3-926fb7f5547e-kube-api-access-mq6rm\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.348378 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-config-data\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.348406 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.348436 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23d77fbe-4e70-428d-92b3-926fb7f5547e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.348520 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23d77fbe-4e70-428d-92b3-926fb7f5547e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.351639 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.351757 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-scripts\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " 
pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.354451 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.355017 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-config-data\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.373435 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq6rm\" (UniqueName: \"kubernetes.io/projected/23d77fbe-4e70-428d-92b3-926fb7f5547e-kube-api-access-mq6rm\") pod \"cinder-scheduler-0\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.380005 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.605671 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6f8889-0d97-4303-a8b7-e0845c79fecf" path="/var/lib/kubelet/pods/6f6f8889-0d97-4303-a8b7-e0845c79fecf/volumes" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.608083 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba5740e5-447f-4104-a07d-bc0e7092d962" path="/var/lib/kubelet/pods/ba5740e5-447f-4104-a07d-bc0e7092d962/volumes" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.853820 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.857908 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.858670 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-687454697b-jdsn4" Oct 08 06:53:37 crc kubenswrapper[4958]: I1008 06:53:37.964627 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"23d77fbe-4e70-428d-92b3-926fb7f5547e","Type":"ContainerStarted","Data":"0bb5e4e05ddeaf8566136b433b64a215bc1eae669b7d8ff712733d9d7a635cff"} Oct 08 06:53:38 crc kubenswrapper[4958]: I1008 06:53:38.983083 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"23d77fbe-4e70-428d-92b3-926fb7f5547e","Type":"ContainerStarted","Data":"c02be257fd4e27e118cc5b47243544c38b48ad8ee8c4597abe7972db9dcf30bc"} Oct 08 06:53:39 crc kubenswrapper[4958]: I1008 06:53:39.981466 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 08 06:53:39 crc kubenswrapper[4958]: I1008 06:53:39.994086 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"23d77fbe-4e70-428d-92b3-926fb7f5547e","Type":"ContainerStarted","Data":"d0cab1c319b0805fcf376df64f2c0efa43e3a49ab93decdf99cdc592d551d40b"} Oct 08 06:53:40 crc kubenswrapper[4958]: I1008 06:53:40.050555 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.05053725 podStartE2EDuration="4.05053725s" podCreationTimestamp="2025-10-08 06:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:40.044159348 +0000 UTC m=+1163.173851949" watchObservedRunningTime="2025-10-08 06:53:40.05053725 +0000 UTC m=+1163.180229851" Oct 08 06:53:40 crc kubenswrapper[4958]: I1008 06:53:40.991933 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:53:42 crc kubenswrapper[4958]: I1008 06:53:42.381814 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.538178 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.540077 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.548685 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.548934 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.549669 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-jpkwh" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.557121 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.716572 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b9c2255-61f9-4319-93b1-138600df6985-openstack-config-secret\") pod \"openstackclient\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " pod="openstack/openstackclient" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.716644 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrppm\" (UniqueName: \"kubernetes.io/projected/4b9c2255-61f9-4319-93b1-138600df6985-kube-api-access-xrppm\") pod \"openstackclient\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " pod="openstack/openstackclient" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.716741 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9c2255-61f9-4319-93b1-138600df6985-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " pod="openstack/openstackclient" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.716834 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4b9c2255-61f9-4319-93b1-138600df6985-openstack-config\") pod \"openstackclient\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " pod="openstack/openstackclient" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.818857 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9c2255-61f9-4319-93b1-138600df6985-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " pod="openstack/openstackclient" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.818939 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4b9c2255-61f9-4319-93b1-138600df6985-openstack-config\") pod \"openstackclient\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " pod="openstack/openstackclient" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.819023 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b9c2255-61f9-4319-93b1-138600df6985-openstack-config-secret\") pod \"openstackclient\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " pod="openstack/openstackclient" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.819049 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrppm\" (UniqueName: \"kubernetes.io/projected/4b9c2255-61f9-4319-93b1-138600df6985-kube-api-access-xrppm\") pod \"openstackclient\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " pod="openstack/openstackclient" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.819854 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/4b9c2255-61f9-4319-93b1-138600df6985-openstack-config\") pod \"openstackclient\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " pod="openstack/openstackclient" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.825578 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9c2255-61f9-4319-93b1-138600df6985-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " pod="openstack/openstackclient" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.829569 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b9c2255-61f9-4319-93b1-138600df6985-openstack-config-secret\") pod \"openstackclient\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " pod="openstack/openstackclient" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.838965 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrppm\" (UniqueName: \"kubernetes.io/projected/4b9c2255-61f9-4319-93b1-138600df6985-kube-api-access-xrppm\") pod \"openstackclient\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " pod="openstack/openstackclient" Oct 08 06:53:44 crc kubenswrapper[4958]: I1008 06:53:44.873243 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 08 06:53:45 crc kubenswrapper[4958]: I1008 06:53:45.367400 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 08 06:53:45 crc kubenswrapper[4958]: W1008 06:53:45.386999 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b9c2255_61f9_4319_93b1_138600df6985.slice/crio-8d850371c13616560e292350ff69607abf6263593e0904d8ef81c2ec14393bdc WatchSource:0}: Error finding container 8d850371c13616560e292350ff69607abf6263593e0904d8ef81c2ec14393bdc: Status 404 returned error can't find the container with id 8d850371c13616560e292350ff69607abf6263593e0904d8ef81c2ec14393bdc Oct 08 06:53:46 crc kubenswrapper[4958]: I1008 06:53:46.056702 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4b9c2255-61f9-4319-93b1-138600df6985","Type":"ContainerStarted","Data":"8d850371c13616560e292350ff69607abf6263593e0904d8ef81c2ec14393bdc"} Oct 08 06:53:47 crc kubenswrapper[4958]: I1008 06:53:47.558625 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.378491 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-67799bdf69-qlb9s"] Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.380030 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.382003 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.382372 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.382553 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.394040 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-67799bdf69-qlb9s"] Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.500225 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f688514-2336-4067-bb66-8bc690a2da30-run-httpd\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.500332 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-public-tls-certs\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.500383 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-internal-tls-certs\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: 
I1008 06:53:48.500407 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f688514-2336-4067-bb66-8bc690a2da30-log-httpd\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.500433 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-combined-ca-bundle\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.500456 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgsc8\" (UniqueName: \"kubernetes.io/projected/1f688514-2336-4067-bb66-8bc690a2da30-kube-api-access-cgsc8\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.500504 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1f688514-2336-4067-bb66-8bc690a2da30-etc-swift\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.500531 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-config-data\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 
06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.601925 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f688514-2336-4067-bb66-8bc690a2da30-run-httpd\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.602011 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-public-tls-certs\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.602055 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-internal-tls-certs\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.602075 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f688514-2336-4067-bb66-8bc690a2da30-log-httpd\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.602102 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-combined-ca-bundle\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.602123 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgsc8\" (UniqueName: \"kubernetes.io/projected/1f688514-2336-4067-bb66-8bc690a2da30-kube-api-access-cgsc8\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.602201 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1f688514-2336-4067-bb66-8bc690a2da30-etc-swift\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.602228 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-config-data\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.602906 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f688514-2336-4067-bb66-8bc690a2da30-log-httpd\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.603475 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f688514-2336-4067-bb66-8bc690a2da30-run-httpd\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.609838 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/1f688514-2336-4067-bb66-8bc690a2da30-etc-swift\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.610546 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-public-tls-certs\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.622980 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-internal-tls-certs\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.625182 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-config-data\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.627606 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgsc8\" (UniqueName: \"kubernetes.io/projected/1f688514-2336-4067-bb66-8bc690a2da30-kube-api-access-cgsc8\") pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.635699 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-combined-ca-bundle\") 
pod \"swift-proxy-67799bdf69-qlb9s\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:48 crc kubenswrapper[4958]: I1008 06:53:48.700508 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:49 crc kubenswrapper[4958]: I1008 06:53:49.244197 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-67799bdf69-qlb9s"] Oct 08 06:53:52 crc kubenswrapper[4958]: I1008 06:53:52.475757 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 08 06:53:53 crc kubenswrapper[4958]: I1008 06:53:53.875045 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:53:53 crc kubenswrapper[4958]: I1008 06:53:53.875620 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="ceilometer-central-agent" containerID="cri-o://ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654" gracePeriod=30 Oct 08 06:53:53 crc kubenswrapper[4958]: I1008 06:53:53.875702 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="ceilometer-notification-agent" containerID="cri-o://4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642" gracePeriod=30 Oct 08 06:53:53 crc kubenswrapper[4958]: I1008 06:53:53.875713 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="sg-core" containerID="cri-o://d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326" gracePeriod=30 Oct 08 06:53:53 crc kubenswrapper[4958]: I1008 06:53:53.875909 4958 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="proxy-httpd" containerID="cri-o://a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312" gracePeriod=30 Oct 08 06:53:54 crc kubenswrapper[4958]: I1008 06:53:54.124051 4958 generic.go:334] "Generic (PLEG): container finished" podID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerID="a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312" exitCode=0 Oct 08 06:53:54 crc kubenswrapper[4958]: I1008 06:53:54.124079 4958 generic.go:334] "Generic (PLEG): container finished" podID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerID="d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326" exitCode=2 Oct 08 06:53:54 crc kubenswrapper[4958]: I1008 06:53:54.124115 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c88f30e-9d32-47da-b268-9bd7321f197d","Type":"ContainerDied","Data":"a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312"} Oct 08 06:53:54 crc kubenswrapper[4958]: I1008 06:53:54.124138 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c88f30e-9d32-47da-b268-9bd7321f197d","Type":"ContainerDied","Data":"d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326"} Oct 08 06:53:54 crc kubenswrapper[4958]: I1008 06:53:54.126692 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67799bdf69-qlb9s" event={"ID":"1f688514-2336-4067-bb66-8bc690a2da30","Type":"ContainerStarted","Data":"5941f782c9b06eb97b840f6a5808ec5a17227c3ae5cdac4fd17a32653d9882ab"} Oct 08 06:53:54 crc kubenswrapper[4958]: I1008 06:53:54.958862 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.131735 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-combined-ca-bundle\") pod \"9c88f30e-9d32-47da-b268-9bd7321f197d\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.131833 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-config-data\") pod \"9c88f30e-9d32-47da-b268-9bd7321f197d\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.131862 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-sg-core-conf-yaml\") pod \"9c88f30e-9d32-47da-b268-9bd7321f197d\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.131923 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vxp8\" (UniqueName: \"kubernetes.io/projected/9c88f30e-9d32-47da-b268-9bd7321f197d-kube-api-access-8vxp8\") pod \"9c88f30e-9d32-47da-b268-9bd7321f197d\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.132019 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c88f30e-9d32-47da-b268-9bd7321f197d-run-httpd\") pod \"9c88f30e-9d32-47da-b268-9bd7321f197d\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.132092 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-scripts\") pod \"9c88f30e-9d32-47da-b268-9bd7321f197d\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.132150 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c88f30e-9d32-47da-b268-9bd7321f197d-log-httpd\") pod \"9c88f30e-9d32-47da-b268-9bd7321f197d\" (UID: \"9c88f30e-9d32-47da-b268-9bd7321f197d\") " Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.133418 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c88f30e-9d32-47da-b268-9bd7321f197d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9c88f30e-9d32-47da-b268-9bd7321f197d" (UID: "9c88f30e-9d32-47da-b268-9bd7321f197d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.134307 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c88f30e-9d32-47da-b268-9bd7321f197d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9c88f30e-9d32-47da-b268-9bd7321f197d" (UID: "9c88f30e-9d32-47da-b268-9bd7321f197d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.138479 4958 generic.go:334] "Generic (PLEG): container finished" podID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerID="4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642" exitCode=0 Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.138509 4958 generic.go:334] "Generic (PLEG): container finished" podID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerID="ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654" exitCode=0 Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.138548 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c88f30e-9d32-47da-b268-9bd7321f197d","Type":"ContainerDied","Data":"4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642"} Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.138574 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c88f30e-9d32-47da-b268-9bd7321f197d","Type":"ContainerDied","Data":"ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654"} Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.138584 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c88f30e-9d32-47da-b268-9bd7321f197d","Type":"ContainerDied","Data":"1d6019f8fe60c99cc670b46cde580e94548bce4678020fed39b265acbba5be47"} Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.138598 4958 scope.go:117] "RemoveContainer" containerID="a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.138735 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.140582 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-scripts" (OuterVolumeSpecName: "scripts") pod "9c88f30e-9d32-47da-b268-9bd7321f197d" (UID: "9c88f30e-9d32-47da-b268-9bd7321f197d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.154158 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c88f30e-9d32-47da-b268-9bd7321f197d-kube-api-access-8vxp8" (OuterVolumeSpecName: "kube-api-access-8vxp8") pod "9c88f30e-9d32-47da-b268-9bd7321f197d" (UID: "9c88f30e-9d32-47da-b268-9bd7321f197d"). InnerVolumeSpecName "kube-api-access-8vxp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.154164 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67799bdf69-qlb9s" event={"ID":"1f688514-2336-4067-bb66-8bc690a2da30","Type":"ContainerStarted","Data":"6a833e161a2028e949de920ad2fcf2e95b29dc9406b67da751b879f857c083e8"} Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.154230 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67799bdf69-qlb9s" event={"ID":"1f688514-2336-4067-bb66-8bc690a2da30","Type":"ContainerStarted","Data":"20c3267044d4ea08ed1d508a80b0aeb04cc872da56823319bbdec02266bd4cf3"} Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.154278 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.154302 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.159657 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4b9c2255-61f9-4319-93b1-138600df6985","Type":"ContainerStarted","Data":"bcbeedc0e126fa86f86f6a65d8d01563364649a3df6ebc1fda56bb28dd49d272"} Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.185157 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-67799bdf69-qlb9s" podStartSLOduration=7.185139069 podStartE2EDuration="7.185139069s" podCreationTimestamp="2025-10-08 06:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:53:55.174479512 +0000 UTC m=+1178.304172113" watchObservedRunningTime="2025-10-08 06:53:55.185139069 +0000 UTC m=+1178.314831670" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.202022 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9c88f30e-9d32-47da-b268-9bd7321f197d" (UID: "9c88f30e-9d32-47da-b268-9bd7321f197d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.238290 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c88f30e-9d32-47da-b268-9bd7321f197d-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.238325 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.238337 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c88f30e-9d32-47da-b268-9bd7321f197d-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.238372 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.238382 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vxp8\" (UniqueName: \"kubernetes.io/projected/9c88f30e-9d32-47da-b268-9bd7321f197d-kube-api-access-8vxp8\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.248732 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c88f30e-9d32-47da-b268-9bd7321f197d" (UID: "9c88f30e-9d32-47da-b268-9bd7321f197d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.258321 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-config-data" (OuterVolumeSpecName: "config-data") pod "9c88f30e-9d32-47da-b268-9bd7321f197d" (UID: "9c88f30e-9d32-47da-b268-9bd7321f197d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.315577 4958 scope.go:117] "RemoveContainer" containerID="d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.336881 4958 scope.go:117] "RemoveContainer" containerID="4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.340305 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.340335 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c88f30e-9d32-47da-b268-9bd7321f197d-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.359960 4958 scope.go:117] "RemoveContainer" containerID="ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.387141 4958 scope.go:117] "RemoveContainer" containerID="a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312" Oct 08 06:53:55 crc kubenswrapper[4958]: E1008 06:53:55.387507 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312\": container 
with ID starting with a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312 not found: ID does not exist" containerID="a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.387549 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312"} err="failed to get container status \"a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312\": rpc error: code = NotFound desc = could not find container \"a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312\": container with ID starting with a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312 not found: ID does not exist" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.387574 4958 scope.go:117] "RemoveContainer" containerID="d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326" Oct 08 06:53:55 crc kubenswrapper[4958]: E1008 06:53:55.387878 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326\": container with ID starting with d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326 not found: ID does not exist" containerID="d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.387913 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326"} err="failed to get container status \"d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326\": rpc error: code = NotFound desc = could not find container \"d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326\": container with ID starting with d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326 not 
found: ID does not exist" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.387937 4958 scope.go:117] "RemoveContainer" containerID="4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642" Oct 08 06:53:55 crc kubenswrapper[4958]: E1008 06:53:55.388155 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642\": container with ID starting with 4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642 not found: ID does not exist" containerID="4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.388176 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642"} err="failed to get container status \"4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642\": rpc error: code = NotFound desc = could not find container \"4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642\": container with ID starting with 4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642 not found: ID does not exist" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.388190 4958 scope.go:117] "RemoveContainer" containerID="ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654" Oct 08 06:53:55 crc kubenswrapper[4958]: E1008 06:53:55.388401 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654\": container with ID starting with ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654 not found: ID does not exist" containerID="ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.388428 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654"} err="failed to get container status \"ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654\": rpc error: code = NotFound desc = could not find container \"ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654\": container with ID starting with ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654 not found: ID does not exist" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.388446 4958 scope.go:117] "RemoveContainer" containerID="a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.388653 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312"} err="failed to get container status \"a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312\": rpc error: code = NotFound desc = could not find container \"a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312\": container with ID starting with a90f10dcc61ca1b93bcbaa4d51760392c1215b085e156c58797c491ea1485312 not found: ID does not exist" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.388677 4958 scope.go:117] "RemoveContainer" containerID="d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.388886 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326"} err="failed to get container status \"d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326\": rpc error: code = NotFound desc = could not find container \"d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326\": container with ID starting with 
d951e15920cf8f45c6285d36b7afdd989da4922b81a160d5104b8268c5001326 not found: ID does not exist" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.388936 4958 scope.go:117] "RemoveContainer" containerID="4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.389138 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642"} err="failed to get container status \"4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642\": rpc error: code = NotFound desc = could not find container \"4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642\": container with ID starting with 4fe6085e922ff0205ff37735ed2f56194a2cc8d9f1b8f5ae4a60bde54bef2642 not found: ID does not exist" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.389155 4958 scope.go:117] "RemoveContainer" containerID="ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.389366 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654"} err="failed to get container status \"ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654\": rpc error: code = NotFound desc = could not find container \"ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654\": container with ID starting with ed8bf3b181aa2ab13320dc23b0d5456fff2b55b59e5fbadbccd881bca4a19654 not found: ID does not exist" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.471994 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.712513732 podStartE2EDuration="11.471977341s" podCreationTimestamp="2025-10-08 06:53:44 +0000 UTC" firstStartedPulling="2025-10-08 06:53:45.389173096 +0000 UTC 
m=+1168.518865697" lastFinishedPulling="2025-10-08 06:53:54.148636695 +0000 UTC m=+1177.278329306" observedRunningTime="2025-10-08 06:53:55.191200383 +0000 UTC m=+1178.320892994" watchObservedRunningTime="2025-10-08 06:53:55.471977341 +0000 UTC m=+1178.601669932" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.473427 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.479432 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.505773 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:53:55 crc kubenswrapper[4958]: E1008 06:53:55.506200 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="ceilometer-notification-agent" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.506218 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="ceilometer-notification-agent" Oct 08 06:53:55 crc kubenswrapper[4958]: E1008 06:53:55.506233 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="ceilometer-central-agent" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.506239 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="ceilometer-central-agent" Oct 08 06:53:55 crc kubenswrapper[4958]: E1008 06:53:55.506266 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="sg-core" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.506272 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="sg-core" Oct 08 06:53:55 crc kubenswrapper[4958]: E1008 06:53:55.506283 4958 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="proxy-httpd" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.506290 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="proxy-httpd" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.506437 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="sg-core" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.506454 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="proxy-httpd" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.506471 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="ceilometer-central-agent" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.506483 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" containerName="ceilometer-notification-agent" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.508176 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.511135 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.511318 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.518223 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.586654 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c88f30e-9d32-47da-b268-9bd7321f197d" path="/var/lib/kubelet/pods/9c88f30e-9d32-47da-b268-9bd7321f197d/volumes" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.644805 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q5nq\" (UniqueName: \"kubernetes.io/projected/9355824e-5e63-41fb-9542-6150ecb7ed63-kube-api-access-6q5nq\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.645554 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-scripts\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.645830 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.645872 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9355824e-5e63-41fb-9542-6150ecb7ed63-log-httpd\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.646083 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-config-data\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.646187 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9355824e-5e63-41fb-9542-6150ecb7ed63-run-httpd\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.646205 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.747845 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9355824e-5e63-41fb-9542-6150ecb7ed63-run-httpd\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.748130 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.748167 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q5nq\" (UniqueName: \"kubernetes.io/projected/9355824e-5e63-41fb-9542-6150ecb7ed63-kube-api-access-6q5nq\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.748229 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-scripts\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.748250 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9355824e-5e63-41fb-9542-6150ecb7ed63-run-httpd\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.748280 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.748303 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9355824e-5e63-41fb-9542-6150ecb7ed63-log-httpd\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 
06:53:55.748331 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-config-data\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.748763 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9355824e-5e63-41fb-9542-6150ecb7ed63-log-httpd\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.751373 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-scripts\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.751789 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.751800 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-config-data\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.752438 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " 
pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.763263 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q5nq\" (UniqueName: \"kubernetes.io/projected/9355824e-5e63-41fb-9542-6150ecb7ed63-kube-api-access-6q5nq\") pod \"ceilometer-0\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " pod="openstack/ceilometer-0" Oct 08 06:53:55 crc kubenswrapper[4958]: I1008 06:53:55.828800 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:53:56 crc kubenswrapper[4958]: I1008 06:53:56.281691 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:53:56 crc kubenswrapper[4958]: I1008 06:53:56.312252 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:53:56 crc kubenswrapper[4958]: I1008 06:53:56.312456 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f10a7792-91b1-4de9-83de-17620cd80909" containerName="glance-log" containerID="cri-o://eadade12ebdf006b64d12a511327e82f850ce005e67ef0d84378dd67275dfcf1" gracePeriod=30 Oct 08 06:53:56 crc kubenswrapper[4958]: I1008 06:53:56.312811 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f10a7792-91b1-4de9-83de-17620cd80909" containerName="glance-httpd" containerID="cri-o://a5aef7462a0702fb6dc978eeb4e6bf4bd9c3adc532801d1fc0376544330017df" gracePeriod=30 Oct 08 06:53:57 crc kubenswrapper[4958]: I1008 06:53:57.180852 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9355824e-5e63-41fb-9542-6150ecb7ed63","Type":"ContainerStarted","Data":"7ea6fb28560b424eaadc922393d1fd72dc22b508ca5e3f319387fcc1090afd1f"} Oct 08 06:53:57 crc kubenswrapper[4958]: I1008 06:53:57.184284 4958 generic.go:334] "Generic (PLEG): 
container finished" podID="f10a7792-91b1-4de9-83de-17620cd80909" containerID="eadade12ebdf006b64d12a511327e82f850ce005e67ef0d84378dd67275dfcf1" exitCode=143 Oct 08 06:53:57 crc kubenswrapper[4958]: I1008 06:53:57.184335 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f10a7792-91b1-4de9-83de-17620cd80909","Type":"ContainerDied","Data":"eadade12ebdf006b64d12a511327e82f850ce005e67ef0d84378dd67275dfcf1"} Oct 08 06:53:58 crc kubenswrapper[4958]: I1008 06:53:58.202394 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9355824e-5e63-41fb-9542-6150ecb7ed63","Type":"ContainerStarted","Data":"f7968b11d70f9ff5841835856bbd45b272a8f57f9adc844ffb3a4a8981882fc9"} Oct 08 06:53:58 crc kubenswrapper[4958]: I1008 06:53:58.202988 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9355824e-5e63-41fb-9542-6150ecb7ed63","Type":"ContainerStarted","Data":"e39d90003e94ec6b42a1d32f929b44c222e93d8a73df0009d9c967353a49ba10"} Oct 08 06:53:58 crc kubenswrapper[4958]: I1008 06:53:58.431413 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:53:59 crc kubenswrapper[4958]: I1008 06:53:59.221059 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9355824e-5e63-41fb-9542-6150ecb7ed63","Type":"ContainerStarted","Data":"3da56d55ae11baa1b05e346de961241314bf41b0a10b0062a3e56eed37bb7a77"} Oct 08 06:53:59 crc kubenswrapper[4958]: I1008 06:53:59.277725 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:53:59 crc kubenswrapper[4958]: I1008 06:53:59.277973 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3134ca44-2587-4cc5-8931-fbd4f9129411" containerName="glance-log" 
containerID="cri-o://fd3912b976dde4e29cd1eece2508c364d85978bb23a44acd39ed7f7f6248fbf0" gracePeriod=30 Oct 08 06:53:59 crc kubenswrapper[4958]: I1008 06:53:59.278127 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3134ca44-2587-4cc5-8931-fbd4f9129411" containerName="glance-httpd" containerID="cri-o://b008e02909330fc4187b510083262dbb386eb71d958a9a495fb0a824a82cba77" gracePeriod=30 Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.186479 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.244312 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9355824e-5e63-41fb-9542-6150ecb7ed63","Type":"ContainerStarted","Data":"8e680f6662bcc43fd54e46a7b085991a380600b4dbaccebf605222b27e75f655"} Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.244474 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerName="ceilometer-central-agent" containerID="cri-o://e39d90003e94ec6b42a1d32f929b44c222e93d8a73df0009d9c967353a49ba10" gracePeriod=30 Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.244514 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.244629 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerName="proxy-httpd" containerID="cri-o://8e680f6662bcc43fd54e46a7b085991a380600b4dbaccebf605222b27e75f655" gracePeriod=30 Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.244697 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerName="ceilometer-notification-agent" containerID="cri-o://f7968b11d70f9ff5841835856bbd45b272a8f57f9adc844ffb3a4a8981882fc9" gracePeriod=30 Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.244743 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerName="sg-core" containerID="cri-o://3da56d55ae11baa1b05e346de961241314bf41b0a10b0062a3e56eed37bb7a77" gracePeriod=30 Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.248312 4958 generic.go:334] "Generic (PLEG): container finished" podID="3134ca44-2587-4cc5-8931-fbd4f9129411" containerID="fd3912b976dde4e29cd1eece2508c364d85978bb23a44acd39ed7f7f6248fbf0" exitCode=143 Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.248391 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3134ca44-2587-4cc5-8931-fbd4f9129411","Type":"ContainerDied","Data":"fd3912b976dde4e29cd1eece2508c364d85978bb23a44acd39ed7f7f6248fbf0"} Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.259746 4958 generic.go:334] "Generic (PLEG): container finished" podID="f10a7792-91b1-4de9-83de-17620cd80909" containerID="a5aef7462a0702fb6dc978eeb4e6bf4bd9c3adc532801d1fc0376544330017df" exitCode=0 Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.259789 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f10a7792-91b1-4de9-83de-17620cd80909","Type":"ContainerDied","Data":"a5aef7462a0702fb6dc978eeb4e6bf4bd9c3adc532801d1fc0376544330017df"} Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.259805 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.259824 4958 scope.go:117] "RemoveContainer" containerID="a5aef7462a0702fb6dc978eeb4e6bf4bd9c3adc532801d1fc0376544330017df" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.259813 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f10a7792-91b1-4de9-83de-17620cd80909","Type":"ContainerDied","Data":"2781cdb8909020985403459f3bcb61d17d2be7c2704754fa891857633e804e50"} Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.275329 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.733821876 podStartE2EDuration="5.275312658s" podCreationTimestamp="2025-10-08 06:53:55 +0000 UTC" firstStartedPulling="2025-10-08 06:53:56.298177239 +0000 UTC m=+1179.427869840" lastFinishedPulling="2025-10-08 06:53:59.839668031 +0000 UTC m=+1182.969360622" observedRunningTime="2025-10-08 06:54:00.272251296 +0000 UTC m=+1183.401943897" watchObservedRunningTime="2025-10-08 06:54:00.275312658 +0000 UTC m=+1183.405005259" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.290011 4958 scope.go:117] "RemoveContainer" containerID="eadade12ebdf006b64d12a511327e82f850ce005e67ef0d84378dd67275dfcf1" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.318132 4958 scope.go:117] "RemoveContainer" containerID="a5aef7462a0702fb6dc978eeb4e6bf4bd9c3adc532801d1fc0376544330017df" Oct 08 06:54:00 crc kubenswrapper[4958]: E1008 06:54:00.318552 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5aef7462a0702fb6dc978eeb4e6bf4bd9c3adc532801d1fc0376544330017df\": container with ID starting with a5aef7462a0702fb6dc978eeb4e6bf4bd9c3adc532801d1fc0376544330017df not found: ID does not exist" containerID="a5aef7462a0702fb6dc978eeb4e6bf4bd9c3adc532801d1fc0376544330017df" Oct 
08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.318583 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5aef7462a0702fb6dc978eeb4e6bf4bd9c3adc532801d1fc0376544330017df"} err="failed to get container status \"a5aef7462a0702fb6dc978eeb4e6bf4bd9c3adc532801d1fc0376544330017df\": rpc error: code = NotFound desc = could not find container \"a5aef7462a0702fb6dc978eeb4e6bf4bd9c3adc532801d1fc0376544330017df\": container with ID starting with a5aef7462a0702fb6dc978eeb4e6bf4bd9c3adc532801d1fc0376544330017df not found: ID does not exist" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.318608 4958 scope.go:117] "RemoveContainer" containerID="eadade12ebdf006b64d12a511327e82f850ce005e67ef0d84378dd67275dfcf1" Oct 08 06:54:00 crc kubenswrapper[4958]: E1008 06:54:00.319091 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eadade12ebdf006b64d12a511327e82f850ce005e67ef0d84378dd67275dfcf1\": container with ID starting with eadade12ebdf006b64d12a511327e82f850ce005e67ef0d84378dd67275dfcf1 not found: ID does not exist" containerID="eadade12ebdf006b64d12a511327e82f850ce005e67ef0d84378dd67275dfcf1" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.319131 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eadade12ebdf006b64d12a511327e82f850ce005e67ef0d84378dd67275dfcf1"} err="failed to get container status \"eadade12ebdf006b64d12a511327e82f850ce005e67ef0d84378dd67275dfcf1\": rpc error: code = NotFound desc = could not find container \"eadade12ebdf006b64d12a511327e82f850ce005e67ef0d84378dd67275dfcf1\": container with ID starting with eadade12ebdf006b64d12a511327e82f850ce005e67ef0d84378dd67275dfcf1 not found: ID does not exist" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.337022 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-public-tls-certs\") pod \"f10a7792-91b1-4de9-83de-17620cd80909\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.337100 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwt6v\" (UniqueName: \"kubernetes.io/projected/f10a7792-91b1-4de9-83de-17620cd80909-kube-api-access-rwt6v\") pod \"f10a7792-91b1-4de9-83de-17620cd80909\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.337128 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f10a7792-91b1-4de9-83de-17620cd80909-httpd-run\") pod \"f10a7792-91b1-4de9-83de-17620cd80909\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.337184 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-combined-ca-bundle\") pod \"f10a7792-91b1-4de9-83de-17620cd80909\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.337221 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10a7792-91b1-4de9-83de-17620cd80909-logs\") pod \"f10a7792-91b1-4de9-83de-17620cd80909\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.337272 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-config-data\") pod \"f10a7792-91b1-4de9-83de-17620cd80909\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 
06:54:00.337294 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-scripts\") pod \"f10a7792-91b1-4de9-83de-17620cd80909\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.337343 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"f10a7792-91b1-4de9-83de-17620cd80909\" (UID: \"f10a7792-91b1-4de9-83de-17620cd80909\") " Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.337673 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10a7792-91b1-4de9-83de-17620cd80909-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f10a7792-91b1-4de9-83de-17620cd80909" (UID: "f10a7792-91b1-4de9-83de-17620cd80909"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.337741 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10a7792-91b1-4de9-83de-17620cd80909-logs" (OuterVolumeSpecName: "logs") pod "f10a7792-91b1-4de9-83de-17620cd80909" (UID: "f10a7792-91b1-4de9-83de-17620cd80909"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.343038 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-scripts" (OuterVolumeSpecName: "scripts") pod "f10a7792-91b1-4de9-83de-17620cd80909" (UID: "f10a7792-91b1-4de9-83de-17620cd80909"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.343054 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "f10a7792-91b1-4de9-83de-17620cd80909" (UID: "f10a7792-91b1-4de9-83de-17620cd80909"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.343136 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f10a7792-91b1-4de9-83de-17620cd80909-kube-api-access-rwt6v" (OuterVolumeSpecName: "kube-api-access-rwt6v") pod "f10a7792-91b1-4de9-83de-17620cd80909" (UID: "f10a7792-91b1-4de9-83de-17620cd80909"). InnerVolumeSpecName "kube-api-access-rwt6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.363920 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f10a7792-91b1-4de9-83de-17620cd80909" (UID: "f10a7792-91b1-4de9-83de-17620cd80909"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.383443 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f10a7792-91b1-4de9-83de-17620cd80909" (UID: "f10a7792-91b1-4de9-83de-17620cd80909"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.390031 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-config-data" (OuterVolumeSpecName: "config-data") pod "f10a7792-91b1-4de9-83de-17620cd80909" (UID: "f10a7792-91b1-4de9-83de-17620cd80909"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.438918 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.438988 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwt6v\" (UniqueName: \"kubernetes.io/projected/f10a7792-91b1-4de9-83de-17620cd80909-kube-api-access-rwt6v\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.439002 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f10a7792-91b1-4de9-83de-17620cd80909-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.439010 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.439019 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f10a7792-91b1-4de9-83de-17620cd80909-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.439029 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.439036 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f10a7792-91b1-4de9-83de-17620cd80909-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.439069 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.457192 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.541139 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.606761 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.631165 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.646982 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:54:00 crc kubenswrapper[4958]: E1008 06:54:00.647321 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10a7792-91b1-4de9-83de-17620cd80909" containerName="glance-httpd" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.647334 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10a7792-91b1-4de9-83de-17620cd80909" containerName="glance-httpd" Oct 08 06:54:00 crc 
kubenswrapper[4958]: E1008 06:54:00.647376 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10a7792-91b1-4de9-83de-17620cd80909" containerName="glance-log" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.647382 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10a7792-91b1-4de9-83de-17620cd80909" containerName="glance-log" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.647563 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10a7792-91b1-4de9-83de-17620cd80909" containerName="glance-log" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.647578 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10a7792-91b1-4de9-83de-17620cd80909" containerName="glance-httpd" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.648531 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.654314 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.655014 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.657328 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.848089 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-scripts\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.848177 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.848218 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.848252 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll2cl\" (UniqueName: \"kubernetes.io/projected/07431828-0ed3-42a8-9c9c-fdcdb98c854b-kube-api-access-ll2cl\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.848325 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07431828-0ed3-42a8-9c9c-fdcdb98c854b-logs\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.848359 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-config-data\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.848400 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07431828-0ed3-42a8-9c9c-fdcdb98c854b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.848428 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.949377 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.949656 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-scripts\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.949701 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.949731 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.949755 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll2cl\" (UniqueName: \"kubernetes.io/projected/07431828-0ed3-42a8-9c9c-fdcdb98c854b-kube-api-access-ll2cl\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.949810 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07431828-0ed3-42a8-9c9c-fdcdb98c854b-logs\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.949834 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-config-data\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.949877 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07431828-0ed3-42a8-9c9c-fdcdb98c854b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.950126 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.950313 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07431828-0ed3-42a8-9c9c-fdcdb98c854b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.950521 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07431828-0ed3-42a8-9c9c-fdcdb98c854b-logs\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.964625 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-scripts\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.964905 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.965447 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-config-data\") pod \"glance-default-external-api-0\" 
(UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.968169 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.968644 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll2cl\" (UniqueName: \"kubernetes.io/projected/07431828-0ed3-42a8-9c9c-fdcdb98c854b-kube-api-access-ll2cl\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:00 crc kubenswrapper[4958]: I1008 06:54:00.986288 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " pod="openstack/glance-default-external-api-0" Oct 08 06:54:01 crc kubenswrapper[4958]: I1008 06:54:01.274859 4958 generic.go:334] "Generic (PLEG): container finished" podID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerID="8e680f6662bcc43fd54e46a7b085991a380600b4dbaccebf605222b27e75f655" exitCode=0 Oct 08 06:54:01 crc kubenswrapper[4958]: I1008 06:54:01.274902 4958 generic.go:334] "Generic (PLEG): container finished" podID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerID="3da56d55ae11baa1b05e346de961241314bf41b0a10b0062a3e56eed37bb7a77" exitCode=2 Oct 08 06:54:01 crc kubenswrapper[4958]: I1008 06:54:01.274913 4958 generic.go:334] "Generic (PLEG): container finished" podID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerID="f7968b11d70f9ff5841835856bbd45b272a8f57f9adc844ffb3a4a8981882fc9" 
exitCode=0 Oct 08 06:54:01 crc kubenswrapper[4958]: I1008 06:54:01.274936 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9355824e-5e63-41fb-9542-6150ecb7ed63","Type":"ContainerDied","Data":"8e680f6662bcc43fd54e46a7b085991a380600b4dbaccebf605222b27e75f655"} Oct 08 06:54:01 crc kubenswrapper[4958]: I1008 06:54:01.274984 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9355824e-5e63-41fb-9542-6150ecb7ed63","Type":"ContainerDied","Data":"3da56d55ae11baa1b05e346de961241314bf41b0a10b0062a3e56eed37bb7a77"} Oct 08 06:54:01 crc kubenswrapper[4958]: I1008 06:54:01.274997 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9355824e-5e63-41fb-9542-6150ecb7ed63","Type":"ContainerDied","Data":"f7968b11d70f9ff5841835856bbd45b272a8f57f9adc844ffb3a4a8981882fc9"} Oct 08 06:54:01 crc kubenswrapper[4958]: I1008 06:54:01.280623 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 06:54:01 crc kubenswrapper[4958]: I1008 06:54:01.590838 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f10a7792-91b1-4de9-83de-17620cd80909" path="/var/lib/kubelet/pods/f10a7792-91b1-4de9-83de-17620cd80909/volumes" Oct 08 06:54:01 crc kubenswrapper[4958]: I1008 06:54:01.665593 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:54:01 crc kubenswrapper[4958]: W1008 06:54:01.676123 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07431828_0ed3_42a8_9c9c_fdcdb98c854b.slice/crio-8f79829d96326f7bee84b7b65553b5f2f299f87cce7df41f27f19b7a751a63c3 WatchSource:0}: Error finding container 8f79829d96326f7bee84b7b65553b5f2f299f87cce7df41f27f19b7a751a63c3: Status 404 returned error can't find the container with id 8f79829d96326f7bee84b7b65553b5f2f299f87cce7df41f27f19b7a751a63c3 Oct 08 06:54:02 crc kubenswrapper[4958]: I1008 06:54:02.293436 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07431828-0ed3-42a8-9c9c-fdcdb98c854b","Type":"ContainerStarted","Data":"8880871fe8295b852d41b427ce6abdf7eaf0a061647089a2c0dbf322aafca26b"} Oct 08 06:54:02 crc kubenswrapper[4958]: I1008 06:54:02.293747 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07431828-0ed3-42a8-9c9c-fdcdb98c854b","Type":"ContainerStarted","Data":"8f79829d96326f7bee84b7b65553b5f2f299f87cce7df41f27f19b7a751a63c3"} Oct 08 06:54:02 crc kubenswrapper[4958]: I1008 06:54:02.936430 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.085415 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3134ca44-2587-4cc5-8931-fbd4f9129411-httpd-run\") pod \"3134ca44-2587-4cc5-8931-fbd4f9129411\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.085494 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-internal-tls-certs\") pod \"3134ca44-2587-4cc5-8931-fbd4f9129411\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.085515 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-config-data\") pod \"3134ca44-2587-4cc5-8931-fbd4f9129411\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.085597 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3134ca44-2587-4cc5-8931-fbd4f9129411-logs\") pod \"3134ca44-2587-4cc5-8931-fbd4f9129411\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.085682 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-combined-ca-bundle\") pod \"3134ca44-2587-4cc5-8931-fbd4f9129411\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.085732 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"3134ca44-2587-4cc5-8931-fbd4f9129411\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.085758 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-scripts\") pod \"3134ca44-2587-4cc5-8931-fbd4f9129411\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.085793 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xznwb\" (UniqueName: \"kubernetes.io/projected/3134ca44-2587-4cc5-8931-fbd4f9129411-kube-api-access-xznwb\") pod \"3134ca44-2587-4cc5-8931-fbd4f9129411\" (UID: \"3134ca44-2587-4cc5-8931-fbd4f9129411\") " Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.085967 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3134ca44-2587-4cc5-8931-fbd4f9129411-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3134ca44-2587-4cc5-8931-fbd4f9129411" (UID: "3134ca44-2587-4cc5-8931-fbd4f9129411"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.086487 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3134ca44-2587-4cc5-8931-fbd4f9129411-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.087692 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3134ca44-2587-4cc5-8931-fbd4f9129411-logs" (OuterVolumeSpecName: "logs") pod "3134ca44-2587-4cc5-8931-fbd4f9129411" (UID: "3134ca44-2587-4cc5-8931-fbd4f9129411"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.092415 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3134ca44-2587-4cc5-8931-fbd4f9129411-kube-api-access-xznwb" (OuterVolumeSpecName: "kube-api-access-xznwb") pod "3134ca44-2587-4cc5-8931-fbd4f9129411" (UID: "3134ca44-2587-4cc5-8931-fbd4f9129411"). InnerVolumeSpecName "kube-api-access-xznwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.092459 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-scripts" (OuterVolumeSpecName: "scripts") pod "3134ca44-2587-4cc5-8931-fbd4f9129411" (UID: "3134ca44-2587-4cc5-8931-fbd4f9129411"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.094018 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "3134ca44-2587-4cc5-8931-fbd4f9129411" (UID: "3134ca44-2587-4cc5-8931-fbd4f9129411"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.122124 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3134ca44-2587-4cc5-8931-fbd4f9129411" (UID: "3134ca44-2587-4cc5-8931-fbd4f9129411"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.135119 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3134ca44-2587-4cc5-8931-fbd4f9129411" (UID: "3134ca44-2587-4cc5-8931-fbd4f9129411"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.158973 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-config-data" (OuterVolumeSpecName: "config-data") pod "3134ca44-2587-4cc5-8931-fbd4f9129411" (UID: "3134ca44-2587-4cc5-8931-fbd4f9129411"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.188357 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.188600 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.188672 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.188726 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xznwb\" (UniqueName: \"kubernetes.io/projected/3134ca44-2587-4cc5-8931-fbd4f9129411-kube-api-access-xznwb\") on node \"crc\" DevicePath \"\"" Oct 08 
06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.188780 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.188840 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3134ca44-2587-4cc5-8931-fbd4f9129411-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.188900 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3134ca44-2587-4cc5-8931-fbd4f9129411-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.207561 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.290296 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.304031 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07431828-0ed3-42a8-9c9c-fdcdb98c854b","Type":"ContainerStarted","Data":"5a4d814284dc7005c70da55ad6372b1b9b0faa25b4caef46944a4c4b825edf42"} Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.307178 4958 generic.go:334] "Generic (PLEG): container finished" podID="3134ca44-2587-4cc5-8931-fbd4f9129411" containerID="b008e02909330fc4187b510083262dbb386eb71d958a9a495fb0a824a82cba77" exitCode=0 Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.307216 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"3134ca44-2587-4cc5-8931-fbd4f9129411","Type":"ContainerDied","Data":"b008e02909330fc4187b510083262dbb386eb71d958a9a495fb0a824a82cba77"} Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.307239 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3134ca44-2587-4cc5-8931-fbd4f9129411","Type":"ContainerDied","Data":"492b3765871fd7f6fb4a76c63146710aa5c533e8ba1c90f6d73eda15a73670f3"} Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.307261 4958 scope.go:117] "RemoveContainer" containerID="b008e02909330fc4187b510083262dbb386eb71d958a9a495fb0a824a82cba77" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.307399 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.333129 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.333110755 podStartE2EDuration="3.333110755s" podCreationTimestamp="2025-10-08 06:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:54:03.332311064 +0000 UTC m=+1186.462003665" watchObservedRunningTime="2025-10-08 06:54:03.333110755 +0000 UTC m=+1186.462803356" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.339679 4958 scope.go:117] "RemoveContainer" containerID="fd3912b976dde4e29cd1eece2508c364d85978bb23a44acd39ed7f7f6248fbf0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.353263 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.360926 4958 scope.go:117] "RemoveContainer" containerID="b008e02909330fc4187b510083262dbb386eb71d958a9a495fb0a824a82cba77" Oct 08 06:54:03 crc kubenswrapper[4958]: E1008 06:54:03.362510 4958 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b008e02909330fc4187b510083262dbb386eb71d958a9a495fb0a824a82cba77\": container with ID starting with b008e02909330fc4187b510083262dbb386eb71d958a9a495fb0a824a82cba77 not found: ID does not exist" containerID="b008e02909330fc4187b510083262dbb386eb71d958a9a495fb0a824a82cba77" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.362551 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b008e02909330fc4187b510083262dbb386eb71d958a9a495fb0a824a82cba77"} err="failed to get container status \"b008e02909330fc4187b510083262dbb386eb71d958a9a495fb0a824a82cba77\": rpc error: code = NotFound desc = could not find container \"b008e02909330fc4187b510083262dbb386eb71d958a9a495fb0a824a82cba77\": container with ID starting with b008e02909330fc4187b510083262dbb386eb71d958a9a495fb0a824a82cba77 not found: ID does not exist" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.362579 4958 scope.go:117] "RemoveContainer" containerID="fd3912b976dde4e29cd1eece2508c364d85978bb23a44acd39ed7f7f6248fbf0" Oct 08 06:54:03 crc kubenswrapper[4958]: E1008 06:54:03.362827 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd3912b976dde4e29cd1eece2508c364d85978bb23a44acd39ed7f7f6248fbf0\": container with ID starting with fd3912b976dde4e29cd1eece2508c364d85978bb23a44acd39ed7f7f6248fbf0 not found: ID does not exist" containerID="fd3912b976dde4e29cd1eece2508c364d85978bb23a44acd39ed7f7f6248fbf0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.362855 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd3912b976dde4e29cd1eece2508c364d85978bb23a44acd39ed7f7f6248fbf0"} err="failed to get container status \"fd3912b976dde4e29cd1eece2508c364d85978bb23a44acd39ed7f7f6248fbf0\": rpc error: code = NotFound 
desc = could not find container \"fd3912b976dde4e29cd1eece2508c364d85978bb23a44acd39ed7f7f6248fbf0\": container with ID starting with fd3912b976dde4e29cd1eece2508c364d85978bb23a44acd39ed7f7f6248fbf0 not found: ID does not exist" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.371035 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.379551 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:54:03 crc kubenswrapper[4958]: E1008 06:54:03.380157 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3134ca44-2587-4cc5-8931-fbd4f9129411" containerName="glance-httpd" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.380180 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3134ca44-2587-4cc5-8931-fbd4f9129411" containerName="glance-httpd" Oct 08 06:54:03 crc kubenswrapper[4958]: E1008 06:54:03.380211 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3134ca44-2587-4cc5-8931-fbd4f9129411" containerName="glance-log" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.380223 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3134ca44-2587-4cc5-8931-fbd4f9129411" containerName="glance-log" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.380538 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3134ca44-2587-4cc5-8931-fbd4f9129411" containerName="glance-httpd" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.380597 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3134ca44-2587-4cc5-8931-fbd4f9129411" containerName="glance-log" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.382156 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.384876 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.385070 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.388257 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.494071 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-scripts\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.494295 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.494467 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-logs\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.494554 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.494618 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqct4\" (UniqueName: \"kubernetes.io/projected/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-kube-api-access-zqct4\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.494732 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.494812 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-config-data\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.494888 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.600189 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-config-data\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.600249 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.600307 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-scripts\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.600449 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.601138 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-logs\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.601176 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.601148 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.602141 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqct4\" (UniqueName: \"kubernetes.io/projected/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-kube-api-access-zqct4\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.602189 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.602211 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.602404 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.606913 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.607344 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.607607 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3134ca44-2587-4cc5-8931-fbd4f9129411" path="/var/lib/kubelet/pods/3134ca44-2587-4cc5-8931-fbd4f9129411/volumes" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.609067 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-config-data\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.620483 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-scripts\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.622519 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zqct4\" (UniqueName: \"kubernetes.io/projected/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-kube-api-access-zqct4\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.652505 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.709665 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.722017 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.726295 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:54:03 crc kubenswrapper[4958]: I1008 06:54:03.834633 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.016623 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q5nq\" (UniqueName: \"kubernetes.io/projected/9355824e-5e63-41fb-9542-6150ecb7ed63-kube-api-access-6q5nq\") pod \"9355824e-5e63-41fb-9542-6150ecb7ed63\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.016915 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-scripts\") pod \"9355824e-5e63-41fb-9542-6150ecb7ed63\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.017002 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9355824e-5e63-41fb-9542-6150ecb7ed63-run-httpd\") pod \"9355824e-5e63-41fb-9542-6150ecb7ed63\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.017084 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-sg-core-conf-yaml\") pod \"9355824e-5e63-41fb-9542-6150ecb7ed63\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.017107 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-config-data\") pod \"9355824e-5e63-41fb-9542-6150ecb7ed63\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.017169 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-combined-ca-bundle\") pod \"9355824e-5e63-41fb-9542-6150ecb7ed63\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.017227 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9355824e-5e63-41fb-9542-6150ecb7ed63-log-httpd\") pod \"9355824e-5e63-41fb-9542-6150ecb7ed63\" (UID: \"9355824e-5e63-41fb-9542-6150ecb7ed63\") " Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.017935 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9355824e-5e63-41fb-9542-6150ecb7ed63-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9355824e-5e63-41fb-9542-6150ecb7ed63" (UID: "9355824e-5e63-41fb-9542-6150ecb7ed63"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.019940 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9355824e-5e63-41fb-9542-6150ecb7ed63-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9355824e-5e63-41fb-9542-6150ecb7ed63" (UID: "9355824e-5e63-41fb-9542-6150ecb7ed63"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.039653 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-scripts" (OuterVolumeSpecName: "scripts") pod "9355824e-5e63-41fb-9542-6150ecb7ed63" (UID: "9355824e-5e63-41fb-9542-6150ecb7ed63"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.040695 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9355824e-5e63-41fb-9542-6150ecb7ed63-kube-api-access-6q5nq" (OuterVolumeSpecName: "kube-api-access-6q5nq") pod "9355824e-5e63-41fb-9542-6150ecb7ed63" (UID: "9355824e-5e63-41fb-9542-6150ecb7ed63"). InnerVolumeSpecName "kube-api-access-6q5nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.060132 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9355824e-5e63-41fb-9542-6150ecb7ed63" (UID: "9355824e-5e63-41fb-9542-6150ecb7ed63"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.114380 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-config-data" (OuterVolumeSpecName: "config-data") pod "9355824e-5e63-41fb-9542-6150ecb7ed63" (UID: "9355824e-5e63-41fb-9542-6150ecb7ed63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.115739 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9355824e-5e63-41fb-9542-6150ecb7ed63" (UID: "9355824e-5e63-41fb-9542-6150ecb7ed63"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.118803 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9355824e-5e63-41fb-9542-6150ecb7ed63-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.118832 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.118844 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.118852 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.118860 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9355824e-5e63-41fb-9542-6150ecb7ed63-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.118869 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q5nq\" (UniqueName: \"kubernetes.io/projected/9355824e-5e63-41fb-9542-6150ecb7ed63-kube-api-access-6q5nq\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.118876 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9355824e-5e63-41fb-9542-6150ecb7ed63-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.320745 4958 generic.go:334] "Generic 
(PLEG): container finished" podID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerID="e39d90003e94ec6b42a1d32f929b44c222e93d8a73df0009d9c967353a49ba10" exitCode=0 Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.320860 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9355824e-5e63-41fb-9542-6150ecb7ed63","Type":"ContainerDied","Data":"e39d90003e94ec6b42a1d32f929b44c222e93d8a73df0009d9c967353a49ba10"} Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.320899 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9355824e-5e63-41fb-9542-6150ecb7ed63","Type":"ContainerDied","Data":"7ea6fb28560b424eaadc922393d1fd72dc22b508ca5e3f319387fcc1090afd1f"} Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.320929 4958 scope.go:117] "RemoveContainer" containerID="8e680f6662bcc43fd54e46a7b085991a380600b4dbaccebf605222b27e75f655" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.321158 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.365166 4958 scope.go:117] "RemoveContainer" containerID="3da56d55ae11baa1b05e346de961241314bf41b0a10b0062a3e56eed37bb7a77" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.372797 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.384935 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.389107 4958 scope.go:117] "RemoveContainer" containerID="f7968b11d70f9ff5841835856bbd45b272a8f57f9adc844ffb3a4a8981882fc9" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.408081 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.420086 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:54:04 crc kubenswrapper[4958]: E1008 06:54:04.420559 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerName="ceilometer-central-agent" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.420582 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerName="ceilometer-central-agent" Oct 08 06:54:04 crc kubenswrapper[4958]: E1008 06:54:04.420603 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerName="proxy-httpd" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.420614 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerName="proxy-httpd" Oct 08 06:54:04 crc kubenswrapper[4958]: E1008 06:54:04.420627 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" 
containerName="ceilometer-notification-agent" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.420635 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerName="ceilometer-notification-agent" Oct 08 06:54:04 crc kubenswrapper[4958]: E1008 06:54:04.420673 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerName="sg-core" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.420684 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerName="sg-core" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.420915 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerName="sg-core" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.420961 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerName="ceilometer-notification-agent" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.420978 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerName="proxy-httpd" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.420993 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" containerName="ceilometer-central-agent" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.422997 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.433142 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.433351 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.434923 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.494537 4958 scope.go:117] "RemoveContainer" containerID="e39d90003e94ec6b42a1d32f929b44c222e93d8a73df0009d9c967353a49ba10" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.519635 4958 scope.go:117] "RemoveContainer" containerID="8e680f6662bcc43fd54e46a7b085991a380600b4dbaccebf605222b27e75f655" Oct 08 06:54:04 crc kubenswrapper[4958]: E1008 06:54:04.520029 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e680f6662bcc43fd54e46a7b085991a380600b4dbaccebf605222b27e75f655\": container with ID starting with 8e680f6662bcc43fd54e46a7b085991a380600b4dbaccebf605222b27e75f655 not found: ID does not exist" containerID="8e680f6662bcc43fd54e46a7b085991a380600b4dbaccebf605222b27e75f655" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.520060 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e680f6662bcc43fd54e46a7b085991a380600b4dbaccebf605222b27e75f655"} err="failed to get container status \"8e680f6662bcc43fd54e46a7b085991a380600b4dbaccebf605222b27e75f655\": rpc error: code = NotFound desc = could not find container \"8e680f6662bcc43fd54e46a7b085991a380600b4dbaccebf605222b27e75f655\": container with ID starting with 8e680f6662bcc43fd54e46a7b085991a380600b4dbaccebf605222b27e75f655 not found: ID does not exist" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 
06:54:04.520079 4958 scope.go:117] "RemoveContainer" containerID="3da56d55ae11baa1b05e346de961241314bf41b0a10b0062a3e56eed37bb7a77" Oct 08 06:54:04 crc kubenswrapper[4958]: E1008 06:54:04.520404 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3da56d55ae11baa1b05e346de961241314bf41b0a10b0062a3e56eed37bb7a77\": container with ID starting with 3da56d55ae11baa1b05e346de961241314bf41b0a10b0062a3e56eed37bb7a77 not found: ID does not exist" containerID="3da56d55ae11baa1b05e346de961241314bf41b0a10b0062a3e56eed37bb7a77" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.520454 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da56d55ae11baa1b05e346de961241314bf41b0a10b0062a3e56eed37bb7a77"} err="failed to get container status \"3da56d55ae11baa1b05e346de961241314bf41b0a10b0062a3e56eed37bb7a77\": rpc error: code = NotFound desc = could not find container \"3da56d55ae11baa1b05e346de961241314bf41b0a10b0062a3e56eed37bb7a77\": container with ID starting with 3da56d55ae11baa1b05e346de961241314bf41b0a10b0062a3e56eed37bb7a77 not found: ID does not exist" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.520467 4958 scope.go:117] "RemoveContainer" containerID="f7968b11d70f9ff5841835856bbd45b272a8f57f9adc844ffb3a4a8981882fc9" Oct 08 06:54:04 crc kubenswrapper[4958]: E1008 06:54:04.520672 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7968b11d70f9ff5841835856bbd45b272a8f57f9adc844ffb3a4a8981882fc9\": container with ID starting with f7968b11d70f9ff5841835856bbd45b272a8f57f9adc844ffb3a4a8981882fc9 not found: ID does not exist" containerID="f7968b11d70f9ff5841835856bbd45b272a8f57f9adc844ffb3a4a8981882fc9" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.520693 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f7968b11d70f9ff5841835856bbd45b272a8f57f9adc844ffb3a4a8981882fc9"} err="failed to get container status \"f7968b11d70f9ff5841835856bbd45b272a8f57f9adc844ffb3a4a8981882fc9\": rpc error: code = NotFound desc = could not find container \"f7968b11d70f9ff5841835856bbd45b272a8f57f9adc844ffb3a4a8981882fc9\": container with ID starting with f7968b11d70f9ff5841835856bbd45b272a8f57f9adc844ffb3a4a8981882fc9 not found: ID does not exist" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.520705 4958 scope.go:117] "RemoveContainer" containerID="e39d90003e94ec6b42a1d32f929b44c222e93d8a73df0009d9c967353a49ba10" Oct 08 06:54:04 crc kubenswrapper[4958]: E1008 06:54:04.520909 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e39d90003e94ec6b42a1d32f929b44c222e93d8a73df0009d9c967353a49ba10\": container with ID starting with e39d90003e94ec6b42a1d32f929b44c222e93d8a73df0009d9c967353a49ba10 not found: ID does not exist" containerID="e39d90003e94ec6b42a1d32f929b44c222e93d8a73df0009d9c967353a49ba10" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.520931 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e39d90003e94ec6b42a1d32f929b44c222e93d8a73df0009d9c967353a49ba10"} err="failed to get container status \"e39d90003e94ec6b42a1d32f929b44c222e93d8a73df0009d9c967353a49ba10\": rpc error: code = NotFound desc = could not find container \"e39d90003e94ec6b42a1d32f929b44c222e93d8a73df0009d9c967353a49ba10\": container with ID starting with e39d90003e94ec6b42a1d32f929b44c222e93d8a73df0009d9c967353a49ba10 not found: ID does not exist" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.529261 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-log-httpd\") pod \"ceilometer-0\" (UID: 
\"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.529308 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-run-httpd\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.529349 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-scripts\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.529367 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbkwk\" (UniqueName: \"kubernetes.io/projected/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-kube-api-access-dbkwk\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.529385 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.529427 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-config-data\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.529469 
4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.630936 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.631249 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-log-httpd\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.631271 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-run-httpd\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.631306 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-scripts\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.631327 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbkwk\" (UniqueName: \"kubernetes.io/projected/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-kube-api-access-dbkwk\") pod 
\"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.631348 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.631387 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-config-data\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.632462 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-run-httpd\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.632497 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-log-httpd\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.635211 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.635226 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-scripts\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.636424 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-config-data\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.636744 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.648346 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbkwk\" (UniqueName: \"kubernetes.io/projected/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-kube-api-access-dbkwk\") pod \"ceilometer-0\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " pod="openstack/ceilometer-0" Oct 08 06:54:04 crc kubenswrapper[4958]: I1008 06:54:04.779590 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:54:05 crc kubenswrapper[4958]: I1008 06:54:05.253542 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:54:05 crc kubenswrapper[4958]: W1008 06:54:05.262910 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cc94ec2_44b7_461b_bcea_691ea7b00ee5.slice/crio-da3bdea763e5c61d2c0ba4c15361840608f8e592b0f94f15e8ea4490b1ea32d9 WatchSource:0}: Error finding container da3bdea763e5c61d2c0ba4c15361840608f8e592b0f94f15e8ea4490b1ea32d9: Status 404 returned error can't find the container with id da3bdea763e5c61d2c0ba4c15361840608f8e592b0f94f15e8ea4490b1ea32d9 Oct 08 06:54:05 crc kubenswrapper[4958]: I1008 06:54:05.338996 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"234cbd06-f8de-4d4f-a510-8dc7e5d9db93","Type":"ContainerStarted","Data":"d76598e60d3a3646bdbc321f562f55f8c8d51c2d903f6c39ad73774883cc7ec3"} Oct 08 06:54:05 crc kubenswrapper[4958]: I1008 06:54:05.339042 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"234cbd06-f8de-4d4f-a510-8dc7e5d9db93","Type":"ContainerStarted","Data":"43338728ca294bc10d507b106937cc8f96e3ca8a8078ebaf05c25dab25560170"} Oct 08 06:54:05 crc kubenswrapper[4958]: I1008 06:54:05.343542 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cc94ec2-44b7-461b-bcea-691ea7b00ee5","Type":"ContainerStarted","Data":"da3bdea763e5c61d2c0ba4c15361840608f8e592b0f94f15e8ea4490b1ea32d9"} Oct 08 06:54:05 crc kubenswrapper[4958]: I1008 06:54:05.597906 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9355824e-5e63-41fb-9542-6150ecb7ed63" path="/var/lib/kubelet/pods/9355824e-5e63-41fb-9542-6150ecb7ed63/volumes" Oct 08 06:54:06 crc kubenswrapper[4958]: I1008 06:54:06.357420 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cc94ec2-44b7-461b-bcea-691ea7b00ee5","Type":"ContainerStarted","Data":"f7d3a2e24e36cb843175ea076666289a75ce06294357e1d7f9f67d7974e2b8a2"} Oct 08 06:54:06 crc kubenswrapper[4958]: I1008 06:54:06.360895 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"234cbd06-f8de-4d4f-a510-8dc7e5d9db93","Type":"ContainerStarted","Data":"39c3f98bf3ab5af00e017cc7b7b4cbeb1fef7cb815739f4da85c995009b67708"} Oct 08 06:54:06 crc kubenswrapper[4958]: I1008 06:54:06.394567 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.39454513 podStartE2EDuration="3.39454513s" podCreationTimestamp="2025-10-08 06:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:54:06.389608377 +0000 UTC m=+1189.519301038" watchObservedRunningTime="2025-10-08 06:54:06.39454513 +0000 UTC m=+1189.524237741" Oct 08 06:54:07 crc kubenswrapper[4958]: I1008 06:54:07.377369 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cc94ec2-44b7-461b-bcea-691ea7b00ee5","Type":"ContainerStarted","Data":"3ad1bac1f4b3c727bc4312c1f9a444c3a8f983b58148ca3fc6e195b99513859a"} Oct 08 06:54:08 crc kubenswrapper[4958]: I1008 06:54:08.387842 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cc94ec2-44b7-461b-bcea-691ea7b00ee5","Type":"ContainerStarted","Data":"9fb3d91b7b2f5366677ef6e3ddcbd73638676fa5edbe579060028418b61d5a79"} Oct 08 06:54:08 crc kubenswrapper[4958]: I1008 06:54:08.881321 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-g4wrk"] Oct 08 06:54:08 crc kubenswrapper[4958]: I1008 06:54:08.882326 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-g4wrk" Oct 08 06:54:08 crc kubenswrapper[4958]: I1008 06:54:08.892612 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-g4wrk"] Oct 08 06:54:08 crc kubenswrapper[4958]: I1008 06:54:08.968595 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-z8kq2"] Oct 08 06:54:08 crc kubenswrapper[4958]: I1008 06:54:08.970827 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-z8kq2" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.013566 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-z8kq2"] Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.034697 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6jrg\" (UniqueName: \"kubernetes.io/projected/aac68cce-b443-4f57-89f7-6c7fee4fcd32-kube-api-access-d6jrg\") pod \"nova-api-db-create-g4wrk\" (UID: \"aac68cce-b443-4f57-89f7-6c7fee4fcd32\") " pod="openstack/nova-api-db-create-g4wrk" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.082670 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mtqh6"] Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.083938 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mtqh6" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.088604 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mtqh6"] Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.137616 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6jrg\" (UniqueName: \"kubernetes.io/projected/aac68cce-b443-4f57-89f7-6c7fee4fcd32-kube-api-access-d6jrg\") pod \"nova-api-db-create-g4wrk\" (UID: \"aac68cce-b443-4f57-89f7-6c7fee4fcd32\") " pod="openstack/nova-api-db-create-g4wrk" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.137881 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hbrn\" (UniqueName: \"kubernetes.io/projected/dd836d54-e3a7-45f2-a3ec-21b73cc38496-kube-api-access-2hbrn\") pod \"nova-cell0-db-create-z8kq2\" (UID: \"dd836d54-e3a7-45f2-a3ec-21b73cc38496\") " pod="openstack/nova-cell0-db-create-z8kq2" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.156266 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6jrg\" (UniqueName: \"kubernetes.io/projected/aac68cce-b443-4f57-89f7-6c7fee4fcd32-kube-api-access-d6jrg\") pod \"nova-api-db-create-g4wrk\" (UID: \"aac68cce-b443-4f57-89f7-6c7fee4fcd32\") " pod="openstack/nova-api-db-create-g4wrk" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.195057 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-g4wrk" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.240716 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ssm2\" (UniqueName: \"kubernetes.io/projected/4b8250e7-12f5-45b5-a5c3-2fbc770df268-kube-api-access-8ssm2\") pod \"nova-cell1-db-create-mtqh6\" (UID: \"4b8250e7-12f5-45b5-a5c3-2fbc770df268\") " pod="openstack/nova-cell1-db-create-mtqh6" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.240774 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hbrn\" (UniqueName: \"kubernetes.io/projected/dd836d54-e3a7-45f2-a3ec-21b73cc38496-kube-api-access-2hbrn\") pod \"nova-cell0-db-create-z8kq2\" (UID: \"dd836d54-e3a7-45f2-a3ec-21b73cc38496\") " pod="openstack/nova-cell0-db-create-z8kq2" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.257367 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hbrn\" (UniqueName: \"kubernetes.io/projected/dd836d54-e3a7-45f2-a3ec-21b73cc38496-kube-api-access-2hbrn\") pod \"nova-cell0-db-create-z8kq2\" (UID: \"dd836d54-e3a7-45f2-a3ec-21b73cc38496\") " pod="openstack/nova-cell0-db-create-z8kq2" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.314633 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-z8kq2" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.342678 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ssm2\" (UniqueName: \"kubernetes.io/projected/4b8250e7-12f5-45b5-a5c3-2fbc770df268-kube-api-access-8ssm2\") pod \"nova-cell1-db-create-mtqh6\" (UID: \"4b8250e7-12f5-45b5-a5c3-2fbc770df268\") " pod="openstack/nova-cell1-db-create-mtqh6" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.365706 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ssm2\" (UniqueName: \"kubernetes.io/projected/4b8250e7-12f5-45b5-a5c3-2fbc770df268-kube-api-access-8ssm2\") pod \"nova-cell1-db-create-mtqh6\" (UID: \"4b8250e7-12f5-45b5-a5c3-2fbc770df268\") " pod="openstack/nova-cell1-db-create-mtqh6" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.413651 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mtqh6" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.449148 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cc94ec2-44b7-461b-bcea-691ea7b00ee5","Type":"ContainerStarted","Data":"85772ea9d58e1f9499c1692b5187586c59aae8867c70dc4043d9d818fc5d27c5"} Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.450504 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.466191 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-g4wrk"] Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.910927 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.662627696 podStartE2EDuration="5.910909484s" podCreationTimestamp="2025-10-08 06:54:04 +0000 UTC" firstStartedPulling="2025-10-08 06:54:05.26555931 
+0000 UTC m=+1188.395251911" lastFinishedPulling="2025-10-08 06:54:08.513841098 +0000 UTC m=+1191.643533699" observedRunningTime="2025-10-08 06:54:09.499462629 +0000 UTC m=+1192.629155230" watchObservedRunningTime="2025-10-08 06:54:09.910909484 +0000 UTC m=+1193.040602075" Oct 08 06:54:09 crc kubenswrapper[4958]: I1008 06:54:09.919362 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-z8kq2"] Oct 08 06:54:09 crc kubenswrapper[4958]: W1008 06:54:09.921233 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd836d54_e3a7_45f2_a3ec_21b73cc38496.slice/crio-d7b039f10abed0a1604652f349404458365f5f46b3cf7d0f66ab7839030976d2 WatchSource:0}: Error finding container d7b039f10abed0a1604652f349404458365f5f46b3cf7d0f66ab7839030976d2: Status 404 returned error can't find the container with id d7b039f10abed0a1604652f349404458365f5f46b3cf7d0f66ab7839030976d2 Oct 08 06:54:10 crc kubenswrapper[4958]: I1008 06:54:10.057380 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mtqh6"] Oct 08 06:54:10 crc kubenswrapper[4958]: W1008 06:54:10.062135 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b8250e7_12f5_45b5_a5c3_2fbc770df268.slice/crio-800764e826b0092db2af3fb1bb2bf1bbb5edaa8490420cc79531403eb4539849 WatchSource:0}: Error finding container 800764e826b0092db2af3fb1bb2bf1bbb5edaa8490420cc79531403eb4539849: Status 404 returned error can't find the container with id 800764e826b0092db2af3fb1bb2bf1bbb5edaa8490420cc79531403eb4539849 Oct 08 06:54:10 crc kubenswrapper[4958]: I1008 06:54:10.463324 4958 generic.go:334] "Generic (PLEG): container finished" podID="aac68cce-b443-4f57-89f7-6c7fee4fcd32" containerID="311e69da329a81b4b20d7020272df4e6a19f16a07d1eae5dbd73b8bf3a8f4450" exitCode=0 Oct 08 06:54:10 crc kubenswrapper[4958]: I1008 06:54:10.463522 
4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g4wrk" event={"ID":"aac68cce-b443-4f57-89f7-6c7fee4fcd32","Type":"ContainerDied","Data":"311e69da329a81b4b20d7020272df4e6a19f16a07d1eae5dbd73b8bf3a8f4450"} Oct 08 06:54:10 crc kubenswrapper[4958]: I1008 06:54:10.464833 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g4wrk" event={"ID":"aac68cce-b443-4f57-89f7-6c7fee4fcd32","Type":"ContainerStarted","Data":"5194da0fe241fc1f069a31c37a75a108d0e0546078d626dd0fc251de1dce7b7f"} Oct 08 06:54:10 crc kubenswrapper[4958]: I1008 06:54:10.468929 4958 generic.go:334] "Generic (PLEG): container finished" podID="dd836d54-e3a7-45f2-a3ec-21b73cc38496" containerID="d1f6b1da427fff30761b599d2a730ce90f82550f72826fba714f3fa780f61642" exitCode=0 Oct 08 06:54:10 crc kubenswrapper[4958]: I1008 06:54:10.469031 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z8kq2" event={"ID":"dd836d54-e3a7-45f2-a3ec-21b73cc38496","Type":"ContainerDied","Data":"d1f6b1da427fff30761b599d2a730ce90f82550f72826fba714f3fa780f61642"} Oct 08 06:54:10 crc kubenswrapper[4958]: I1008 06:54:10.469100 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z8kq2" event={"ID":"dd836d54-e3a7-45f2-a3ec-21b73cc38496","Type":"ContainerStarted","Data":"d7b039f10abed0a1604652f349404458365f5f46b3cf7d0f66ab7839030976d2"} Oct 08 06:54:10 crc kubenswrapper[4958]: I1008 06:54:10.471098 4958 generic.go:334] "Generic (PLEG): container finished" podID="4b8250e7-12f5-45b5-a5c3-2fbc770df268" containerID="12d6d9a9b735ad1f2113d0bdb61dd29cf346b3ee0134e7b01562c403624ea09d" exitCode=0 Oct 08 06:54:10 crc kubenswrapper[4958]: I1008 06:54:10.471218 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mtqh6" 
event={"ID":"4b8250e7-12f5-45b5-a5c3-2fbc770df268","Type":"ContainerDied","Data":"12d6d9a9b735ad1f2113d0bdb61dd29cf346b3ee0134e7b01562c403624ea09d"} Oct 08 06:54:10 crc kubenswrapper[4958]: I1008 06:54:10.471345 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mtqh6" event={"ID":"4b8250e7-12f5-45b5-a5c3-2fbc770df268","Type":"ContainerStarted","Data":"800764e826b0092db2af3fb1bb2bf1bbb5edaa8490420cc79531403eb4539849"} Oct 08 06:54:11 crc kubenswrapper[4958]: I1008 06:54:11.281139 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 08 06:54:11 crc kubenswrapper[4958]: I1008 06:54:11.281512 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 08 06:54:11 crc kubenswrapper[4958]: I1008 06:54:11.322216 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 08 06:54:11 crc kubenswrapper[4958]: I1008 06:54:11.341124 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 08 06:54:11 crc kubenswrapper[4958]: I1008 06:54:11.482734 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 08 06:54:11 crc kubenswrapper[4958]: I1008 06:54:11.482763 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 08 06:54:11 crc kubenswrapper[4958]: I1008 06:54:11.925547 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g4wrk" Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.094191 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mtqh6" Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.101079 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-z8kq2" Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.119783 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6jrg\" (UniqueName: \"kubernetes.io/projected/aac68cce-b443-4f57-89f7-6c7fee4fcd32-kube-api-access-d6jrg\") pod \"aac68cce-b443-4f57-89f7-6c7fee4fcd32\" (UID: \"aac68cce-b443-4f57-89f7-6c7fee4fcd32\") " Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.126892 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac68cce-b443-4f57-89f7-6c7fee4fcd32-kube-api-access-d6jrg" (OuterVolumeSpecName: "kube-api-access-d6jrg") pod "aac68cce-b443-4f57-89f7-6c7fee4fcd32" (UID: "aac68cce-b443-4f57-89f7-6c7fee4fcd32"). InnerVolumeSpecName "kube-api-access-d6jrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.221624 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ssm2\" (UniqueName: \"kubernetes.io/projected/4b8250e7-12f5-45b5-a5c3-2fbc770df268-kube-api-access-8ssm2\") pod \"4b8250e7-12f5-45b5-a5c3-2fbc770df268\" (UID: \"4b8250e7-12f5-45b5-a5c3-2fbc770df268\") " Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.221772 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hbrn\" (UniqueName: \"kubernetes.io/projected/dd836d54-e3a7-45f2-a3ec-21b73cc38496-kube-api-access-2hbrn\") pod \"dd836d54-e3a7-45f2-a3ec-21b73cc38496\" (UID: \"dd836d54-e3a7-45f2-a3ec-21b73cc38496\") " Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.222186 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6jrg\" (UniqueName: \"kubernetes.io/projected/aac68cce-b443-4f57-89f7-6c7fee4fcd32-kube-api-access-d6jrg\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.224636 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b8250e7-12f5-45b5-a5c3-2fbc770df268-kube-api-access-8ssm2" (OuterVolumeSpecName: "kube-api-access-8ssm2") pod "4b8250e7-12f5-45b5-a5c3-2fbc770df268" (UID: "4b8250e7-12f5-45b5-a5c3-2fbc770df268"). InnerVolumeSpecName "kube-api-access-8ssm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.224994 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd836d54-e3a7-45f2-a3ec-21b73cc38496-kube-api-access-2hbrn" (OuterVolumeSpecName: "kube-api-access-2hbrn") pod "dd836d54-e3a7-45f2-a3ec-21b73cc38496" (UID: "dd836d54-e3a7-45f2-a3ec-21b73cc38496"). InnerVolumeSpecName "kube-api-access-2hbrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.323890 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ssm2\" (UniqueName: \"kubernetes.io/projected/4b8250e7-12f5-45b5-a5c3-2fbc770df268-kube-api-access-8ssm2\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.323926 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hbrn\" (UniqueName: \"kubernetes.io/projected/dd836d54-e3a7-45f2-a3ec-21b73cc38496-kube-api-access-2hbrn\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.490067 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g4wrk" event={"ID":"aac68cce-b443-4f57-89f7-6c7fee4fcd32","Type":"ContainerDied","Data":"5194da0fe241fc1f069a31c37a75a108d0e0546078d626dd0fc251de1dce7b7f"} Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.490104 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5194da0fe241fc1f069a31c37a75a108d0e0546078d626dd0fc251de1dce7b7f" Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.490191 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g4wrk" Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.491469 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z8kq2" event={"ID":"dd836d54-e3a7-45f2-a3ec-21b73cc38496","Type":"ContainerDied","Data":"d7b039f10abed0a1604652f349404458365f5f46b3cf7d0f66ab7839030976d2"} Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.491486 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7b039f10abed0a1604652f349404458365f5f46b3cf7d0f66ab7839030976d2" Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.491532 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-z8kq2" Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.492668 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mtqh6" event={"ID":"4b8250e7-12f5-45b5-a5c3-2fbc770df268","Type":"ContainerDied","Data":"800764e826b0092db2af3fb1bb2bf1bbb5edaa8490420cc79531403eb4539849"} Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.492728 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mtqh6" Oct 08 06:54:12 crc kubenswrapper[4958]: I1008 06:54:12.492752 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="800764e826b0092db2af3fb1bb2bf1bbb5edaa8490420cc79531403eb4539849" Oct 08 06:54:13 crc kubenswrapper[4958]: I1008 06:54:13.289713 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 08 06:54:13 crc kubenswrapper[4958]: I1008 06:54:13.292713 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 08 06:54:13 crc kubenswrapper[4958]: I1008 06:54:13.710724 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 08 06:54:13 crc kubenswrapper[4958]: I1008 06:54:13.710773 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 08 06:54:13 crc kubenswrapper[4958]: I1008 06:54:13.815736 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 08 06:54:13 crc kubenswrapper[4958]: I1008 06:54:13.830694 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 08 06:54:14 crc kubenswrapper[4958]: I1008 06:54:14.525800 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 08 06:54:14 crc kubenswrapper[4958]: I1008 06:54:14.526107 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 08 06:54:16 crc kubenswrapper[4958]: I1008 06:54:16.429321 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 08 06:54:16 crc kubenswrapper[4958]: I1008 06:54:16.433674 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.035001 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c226-account-create-92j8q"] Oct 08 06:54:19 crc kubenswrapper[4958]: E1008 06:54:19.036318 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8250e7-12f5-45b5-a5c3-2fbc770df268" containerName="mariadb-database-create" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.036353 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8250e7-12f5-45b5-a5c3-2fbc770df268" containerName="mariadb-database-create" Oct 08 06:54:19 crc kubenswrapper[4958]: E1008 06:54:19.036398 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd836d54-e3a7-45f2-a3ec-21b73cc38496" containerName="mariadb-database-create" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.036417 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd836d54-e3a7-45f2-a3ec-21b73cc38496" containerName="mariadb-database-create" Oct 08 06:54:19 crc kubenswrapper[4958]: E1008 06:54:19.036455 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac68cce-b443-4f57-89f7-6c7fee4fcd32" containerName="mariadb-database-create" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.036476 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac68cce-b443-4f57-89f7-6c7fee4fcd32" 
containerName="mariadb-database-create" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.036939 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="aac68cce-b443-4f57-89f7-6c7fee4fcd32" containerName="mariadb-database-create" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.037023 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b8250e7-12f5-45b5-a5c3-2fbc770df268" containerName="mariadb-database-create" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.037056 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd836d54-e3a7-45f2-a3ec-21b73cc38496" containerName="mariadb-database-create" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.038410 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c226-account-create-92j8q" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.041833 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.048552 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c226-account-create-92j8q"] Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.067067 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4wzs\" (UniqueName: \"kubernetes.io/projected/0698e470-ea79-4a02-8301-bf39a58d8901-kube-api-access-l4wzs\") pod \"nova-api-c226-account-create-92j8q\" (UID: \"0698e470-ea79-4a02-8301-bf39a58d8901\") " pod="openstack/nova-api-c226-account-create-92j8q" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.168428 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4wzs\" (UniqueName: \"kubernetes.io/projected/0698e470-ea79-4a02-8301-bf39a58d8901-kube-api-access-l4wzs\") pod \"nova-api-c226-account-create-92j8q\" (UID: \"0698e470-ea79-4a02-8301-bf39a58d8901\") " 
pod="openstack/nova-api-c226-account-create-92j8q" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.196557 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4wzs\" (UniqueName: \"kubernetes.io/projected/0698e470-ea79-4a02-8301-bf39a58d8901-kube-api-access-l4wzs\") pod \"nova-api-c226-account-create-92j8q\" (UID: \"0698e470-ea79-4a02-8301-bf39a58d8901\") " pod="openstack/nova-api-c226-account-create-92j8q" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.237870 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2e10-account-create-sc4mp"] Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.239276 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2e10-account-create-sc4mp" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.241671 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.250294 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2e10-account-create-sc4mp"] Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.270890 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdpld\" (UniqueName: \"kubernetes.io/projected/7db6755b-4975-45c8-aab5-594b40778231-kube-api-access-wdpld\") pod \"nova-cell0-2e10-account-create-sc4mp\" (UID: \"7db6755b-4975-45c8-aab5-594b40778231\") " pod="openstack/nova-cell0-2e10-account-create-sc4mp" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.326510 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-fc4a-account-create-ct9gp"] Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.327523 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-fc4a-account-create-ct9gp" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.330254 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.345064 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-fc4a-account-create-ct9gp"] Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.368870 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c226-account-create-92j8q" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.372879 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdpld\" (UniqueName: \"kubernetes.io/projected/7db6755b-4975-45c8-aab5-594b40778231-kube-api-access-wdpld\") pod \"nova-cell0-2e10-account-create-sc4mp\" (UID: \"7db6755b-4975-45c8-aab5-594b40778231\") " pod="openstack/nova-cell0-2e10-account-create-sc4mp" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.372941 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lq69\" (UniqueName: \"kubernetes.io/projected/7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5-kube-api-access-4lq69\") pod \"nova-cell1-fc4a-account-create-ct9gp\" (UID: \"7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5\") " pod="openstack/nova-cell1-fc4a-account-create-ct9gp" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.408341 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdpld\" (UniqueName: \"kubernetes.io/projected/7db6755b-4975-45c8-aab5-594b40778231-kube-api-access-wdpld\") pod \"nova-cell0-2e10-account-create-sc4mp\" (UID: \"7db6755b-4975-45c8-aab5-594b40778231\") " pod="openstack/nova-cell0-2e10-account-create-sc4mp" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.477501 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4lq69\" (UniqueName: \"kubernetes.io/projected/7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5-kube-api-access-4lq69\") pod \"nova-cell1-fc4a-account-create-ct9gp\" (UID: \"7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5\") " pod="openstack/nova-cell1-fc4a-account-create-ct9gp" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.509676 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lq69\" (UniqueName: \"kubernetes.io/projected/7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5-kube-api-access-4lq69\") pod \"nova-cell1-fc4a-account-create-ct9gp\" (UID: \"7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5\") " pod="openstack/nova-cell1-fc4a-account-create-ct9gp" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.579186 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2e10-account-create-sc4mp" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.645819 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-fc4a-account-create-ct9gp" Oct 08 06:54:19 crc kubenswrapper[4958]: I1008 06:54:19.859855 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c226-account-create-92j8q"] Oct 08 06:54:19 crc kubenswrapper[4958]: W1008 06:54:19.866896 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0698e470_ea79_4a02_8301_bf39a58d8901.slice/crio-6f70f95ec8bd8dcf6341ae2ae9d5452c3f305f23165a6c1c285d4c49acecff5c WatchSource:0}: Error finding container 6f70f95ec8bd8dcf6341ae2ae9d5452c3f305f23165a6c1c285d4c49acecff5c: Status 404 returned error can't find the container with id 6f70f95ec8bd8dcf6341ae2ae9d5452c3f305f23165a6c1c285d4c49acecff5c Oct 08 06:54:20 crc kubenswrapper[4958]: I1008 06:54:20.123021 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2e10-account-create-sc4mp"] Oct 08 06:54:20 crc kubenswrapper[4958]: W1008 06:54:20.140358 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7db6755b_4975_45c8_aab5_594b40778231.slice/crio-8be7d38bad1d5a0d200654d1f4d2e3aa5f4b4dc73b985e89e5de365a7a9c50a9 WatchSource:0}: Error finding container 8be7d38bad1d5a0d200654d1f4d2e3aa5f4b4dc73b985e89e5de365a7a9c50a9: Status 404 returned error can't find the container with id 8be7d38bad1d5a0d200654d1f4d2e3aa5f4b4dc73b985e89e5de365a7a9c50a9 Oct 08 06:54:20 crc kubenswrapper[4958]: I1008 06:54:20.196520 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-fc4a-account-create-ct9gp"] Oct 08 06:54:20 crc kubenswrapper[4958]: I1008 06:54:20.608085 4958 generic.go:334] "Generic (PLEG): container finished" podID="7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5" containerID="0651019d24af5197e8686a21c7aeaa99d9c4dc914b6474540c6458ac12124456" exitCode=0 Oct 08 06:54:20 crc kubenswrapper[4958]: I1008 06:54:20.608144 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fc4a-account-create-ct9gp" event={"ID":"7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5","Type":"ContainerDied","Data":"0651019d24af5197e8686a21c7aeaa99d9c4dc914b6474540c6458ac12124456"} Oct 08 06:54:20 crc kubenswrapper[4958]: I1008 06:54:20.608445 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fc4a-account-create-ct9gp" event={"ID":"7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5","Type":"ContainerStarted","Data":"bf827f7786ec73dc98bed389ad8eead4faf7c57cd06ddb05fc513345c0a8e0e4"} Oct 08 06:54:20 crc kubenswrapper[4958]: I1008 06:54:20.612312 4958 generic.go:334] "Generic (PLEG): container finished" podID="7db6755b-4975-45c8-aab5-594b40778231" containerID="a4d7a2c0b6159dc1881b855e89b4f08a8c4529f260286c802640dac3d986e08a" exitCode=0 Oct 08 06:54:20 crc kubenswrapper[4958]: I1008 06:54:20.612471 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2e10-account-create-sc4mp" event={"ID":"7db6755b-4975-45c8-aab5-594b40778231","Type":"ContainerDied","Data":"a4d7a2c0b6159dc1881b855e89b4f08a8c4529f260286c802640dac3d986e08a"} Oct 08 06:54:20 crc kubenswrapper[4958]: I1008 06:54:20.612506 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2e10-account-create-sc4mp" event={"ID":"7db6755b-4975-45c8-aab5-594b40778231","Type":"ContainerStarted","Data":"8be7d38bad1d5a0d200654d1f4d2e3aa5f4b4dc73b985e89e5de365a7a9c50a9"} Oct 08 06:54:20 crc kubenswrapper[4958]: I1008 06:54:20.621439 4958 generic.go:334] "Generic (PLEG): container finished" podID="0698e470-ea79-4a02-8301-bf39a58d8901" containerID="34ccb660beacfb2c4633a83b17ca0ed0d4cda30750454a7887a809a2ba97e2e1" exitCode=0 Oct 08 06:54:20 crc kubenswrapper[4958]: I1008 06:54:20.621490 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c226-account-create-92j8q" 
event={"ID":"0698e470-ea79-4a02-8301-bf39a58d8901","Type":"ContainerDied","Data":"34ccb660beacfb2c4633a83b17ca0ed0d4cda30750454a7887a809a2ba97e2e1"} Oct 08 06:54:20 crc kubenswrapper[4958]: I1008 06:54:20.621520 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c226-account-create-92j8q" event={"ID":"0698e470-ea79-4a02-8301-bf39a58d8901","Type":"ContainerStarted","Data":"6f70f95ec8bd8dcf6341ae2ae9d5452c3f305f23165a6c1c285d4c49acecff5c"} Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.152840 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2e10-account-create-sc4mp" Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.160606 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c226-account-create-92j8q" Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.165776 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fc4a-account-create-ct9gp" Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.243658 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4wzs\" (UniqueName: \"kubernetes.io/projected/0698e470-ea79-4a02-8301-bf39a58d8901-kube-api-access-l4wzs\") pod \"0698e470-ea79-4a02-8301-bf39a58d8901\" (UID: \"0698e470-ea79-4a02-8301-bf39a58d8901\") " Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.250652 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0698e470-ea79-4a02-8301-bf39a58d8901-kube-api-access-l4wzs" (OuterVolumeSpecName: "kube-api-access-l4wzs") pod "0698e470-ea79-4a02-8301-bf39a58d8901" (UID: "0698e470-ea79-4a02-8301-bf39a58d8901"). InnerVolumeSpecName "kube-api-access-l4wzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.345369 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdpld\" (UniqueName: \"kubernetes.io/projected/7db6755b-4975-45c8-aab5-594b40778231-kube-api-access-wdpld\") pod \"7db6755b-4975-45c8-aab5-594b40778231\" (UID: \"7db6755b-4975-45c8-aab5-594b40778231\") " Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.345467 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lq69\" (UniqueName: \"kubernetes.io/projected/7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5-kube-api-access-4lq69\") pod \"7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5\" (UID: \"7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5\") " Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.346217 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4wzs\" (UniqueName: \"kubernetes.io/projected/0698e470-ea79-4a02-8301-bf39a58d8901-kube-api-access-l4wzs\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.350091 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5-kube-api-access-4lq69" (OuterVolumeSpecName: "kube-api-access-4lq69") pod "7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5" (UID: "7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5"). InnerVolumeSpecName "kube-api-access-4lq69". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.354203 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db6755b-4975-45c8-aab5-594b40778231-kube-api-access-wdpld" (OuterVolumeSpecName: "kube-api-access-wdpld") pod "7db6755b-4975-45c8-aab5-594b40778231" (UID: "7db6755b-4975-45c8-aab5-594b40778231"). InnerVolumeSpecName "kube-api-access-wdpld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.447973 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdpld\" (UniqueName: \"kubernetes.io/projected/7db6755b-4975-45c8-aab5-594b40778231-kube-api-access-wdpld\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.448315 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lq69\" (UniqueName: \"kubernetes.io/projected/7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5-kube-api-access-4lq69\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.647753 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fc4a-account-create-ct9gp" event={"ID":"7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5","Type":"ContainerDied","Data":"bf827f7786ec73dc98bed389ad8eead4faf7c57cd06ddb05fc513345c0a8e0e4"} Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.647812 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf827f7786ec73dc98bed389ad8eead4faf7c57cd06ddb05fc513345c0a8e0e4" Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.647821 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-fc4a-account-create-ct9gp" Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.650874 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2e10-account-create-sc4mp" event={"ID":"7db6755b-4975-45c8-aab5-594b40778231","Type":"ContainerDied","Data":"8be7d38bad1d5a0d200654d1f4d2e3aa5f4b4dc73b985e89e5de365a7a9c50a9"} Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.650983 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8be7d38bad1d5a0d200654d1f4d2e3aa5f4b4dc73b985e89e5de365a7a9c50a9" Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.650914 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2e10-account-create-sc4mp" Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.654112 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c226-account-create-92j8q" event={"ID":"0698e470-ea79-4a02-8301-bf39a58d8901","Type":"ContainerDied","Data":"6f70f95ec8bd8dcf6341ae2ae9d5452c3f305f23165a6c1c285d4c49acecff5c"} Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.654165 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f70f95ec8bd8dcf6341ae2ae9d5452c3f305f23165a6c1c285d4c49acecff5c" Oct 08 06:54:22 crc kubenswrapper[4958]: I1008 06:54:22.654247 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c226-account-create-92j8q" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.640942 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jbvrr"] Oct 08 06:54:24 crc kubenswrapper[4958]: E1008 06:54:24.641537 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db6755b-4975-45c8-aab5-594b40778231" containerName="mariadb-account-create" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.641549 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db6755b-4975-45c8-aab5-594b40778231" containerName="mariadb-account-create" Oct 08 06:54:24 crc kubenswrapper[4958]: E1008 06:54:24.641566 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5" containerName="mariadb-account-create" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.641572 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5" containerName="mariadb-account-create" Oct 08 06:54:24 crc kubenswrapper[4958]: E1008 06:54:24.641586 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0698e470-ea79-4a02-8301-bf39a58d8901" containerName="mariadb-account-create" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.641593 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0698e470-ea79-4a02-8301-bf39a58d8901" containerName="mariadb-account-create" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.641763 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db6755b-4975-45c8-aab5-594b40778231" containerName="mariadb-account-create" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.641775 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5" containerName="mariadb-account-create" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.641792 4958 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0698e470-ea79-4a02-8301-bf39a58d8901" containerName="mariadb-account-create" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.642364 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.647867 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.647904 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7m9r9" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.648552 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.655374 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jbvrr"] Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.704662 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-scripts\") pod \"nova-cell0-conductor-db-sync-jbvrr\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.704929 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kx28\" (UniqueName: \"kubernetes.io/projected/a3c7170a-4267-4318-9772-c68efe1cb7e4-kube-api-access-6kx28\") pod \"nova-cell0-conductor-db-sync-jbvrr\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.705033 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-config-data\") pod \"nova-cell0-conductor-db-sync-jbvrr\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.705134 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jbvrr\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.807353 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kx28\" (UniqueName: \"kubernetes.io/projected/a3c7170a-4267-4318-9772-c68efe1cb7e4-kube-api-access-6kx28\") pod \"nova-cell0-conductor-db-sync-jbvrr\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.807438 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-config-data\") pod \"nova-cell0-conductor-db-sync-jbvrr\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.807510 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jbvrr\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.807588 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-scripts\") pod \"nova-cell0-conductor-db-sync-jbvrr\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.813244 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-scripts\") pod \"nova-cell0-conductor-db-sync-jbvrr\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.814245 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-config-data\") pod \"nova-cell0-conductor-db-sync-jbvrr\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.819662 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jbvrr\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:24 crc kubenswrapper[4958]: I1008 06:54:24.828861 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kx28\" (UniqueName: \"kubernetes.io/projected/a3c7170a-4267-4318-9772-c68efe1cb7e4-kube-api-access-6kx28\") pod \"nova-cell0-conductor-db-sync-jbvrr\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:25 crc kubenswrapper[4958]: I1008 06:54:25.006171 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:25 crc kubenswrapper[4958]: I1008 06:54:25.368475 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jbvrr"] Oct 08 06:54:25 crc kubenswrapper[4958]: I1008 06:54:25.725629 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jbvrr" event={"ID":"a3c7170a-4267-4318-9772-c68efe1cb7e4","Type":"ContainerStarted","Data":"2b5f16e77717fcb15c7f5e9cad2ed30aa79a2ee51f847b319a11c9c9b2211b5d"} Oct 08 06:54:32 crc kubenswrapper[4958]: I1008 06:54:32.797437 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jbvrr" event={"ID":"a3c7170a-4267-4318-9772-c68efe1cb7e4","Type":"ContainerStarted","Data":"b1552d7b8117bc2448cb0631d84ac39a24f319291ca1fbed35d8d9eb21a01721"} Oct 08 06:54:32 crc kubenswrapper[4958]: I1008 06:54:32.833588 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-jbvrr" podStartSLOduration=1.8774160659999999 podStartE2EDuration="8.833563875s" podCreationTimestamp="2025-10-08 06:54:24 +0000 UTC" firstStartedPulling="2025-10-08 06:54:25.369601651 +0000 UTC m=+1208.499294252" lastFinishedPulling="2025-10-08 06:54:32.32574946 +0000 UTC m=+1215.455442061" observedRunningTime="2025-10-08 06:54:32.823672828 +0000 UTC m=+1215.953365489" watchObservedRunningTime="2025-10-08 06:54:32.833563875 +0000 UTC m=+1215.963256516" Oct 08 06:54:34 crc kubenswrapper[4958]: I1008 06:54:34.784296 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 08 06:54:36 crc kubenswrapper[4958]: I1008 06:54:36.844886 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:54:36 crc kubenswrapper[4958]: I1008 06:54:36.846537 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:54:38 crc kubenswrapper[4958]: I1008 06:54:38.966941 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 06:54:38 crc kubenswrapper[4958]: I1008 06:54:38.967221 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="73dcc7e5-2f70-403a-abe2-2c67260864eb" containerName="kube-state-metrics" containerID="cri-o://e9c729c01cef628808e83d5607e3201dd2ae2ef6f5d07eb978900ceb9802752e" gracePeriod=30 Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.493880 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.535142 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk922\" (UniqueName: \"kubernetes.io/projected/73dcc7e5-2f70-403a-abe2-2c67260864eb-kube-api-access-zk922\") pod \"73dcc7e5-2f70-403a-abe2-2c67260864eb\" (UID: \"73dcc7e5-2f70-403a-abe2-2c67260864eb\") " Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.542229 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73dcc7e5-2f70-403a-abe2-2c67260864eb-kube-api-access-zk922" (OuterVolumeSpecName: "kube-api-access-zk922") pod "73dcc7e5-2f70-403a-abe2-2c67260864eb" (UID: "73dcc7e5-2f70-403a-abe2-2c67260864eb"). InnerVolumeSpecName "kube-api-access-zk922". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.637802 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk922\" (UniqueName: \"kubernetes.io/projected/73dcc7e5-2f70-403a-abe2-2c67260864eb-kube-api-access-zk922\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.880775 4958 generic.go:334] "Generic (PLEG): container finished" podID="73dcc7e5-2f70-403a-abe2-2c67260864eb" containerID="e9c729c01cef628808e83d5607e3201dd2ae2ef6f5d07eb978900ceb9802752e" exitCode=2 Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.880827 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"73dcc7e5-2f70-403a-abe2-2c67260864eb","Type":"ContainerDied","Data":"e9c729c01cef628808e83d5607e3201dd2ae2ef6f5d07eb978900ceb9802752e"} Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.880843 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.880876 4958 scope.go:117] "RemoveContainer" containerID="e9c729c01cef628808e83d5607e3201dd2ae2ef6f5d07eb978900ceb9802752e" Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.880860 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"73dcc7e5-2f70-403a-abe2-2c67260864eb","Type":"ContainerDied","Data":"49f6535634718ec7220f6d832efe30f9b0dfe3e685ea662ca725a467af190cf7"} Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.902159 4958 scope.go:117] "RemoveContainer" containerID="e9c729c01cef628808e83d5607e3201dd2ae2ef6f5d07eb978900ceb9802752e" Oct 08 06:54:39 crc kubenswrapper[4958]: E1008 06:54:39.902623 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9c729c01cef628808e83d5607e3201dd2ae2ef6f5d07eb978900ceb9802752e\": container with ID starting with e9c729c01cef628808e83d5607e3201dd2ae2ef6f5d07eb978900ceb9802752e not found: ID does not exist" containerID="e9c729c01cef628808e83d5607e3201dd2ae2ef6f5d07eb978900ceb9802752e" Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.902666 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9c729c01cef628808e83d5607e3201dd2ae2ef6f5d07eb978900ceb9802752e"} err="failed to get container status \"e9c729c01cef628808e83d5607e3201dd2ae2ef6f5d07eb978900ceb9802752e\": rpc error: code = NotFound desc = could not find container \"e9c729c01cef628808e83d5607e3201dd2ae2ef6f5d07eb978900ceb9802752e\": container with ID starting with e9c729c01cef628808e83d5607e3201dd2ae2ef6f5d07eb978900ceb9802752e not found: ID does not exist" Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.911897 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 
06:54:39.924799 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.938126 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 06:54:39 crc kubenswrapper[4958]: E1008 06:54:39.938729 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73dcc7e5-2f70-403a-abe2-2c67260864eb" containerName="kube-state-metrics" Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.938759 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="73dcc7e5-2f70-403a-abe2-2c67260864eb" containerName="kube-state-metrics" Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.939140 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="73dcc7e5-2f70-403a-abe2-2c67260864eb" containerName="kube-state-metrics" Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.940085 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.943246 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.946466 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 06:54:39 crc kubenswrapper[4958]: I1008 06:54:39.956290 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.048910 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj2h6\" (UniqueName: \"kubernetes.io/projected/4303db42-6842-4d26-bf89-79755d0db57d-kube-api-access-zj2h6\") pod \"kube-state-metrics-0\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " pod="openstack/kube-state-metrics-0" Oct 08 06:54:40 crc kubenswrapper[4958]: 
I1008 06:54:40.049013 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " pod="openstack/kube-state-metrics-0" Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.049105 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " pod="openstack/kube-state-metrics-0" Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.049132 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " pod="openstack/kube-state-metrics-0" Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.150242 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " pod="openstack/kube-state-metrics-0" Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.150298 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " pod="openstack/kube-state-metrics-0" Oct 08 06:54:40 crc 
kubenswrapper[4958]: I1008 06:54:40.150455 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj2h6\" (UniqueName: \"kubernetes.io/projected/4303db42-6842-4d26-bf89-79755d0db57d-kube-api-access-zj2h6\") pod \"kube-state-metrics-0\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " pod="openstack/kube-state-metrics-0" Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.150494 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " pod="openstack/kube-state-metrics-0" Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.157642 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " pod="openstack/kube-state-metrics-0" Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.157651 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " pod="openstack/kube-state-metrics-0" Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.160161 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " pod="openstack/kube-state-metrics-0" Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.181335 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj2h6\" (UniqueName: \"kubernetes.io/projected/4303db42-6842-4d26-bf89-79755d0db57d-kube-api-access-zj2h6\") pod \"kube-state-metrics-0\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " pod="openstack/kube-state-metrics-0" Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.259532 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 06:54:40 crc kubenswrapper[4958]: W1008 06:54:40.779988 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4303db42_6842_4d26_bf89_79755d0db57d.slice/crio-878ba928bfeb9ac924c7cb4ec028ff8adf612df4c52b8bad0a1048afd8a3f0fc WatchSource:0}: Error finding container 878ba928bfeb9ac924c7cb4ec028ff8adf612df4c52b8bad0a1048afd8a3f0fc: Status 404 returned error can't find the container with id 878ba928bfeb9ac924c7cb4ec028ff8adf612df4c52b8bad0a1048afd8a3f0fc Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.780691 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.894255 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4303db42-6842-4d26-bf89-79755d0db57d","Type":"ContainerStarted","Data":"878ba928bfeb9ac924c7cb4ec028ff8adf612df4c52b8bad0a1048afd8a3f0fc"} Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.972096 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.972446 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="ceilometer-central-agent" containerID="cri-o://f7d3a2e24e36cb843175ea076666289a75ce06294357e1d7f9f67d7974e2b8a2" gracePeriod=30 Oct 
08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.972567 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="ceilometer-notification-agent" containerID="cri-o://3ad1bac1f4b3c727bc4312c1f9a444c3a8f983b58148ca3fc6e195b99513859a" gracePeriod=30 Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.972880 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="proxy-httpd" containerID="cri-o://85772ea9d58e1f9499c1692b5187586c59aae8867c70dc4043d9d818fc5d27c5" gracePeriod=30 Oct 08 06:54:40 crc kubenswrapper[4958]: I1008 06:54:40.974899 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="sg-core" containerID="cri-o://9fb3d91b7b2f5366677ef6e3ddcbd73638676fa5edbe579060028418b61d5a79" gracePeriod=30 Oct 08 06:54:41 crc kubenswrapper[4958]: I1008 06:54:41.588236 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73dcc7e5-2f70-403a-abe2-2c67260864eb" path="/var/lib/kubelet/pods/73dcc7e5-2f70-403a-abe2-2c67260864eb/volumes" Oct 08 06:54:41 crc kubenswrapper[4958]: I1008 06:54:41.908875 4958 generic.go:334] "Generic (PLEG): container finished" podID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerID="85772ea9d58e1f9499c1692b5187586c59aae8867c70dc4043d9d818fc5d27c5" exitCode=0 Oct 08 06:54:41 crc kubenswrapper[4958]: I1008 06:54:41.908925 4958 generic.go:334] "Generic (PLEG): container finished" podID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerID="9fb3d91b7b2f5366677ef6e3ddcbd73638676fa5edbe579060028418b61d5a79" exitCode=2 Oct 08 06:54:41 crc kubenswrapper[4958]: I1008 06:54:41.908933 4958 generic.go:334] "Generic (PLEG): container finished" podID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" 
containerID="f7d3a2e24e36cb843175ea076666289a75ce06294357e1d7f9f67d7974e2b8a2" exitCode=0 Oct 08 06:54:41 crc kubenswrapper[4958]: I1008 06:54:41.908988 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cc94ec2-44b7-461b-bcea-691ea7b00ee5","Type":"ContainerDied","Data":"85772ea9d58e1f9499c1692b5187586c59aae8867c70dc4043d9d818fc5d27c5"} Oct 08 06:54:41 crc kubenswrapper[4958]: I1008 06:54:41.909042 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cc94ec2-44b7-461b-bcea-691ea7b00ee5","Type":"ContainerDied","Data":"9fb3d91b7b2f5366677ef6e3ddcbd73638676fa5edbe579060028418b61d5a79"} Oct 08 06:54:41 crc kubenswrapper[4958]: I1008 06:54:41.909063 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cc94ec2-44b7-461b-bcea-691ea7b00ee5","Type":"ContainerDied","Data":"f7d3a2e24e36cb843175ea076666289a75ce06294357e1d7f9f67d7974e2b8a2"} Oct 08 06:54:41 crc kubenswrapper[4958]: I1008 06:54:41.910846 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4303db42-6842-4d26-bf89-79755d0db57d","Type":"ContainerStarted","Data":"cc400959116b34b9a987bb5b45ad7f715a7f6d889a0bb90cd853ee49a2f5a81c"} Oct 08 06:54:41 crc kubenswrapper[4958]: I1008 06:54:41.911164 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 08 06:54:41 crc kubenswrapper[4958]: I1008 06:54:41.934393 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.575618976 podStartE2EDuration="2.934374658s" podCreationTimestamp="2025-10-08 06:54:39 +0000 UTC" firstStartedPulling="2025-10-08 06:54:40.784930125 +0000 UTC m=+1223.914622726" lastFinishedPulling="2025-10-08 06:54:41.143685807 +0000 UTC m=+1224.273378408" observedRunningTime="2025-10-08 06:54:41.929485456 +0000 UTC m=+1225.059178127" 
watchObservedRunningTime="2025-10-08 06:54:41.934374658 +0000 UTC m=+1225.064067259" Oct 08 06:54:42 crc kubenswrapper[4958]: I1008 06:54:42.921980 4958 generic.go:334] "Generic (PLEG): container finished" podID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerID="3ad1bac1f4b3c727bc4312c1f9a444c3a8f983b58148ca3fc6e195b99513859a" exitCode=0 Oct 08 06:54:42 crc kubenswrapper[4958]: I1008 06:54:42.922043 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cc94ec2-44b7-461b-bcea-691ea7b00ee5","Type":"ContainerDied","Data":"3ad1bac1f4b3c727bc4312c1f9a444c3a8f983b58148ca3fc6e195b99513859a"} Oct 08 06:54:42 crc kubenswrapper[4958]: I1008 06:54:42.922318 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cc94ec2-44b7-461b-bcea-691ea7b00ee5","Type":"ContainerDied","Data":"da3bdea763e5c61d2c0ba4c15361840608f8e592b0f94f15e8ea4490b1ea32d9"} Oct 08 06:54:42 crc kubenswrapper[4958]: I1008 06:54:42.922333 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da3bdea763e5c61d2c0ba4c15361840608f8e592b0f94f15e8ea4490b1ea32d9" Oct 08 06:54:42 crc kubenswrapper[4958]: I1008 06:54:42.930081 4958 generic.go:334] "Generic (PLEG): container finished" podID="a3c7170a-4267-4318-9772-c68efe1cb7e4" containerID="b1552d7b8117bc2448cb0631d84ac39a24f319291ca1fbed35d8d9eb21a01721" exitCode=0 Oct 08 06:54:42 crc kubenswrapper[4958]: I1008 06:54:42.930352 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jbvrr" event={"ID":"a3c7170a-4267-4318-9772-c68efe1cb7e4","Type":"ContainerDied","Data":"b1552d7b8117bc2448cb0631d84ac39a24f319291ca1fbed35d8d9eb21a01721"} Oct 08 06:54:42 crc kubenswrapper[4958]: I1008 06:54:42.940563 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.005047 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-scripts\") pod \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.005224 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-log-httpd\") pod \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.005268 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-sg-core-conf-yaml\") pod \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.005290 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-combined-ca-bundle\") pod \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.005362 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-config-data\") pod \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.005378 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-run-httpd\") pod \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.005441 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbkwk\" (UniqueName: \"kubernetes.io/projected/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-kube-api-access-dbkwk\") pod \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.012722 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-scripts" (OuterVolumeSpecName: "scripts") pod "5cc94ec2-44b7-461b-bcea-691ea7b00ee5" (UID: "5cc94ec2-44b7-461b-bcea-691ea7b00ee5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.013177 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5cc94ec2-44b7-461b-bcea-691ea7b00ee5" (UID: "5cc94ec2-44b7-461b-bcea-691ea7b00ee5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.013682 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5cc94ec2-44b7-461b-bcea-691ea7b00ee5" (UID: "5cc94ec2-44b7-461b-bcea-691ea7b00ee5"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.016165 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-kube-api-access-dbkwk" (OuterVolumeSpecName: "kube-api-access-dbkwk") pod "5cc94ec2-44b7-461b-bcea-691ea7b00ee5" (UID: "5cc94ec2-44b7-461b-bcea-691ea7b00ee5"). InnerVolumeSpecName "kube-api-access-dbkwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.064597 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5cc94ec2-44b7-461b-bcea-691ea7b00ee5" (UID: "5cc94ec2-44b7-461b-bcea-691ea7b00ee5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.106634 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cc94ec2-44b7-461b-bcea-691ea7b00ee5" (UID: "5cc94ec2-44b7-461b-bcea-691ea7b00ee5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.106934 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-combined-ca-bundle\") pod \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\" (UID: \"5cc94ec2-44b7-461b-bcea-691ea7b00ee5\") " Oct 08 06:54:43 crc kubenswrapper[4958]: W1008 06:54:43.107120 4958 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5cc94ec2-44b7-461b-bcea-691ea7b00ee5/volumes/kubernetes.io~secret/combined-ca-bundle Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.107190 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cc94ec2-44b7-461b-bcea-691ea7b00ee5" (UID: "5cc94ec2-44b7-461b-bcea-691ea7b00ee5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.107713 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.107782 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.107844 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.107902 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.107990 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.108063 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbkwk\" (UniqueName: \"kubernetes.io/projected/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-kube-api-access-dbkwk\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.123424 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-config-data" (OuterVolumeSpecName: "config-data") pod "5cc94ec2-44b7-461b-bcea-691ea7b00ee5" (UID: "5cc94ec2-44b7-461b-bcea-691ea7b00ee5"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.210092 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc94ec2-44b7-461b-bcea-691ea7b00ee5-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.943062 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.978463 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:54:43 crc kubenswrapper[4958]: I1008 06:54:43.986647 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.007757 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:54:44 crc kubenswrapper[4958]: E1008 06:54:44.008148 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="ceilometer-notification-agent" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.008162 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="ceilometer-notification-agent" Oct 08 06:54:44 crc kubenswrapper[4958]: E1008 06:54:44.008177 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="proxy-httpd" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.008183 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="proxy-httpd" Oct 08 06:54:44 crc kubenswrapper[4958]: E1008 06:54:44.008196 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="sg-core" Oct 08 06:54:44 crc 
kubenswrapper[4958]: I1008 06:54:44.008202 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="sg-core" Oct 08 06:54:44 crc kubenswrapper[4958]: E1008 06:54:44.008237 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="ceilometer-central-agent" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.008243 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="ceilometer-central-agent" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.008407 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="sg-core" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.008423 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="ceilometer-central-agent" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.008432 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="ceilometer-notification-agent" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.008447 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" containerName="proxy-httpd" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.009974 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.014209 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.014592 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.014828 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.035273 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.126604 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.126668 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8d3300d-5acc-450e-8820-219933eeac53-run-httpd\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.127066 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-scripts\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.127192 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lrrcj\" (UniqueName: \"kubernetes.io/projected/c8d3300d-5acc-450e-8820-219933eeac53-kube-api-access-lrrcj\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.127365 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8d3300d-5acc-450e-8820-219933eeac53-log-httpd\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.127443 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-config-data\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.127603 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.127677 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.229129 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8d3300d-5acc-450e-8820-219933eeac53-log-httpd\") pod \"ceilometer-0\" (UID: 
\"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.229201 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-config-data\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.229263 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.229297 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.229320 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.229346 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8d3300d-5acc-450e-8820-219933eeac53-run-httpd\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.229421 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-scripts\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.229455 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrrcj\" (UniqueName: \"kubernetes.io/projected/c8d3300d-5acc-450e-8820-219933eeac53-kube-api-access-lrrcj\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.232874 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8d3300d-5acc-450e-8820-219933eeac53-run-httpd\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.233301 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8d3300d-5acc-450e-8820-219933eeac53-log-httpd\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.234383 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.235614 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: 
I1008 06:54:44.236924 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.237263 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-scripts\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.237569 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-config-data\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.265225 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrrcj\" (UniqueName: \"kubernetes.io/projected/c8d3300d-5acc-450e-8820-219933eeac53-kube-api-access-lrrcj\") pod \"ceilometer-0\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.330805 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.448735 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.539474 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-scripts\") pod \"a3c7170a-4267-4318-9772-c68efe1cb7e4\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.539628 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kx28\" (UniqueName: \"kubernetes.io/projected/a3c7170a-4267-4318-9772-c68efe1cb7e4-kube-api-access-6kx28\") pod \"a3c7170a-4267-4318-9772-c68efe1cb7e4\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.539663 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-config-data\") pod \"a3c7170a-4267-4318-9772-c68efe1cb7e4\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.539717 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-combined-ca-bundle\") pod \"a3c7170a-4267-4318-9772-c68efe1cb7e4\" (UID: \"a3c7170a-4267-4318-9772-c68efe1cb7e4\") " Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.544624 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-scripts" (OuterVolumeSpecName: "scripts") pod "a3c7170a-4267-4318-9772-c68efe1cb7e4" (UID: "a3c7170a-4267-4318-9772-c68efe1cb7e4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.545342 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c7170a-4267-4318-9772-c68efe1cb7e4-kube-api-access-6kx28" (OuterVolumeSpecName: "kube-api-access-6kx28") pod "a3c7170a-4267-4318-9772-c68efe1cb7e4" (UID: "a3c7170a-4267-4318-9772-c68efe1cb7e4"). InnerVolumeSpecName "kube-api-access-6kx28". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.568338 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-config-data" (OuterVolumeSpecName: "config-data") pod "a3c7170a-4267-4318-9772-c68efe1cb7e4" (UID: "a3c7170a-4267-4318-9772-c68efe1cb7e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.580370 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3c7170a-4267-4318-9772-c68efe1cb7e4" (UID: "a3c7170a-4267-4318-9772-c68efe1cb7e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.641529 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.641555 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kx28\" (UniqueName: \"kubernetes.io/projected/a3c7170a-4267-4318-9772-c68efe1cb7e4-kube-api-access-6kx28\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.641565 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.641577 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c7170a-4267-4318-9772-c68efe1cb7e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.816217 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:54:44 crc kubenswrapper[4958]: W1008 06:54:44.818855 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8d3300d_5acc_450e_8820_219933eeac53.slice/crio-5f5599584dc6386e3b09d96d641fec32d5b17f64ca818ec1c831978579f079c9 WatchSource:0}: Error finding container 5f5599584dc6386e3b09d96d641fec32d5b17f64ca818ec1c831978579f079c9: Status 404 returned error can't find the container with id 5f5599584dc6386e3b09d96d641fec32d5b17f64ca818ec1c831978579f079c9 Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.954085 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jbvrr" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.954117 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jbvrr" event={"ID":"a3c7170a-4267-4318-9772-c68efe1cb7e4","Type":"ContainerDied","Data":"2b5f16e77717fcb15c7f5e9cad2ed30aa79a2ee51f847b319a11c9c9b2211b5d"} Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.954170 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b5f16e77717fcb15c7f5e9cad2ed30aa79a2ee51f847b319a11c9c9b2211b5d" Oct 08 06:54:44 crc kubenswrapper[4958]: I1008 06:54:44.955869 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8d3300d-5acc-450e-8820-219933eeac53","Type":"ContainerStarted","Data":"5f5599584dc6386e3b09d96d641fec32d5b17f64ca818ec1c831978579f079c9"} Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.074822 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 06:54:45 crc kubenswrapper[4958]: E1008 06:54:45.075642 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c7170a-4267-4318-9772-c68efe1cb7e4" containerName="nova-cell0-conductor-db-sync" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.075673 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c7170a-4267-4318-9772-c68efe1cb7e4" containerName="nova-cell0-conductor-db-sync" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.075930 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c7170a-4267-4318-9772-c68efe1cb7e4" containerName="nova-cell0-conductor-db-sync" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.076730 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.079251 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.079467 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7m9r9" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.086695 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.149266 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f\") " pod="openstack/nova-cell0-conductor-0" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.149342 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpgzl\" (UniqueName: \"kubernetes.io/projected/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-kube-api-access-wpgzl\") pod \"nova-cell0-conductor-0\" (UID: \"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f\") " pod="openstack/nova-cell0-conductor-0" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.149453 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f\") " pod="openstack/nova-cell0-conductor-0" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.251007 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpgzl\" (UniqueName: 
\"kubernetes.io/projected/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-kube-api-access-wpgzl\") pod \"nova-cell0-conductor-0\" (UID: \"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f\") " pod="openstack/nova-cell0-conductor-0" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.251189 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f\") " pod="openstack/nova-cell0-conductor-0" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.251226 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f\") " pod="openstack/nova-cell0-conductor-0" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.255163 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f\") " pod="openstack/nova-cell0-conductor-0" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.255342 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f\") " pod="openstack/nova-cell0-conductor-0" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.286892 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpgzl\" (UniqueName: \"kubernetes.io/projected/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-kube-api-access-wpgzl\") pod \"nova-cell0-conductor-0\" (UID: 
\"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f\") " pod="openstack/nova-cell0-conductor-0" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.427221 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.591178 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cc94ec2-44b7-461b-bcea-691ea7b00ee5" path="/var/lib/kubelet/pods/5cc94ec2-44b7-461b-bcea-691ea7b00ee5/volumes" Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.706522 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 06:54:45 crc kubenswrapper[4958]: W1008 06:54:45.717008 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9b1a97b_b0eb_4283_bcf6_7ec3f45b6c5f.slice/crio-82a5d1d59b692c43b87203d0cc9e50be05a7fc2c2d4585fbf3eada0e481252c7 WatchSource:0}: Error finding container 82a5d1d59b692c43b87203d0cc9e50be05a7fc2c2d4585fbf3eada0e481252c7: Status 404 returned error can't find the container with id 82a5d1d59b692c43b87203d0cc9e50be05a7fc2c2d4585fbf3eada0e481252c7 Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.969244 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f","Type":"ContainerStarted","Data":"bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0"} Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.969574 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f","Type":"ContainerStarted","Data":"82a5d1d59b692c43b87203d0cc9e50be05a7fc2c2d4585fbf3eada0e481252c7"} Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.969777 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" 
Oct 08 06:54:45 crc kubenswrapper[4958]: I1008 06:54:45.971570 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8d3300d-5acc-450e-8820-219933eeac53","Type":"ContainerStarted","Data":"8da3c05f804e73d89736d57dce94c00953045d8d97c67b3648b3714dada8f2d9"} Oct 08 06:54:46 crc kubenswrapper[4958]: I1008 06:54:46.990656 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8d3300d-5acc-450e-8820-219933eeac53","Type":"ContainerStarted","Data":"f2b5c325174ca55a506e0030b61af4ddc1024765cfa6d2acbeb2242623a1ed64"} Oct 08 06:54:47 crc kubenswrapper[4958]: I1008 06:54:47.639661 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.639638992 podStartE2EDuration="2.639638992s" podCreationTimestamp="2025-10-08 06:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:54:46.000711354 +0000 UTC m=+1229.130404015" watchObservedRunningTime="2025-10-08 06:54:47.639638992 +0000 UTC m=+1230.769331603" Oct 08 06:54:49 crc kubenswrapper[4958]: I1008 06:54:49.030442 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8d3300d-5acc-450e-8820-219933eeac53","Type":"ContainerStarted","Data":"d4bb3873a7309661f2e990983b00bab9efc693a2acac87ec5fd0ffda963953d2"} Oct 08 06:54:50 crc kubenswrapper[4958]: I1008 06:54:50.048429 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8d3300d-5acc-450e-8820-219933eeac53","Type":"ContainerStarted","Data":"0e19d06b3e926c69b54e43c11dcfc7de9d7a5f8624c873e00af289abda321d5e"} Oct 08 06:54:50 crc kubenswrapper[4958]: I1008 06:54:50.049446 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 06:54:50 crc kubenswrapper[4958]: I1008 06:54:50.100291 4958 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.629993169 podStartE2EDuration="7.100264076s" podCreationTimestamp="2025-10-08 06:54:43 +0000 UTC" firstStartedPulling="2025-10-08 06:54:44.821364054 +0000 UTC m=+1227.951056645" lastFinishedPulling="2025-10-08 06:54:49.291634931 +0000 UTC m=+1232.421327552" observedRunningTime="2025-10-08 06:54:50.082217 +0000 UTC m=+1233.211909691" watchObservedRunningTime="2025-10-08 06:54:50.100264076 +0000 UTC m=+1233.229956717" Oct 08 06:54:50 crc kubenswrapper[4958]: I1008 06:54:50.273292 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 08 06:54:50 crc kubenswrapper[4958]: I1008 06:54:50.476559 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 08 06:54:50 crc kubenswrapper[4958]: I1008 06:54:50.960103 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-k6wwq"] Oct 08 06:54:50 crc kubenswrapper[4958]: I1008 06:54:50.961394 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 06:54:50 crc kubenswrapper[4958]: I1008 06:54:50.964657 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 08 06:54:50 crc kubenswrapper[4958]: I1008 06:54:50.964983 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 08 06:54:50 crc kubenswrapper[4958]: I1008 06:54:50.981219 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-k6wwq"] Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.065019 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-config-data\") pod \"nova-cell0-cell-mapping-k6wwq\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.065145 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-scripts\") pod \"nova-cell0-cell-mapping-k6wwq\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.065197 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qplxz\" (UniqueName: \"kubernetes.io/projected/bc06996f-e37d-472a-8912-683dbc0049a5-kube-api-access-qplxz\") pod \"nova-cell0-cell-mapping-k6wwq\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.065243 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k6wwq\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.121495 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.122573 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.129415 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.139168 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.175934 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/002517c1-bdf3-4f0d-b525-735b3fccd163-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"002517c1-bdf3-4f0d-b525-735b3fccd163\") " pod="openstack/nova-scheduler-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.175989 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/002517c1-bdf3-4f0d-b525-735b3fccd163-config-data\") pod \"nova-scheduler-0\" (UID: \"002517c1-bdf3-4f0d-b525-735b3fccd163\") " pod="openstack/nova-scheduler-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.176071 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-config-data\") pod \"nova-cell0-cell-mapping-k6wwq\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " 
pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.176227 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lmlc\" (UniqueName: \"kubernetes.io/projected/002517c1-bdf3-4f0d-b525-735b3fccd163-kube-api-access-6lmlc\") pod \"nova-scheduler-0\" (UID: \"002517c1-bdf3-4f0d-b525-735b3fccd163\") " pod="openstack/nova-scheduler-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.176289 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-scripts\") pod \"nova-cell0-cell-mapping-k6wwq\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.176385 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qplxz\" (UniqueName: \"kubernetes.io/projected/bc06996f-e37d-472a-8912-683dbc0049a5-kube-api-access-qplxz\") pod \"nova-cell0-cell-mapping-k6wwq\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.176532 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k6wwq\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.188153 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-config-data\") pod \"nova-cell0-cell-mapping-k6wwq\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 
06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.194179 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k6wwq\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.194575 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-scripts\") pod \"nova-cell0-cell-mapping-k6wwq\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.219628 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qplxz\" (UniqueName: \"kubernetes.io/projected/bc06996f-e37d-472a-8912-683dbc0049a5-kube-api-access-qplxz\") pod \"nova-cell0-cell-mapping-k6wwq\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.252372 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.253988 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.284701 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.285056 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.292348 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/002517c1-bdf3-4f0d-b525-735b3fccd163-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"002517c1-bdf3-4f0d-b525-735b3fccd163\") " pod="openstack/nova-scheduler-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.292398 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/002517c1-bdf3-4f0d-b525-735b3fccd163-config-data\") pod \"nova-scheduler-0\" (UID: \"002517c1-bdf3-4f0d-b525-735b3fccd163\") " pod="openstack/nova-scheduler-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.292549 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lmlc\" (UniqueName: \"kubernetes.io/projected/002517c1-bdf3-4f0d-b525-735b3fccd163-kube-api-access-6lmlc\") pod \"nova-scheduler-0\" (UID: \"002517c1-bdf3-4f0d-b525-735b3fccd163\") " pod="openstack/nova-scheduler-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.304449 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/002517c1-bdf3-4f0d-b525-735b3fccd163-config-data\") pod \"nova-scheduler-0\" (UID: \"002517c1-bdf3-4f0d-b525-735b3fccd163\") " pod="openstack/nova-scheduler-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.323406 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.334310 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lmlc\" (UniqueName: \"kubernetes.io/projected/002517c1-bdf3-4f0d-b525-735b3fccd163-kube-api-access-6lmlc\") pod \"nova-scheduler-0\" (UID: \"002517c1-bdf3-4f0d-b525-735b3fccd163\") " pod="openstack/nova-scheduler-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.346149 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/002517c1-bdf3-4f0d-b525-735b3fccd163-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"002517c1-bdf3-4f0d-b525-735b3fccd163\") " pod="openstack/nova-scheduler-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.346148 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.347696 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.350364 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.395318 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.396306 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0597eccc-ad38-4d42-900c-4bf0c61984c6-config-data\") pod \"nova-api-0\" (UID: \"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " pod="openstack/nova-api-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.396351 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-logs\") pod \"nova-metadata-0\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " pod="openstack/nova-metadata-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.396376 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0597eccc-ad38-4d42-900c-4bf0c61984c6-logs\") pod \"nova-api-0\" (UID: \"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " pod="openstack/nova-api-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.396408 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0597eccc-ad38-4d42-900c-4bf0c61984c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " pod="openstack/nova-api-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.396446 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vb4wx\" (UniqueName: \"kubernetes.io/projected/0597eccc-ad38-4d42-900c-4bf0c61984c6-kube-api-access-vb4wx\") pod \"nova-api-0\" (UID: \"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " pod="openstack/nova-api-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.396468 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p5hw\" (UniqueName: \"kubernetes.io/projected/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-kube-api-access-7p5hw\") pod \"nova-metadata-0\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " pod="openstack/nova-metadata-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.396498 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " pod="openstack/nova-metadata-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.396519 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-config-data\") pod \"nova-metadata-0\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " pod="openstack/nova-metadata-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.412293 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-bbptg"] Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.415621 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.461075 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.491220 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-bbptg"] Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.497862 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-dns-swift-storage-0\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.497925 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-config\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.497979 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0597eccc-ad38-4d42-900c-4bf0c61984c6-config-data\") pod \"nova-api-0\" (UID: \"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " pod="openstack/nova-api-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.498004 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-logs\") pod \"nova-metadata-0\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " pod="openstack/nova-metadata-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.498026 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0597eccc-ad38-4d42-900c-4bf0c61984c6-logs\") pod \"nova-api-0\" (UID: 
\"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " pod="openstack/nova-api-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.498054 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0597eccc-ad38-4d42-900c-4bf0c61984c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " pod="openstack/nova-api-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.498085 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb4wx\" (UniqueName: \"kubernetes.io/projected/0597eccc-ad38-4d42-900c-4bf0c61984c6-kube-api-access-vb4wx\") pod \"nova-api-0\" (UID: \"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " pod="openstack/nova-api-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.498105 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p5hw\" (UniqueName: \"kubernetes.io/projected/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-kube-api-access-7p5hw\") pod \"nova-metadata-0\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " pod="openstack/nova-metadata-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.498136 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " pod="openstack/nova-metadata-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.498154 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-config-data\") pod \"nova-metadata-0\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " pod="openstack/nova-metadata-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.498183 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-dns-svc\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.498208 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2dp5\" (UniqueName: \"kubernetes.io/projected/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-kube-api-access-w2dp5\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.498231 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.498250 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.502674 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0597eccc-ad38-4d42-900c-4bf0c61984c6-logs\") pod \"nova-api-0\" (UID: \"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " pod="openstack/nova-api-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.503066 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-logs\") pod \"nova-metadata-0\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " pod="openstack/nova-metadata-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.509785 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0597eccc-ad38-4d42-900c-4bf0c61984c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " pod="openstack/nova-api-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.514546 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0597eccc-ad38-4d42-900c-4bf0c61984c6-config-data\") pod \"nova-api-0\" (UID: \"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " pod="openstack/nova-api-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.514626 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " pod="openstack/nova-metadata-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.525009 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-config-data\") pod \"nova-metadata-0\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " pod="openstack/nova-metadata-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.528706 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p5hw\" (UniqueName: \"kubernetes.io/projected/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-kube-api-access-7p5hw\") pod \"nova-metadata-0\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " pod="openstack/nova-metadata-0" Oct 08 06:54:51 crc 
kubenswrapper[4958]: I1008 06:54:51.555020 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb4wx\" (UniqueName: \"kubernetes.io/projected/0597eccc-ad38-4d42-900c-4bf0c61984c6-kube-api-access-vb4wx\") pod \"nova-api-0\" (UID: \"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " pod="openstack/nova-api-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.564215 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.566082 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.581787 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.600021 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d377988-7ac8-4f84-adbf-28eab7e9128e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1d377988-7ac8-4f84-adbf-28eab7e9128e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.600074 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d377988-7ac8-4f84-adbf-28eab7e9128e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1d377988-7ac8-4f84-adbf-28eab7e9128e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.600112 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-dns-svc\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " 
pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.600139 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2dp5\" (UniqueName: \"kubernetes.io/projected/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-kube-api-access-w2dp5\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.600157 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.600177 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.600192 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsrvj\" (UniqueName: \"kubernetes.io/projected/1d377988-7ac8-4f84-adbf-28eab7e9128e-kube-api-access-hsrvj\") pod \"nova-cell1-novncproxy-0\" (UID: \"1d377988-7ac8-4f84-adbf-28eab7e9128e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.600214 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-dns-swift-storage-0\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " 
pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.600251 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-config\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.601475 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-dns-svc\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.602018 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-dns-swift-storage-0\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.602579 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.604253 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 
06:54:51.608549 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-config\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.624586 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.625617 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2dp5\" (UniqueName: \"kubernetes.io/projected/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-kube-api-access-w2dp5\") pod \"dnsmasq-dns-6ffc974fdf-bbptg\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") " pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.701486 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d377988-7ac8-4f84-adbf-28eab7e9128e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1d377988-7ac8-4f84-adbf-28eab7e9128e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.701715 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d377988-7ac8-4f84-adbf-28eab7e9128e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1d377988-7ac8-4f84-adbf-28eab7e9128e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.701765 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsrvj\" (UniqueName: \"kubernetes.io/projected/1d377988-7ac8-4f84-adbf-28eab7e9128e-kube-api-access-hsrvj\") pod \"nova-cell1-novncproxy-0\" (UID: \"1d377988-7ac8-4f84-adbf-28eab7e9128e\") " 
pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.706681 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d377988-7ac8-4f84-adbf-28eab7e9128e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1d377988-7ac8-4f84-adbf-28eab7e9128e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.715018 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d377988-7ac8-4f84-adbf-28eab7e9128e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1d377988-7ac8-4f84-adbf-28eab7e9128e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.718888 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsrvj\" (UniqueName: \"kubernetes.io/projected/1d377988-7ac8-4f84-adbf-28eab7e9128e-kube-api-access-hsrvj\") pod \"nova-cell1-novncproxy-0\" (UID: \"1d377988-7ac8-4f84-adbf-28eab7e9128e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.750704 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.801846 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.838404 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:51 crc kubenswrapper[4958]: I1008 06:54:51.942366 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.045822 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-k6wwq"] Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.117368 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k6wwq" event={"ID":"bc06996f-e37d-472a-8912-683dbc0049a5","Type":"ContainerStarted","Data":"461d2ad4054c4aef881fc3c85e447cd1efee340fb65e399369d7b583589a79b9"} Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.158611 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.370387 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.438160 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qs945"] Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.439429 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.444081 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.444465 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.457450 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qs945"] Oct 08 06:54:52 crc kubenswrapper[4958]: W1008 06:54:52.465177 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafd63ed7_27dd_4fea_aa37_c2d70cc4849e.slice/crio-226402ea8cd755790f6bb7cd7e8ea5d2f61a3f930f91b1ec39b8810b4d7cfc62 WatchSource:0}: Error finding container 226402ea8cd755790f6bb7cd7e8ea5d2f61a3f930f91b1ec39b8810b4d7cfc62: Status 404 returned error can't find the container with id 226402ea8cd755790f6bb7cd7e8ea5d2f61a3f930f91b1ec39b8810b4d7cfc62 Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.473078 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-bbptg"] Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.521003 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-config-data\") pod \"nova-cell1-conductor-db-sync-qs945\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.521043 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-scripts\") pod 
\"nova-cell1-conductor-db-sync-qs945\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.521081 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbrr7\" (UniqueName: \"kubernetes.io/projected/25549551-2ec2-4cf2-800a-b3da40ce78f0-kube-api-access-gbrr7\") pod \"nova-cell1-conductor-db-sync-qs945\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.521104 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qs945\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.551095 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.570821 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 06:54:52 crc kubenswrapper[4958]: W1008 06:54:52.586807 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d377988_7ac8_4f84_adbf_28eab7e9128e.slice/crio-3338abfa60ab474b4e945af8c07b688859483db61fec23f5dd57308d23a581b1 WatchSource:0}: Error finding container 3338abfa60ab474b4e945af8c07b688859483db61fec23f5dd57308d23a581b1: Status 404 returned error can't find the container with id 3338abfa60ab474b4e945af8c07b688859483db61fec23f5dd57308d23a581b1 Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.622576 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-config-data\") pod \"nova-cell1-conductor-db-sync-qs945\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.622621 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-scripts\") pod \"nova-cell1-conductor-db-sync-qs945\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.622674 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbrr7\" (UniqueName: \"kubernetes.io/projected/25549551-2ec2-4cf2-800a-b3da40ce78f0-kube-api-access-gbrr7\") pod \"nova-cell1-conductor-db-sync-qs945\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.622704 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qs945\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.627065 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-scripts\") pod \"nova-cell1-conductor-db-sync-qs945\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.627184 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-config-data\") pod \"nova-cell1-conductor-db-sync-qs945\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.627377 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qs945\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.641502 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbrr7\" (UniqueName: \"kubernetes.io/projected/25549551-2ec2-4cf2-800a-b3da40ce78f0-kube-api-access-gbrr7\") pod \"nova-cell1-conductor-db-sync-qs945\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:54:52 crc kubenswrapper[4958]: I1008 06:54:52.874439 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:54:53 crc kubenswrapper[4958]: I1008 06:54:53.132507 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0597eccc-ad38-4d42-900c-4bf0c61984c6","Type":"ContainerStarted","Data":"23c5f5ac06891185f223e9aae9516210a554c504aeafb932c003336ea3d8c662"} Oct 08 06:54:53 crc kubenswrapper[4958]: I1008 06:54:53.150181 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7431c5d0-34c1-45ee-91c1-ae8f45a37fde","Type":"ContainerStarted","Data":"55dd44705338ef97f84f7948c1d5a2b4506ac899a14ff82e0a4786fdf9973b16"} Oct 08 06:54:53 crc kubenswrapper[4958]: I1008 06:54:53.155045 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"002517c1-bdf3-4f0d-b525-735b3fccd163","Type":"ContainerStarted","Data":"69fac8fd878c6cd76e75b692c60f654a8337d19ec09ebafa99af205f67e8947e"} Oct 08 06:54:53 crc kubenswrapper[4958]: I1008 06:54:53.157364 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k6wwq" event={"ID":"bc06996f-e37d-472a-8912-683dbc0049a5","Type":"ContainerStarted","Data":"49b47029744e00f851d977897200407e71e2e9e3b92956a658ccc8cd97f69ac5"} Oct 08 06:54:53 crc kubenswrapper[4958]: I1008 06:54:53.168671 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1d377988-7ac8-4f84-adbf-28eab7e9128e","Type":"ContainerStarted","Data":"3338abfa60ab474b4e945af8c07b688859483db61fec23f5dd57308d23a581b1"} Oct 08 06:54:53 crc kubenswrapper[4958]: I1008 06:54:53.175226 4958 generic.go:334] "Generic (PLEG): container finished" podID="afd63ed7-27dd-4fea-aa37-c2d70cc4849e" containerID="7bb300cc19e73864a49d420266af3cb7e773b94f300afa0e3dfd9d679c6d7595" exitCode=0 Oct 08 06:54:53 crc kubenswrapper[4958]: I1008 06:54:53.175266 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" event={"ID":"afd63ed7-27dd-4fea-aa37-c2d70cc4849e","Type":"ContainerDied","Data":"7bb300cc19e73864a49d420266af3cb7e773b94f300afa0e3dfd9d679c6d7595"} Oct 08 06:54:53 crc kubenswrapper[4958]: I1008 06:54:53.175288 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" event={"ID":"afd63ed7-27dd-4fea-aa37-c2d70cc4849e","Type":"ContainerStarted","Data":"226402ea8cd755790f6bb7cd7e8ea5d2f61a3f930f91b1ec39b8810b4d7cfc62"} Oct 08 06:54:53 crc kubenswrapper[4958]: I1008 06:54:53.187355 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-k6wwq" podStartSLOduration=3.187333711 podStartE2EDuration="3.187333711s" podCreationTimestamp="2025-10-08 06:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:54:53.173499008 +0000 UTC m=+1236.303191609" watchObservedRunningTime="2025-10-08 06:54:53.187333711 +0000 UTC m=+1236.317026312" Oct 08 06:54:53 crc kubenswrapper[4958]: I1008 06:54:53.364968 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qs945"] Oct 08 06:54:54 crc kubenswrapper[4958]: I1008 06:54:54.191626 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qs945" event={"ID":"25549551-2ec2-4cf2-800a-b3da40ce78f0","Type":"ContainerStarted","Data":"6666c45dd7dc4715d5ebc25e32f2ae319ed58010b5920cf78661acf46c71541b"} Oct 08 06:54:54 crc kubenswrapper[4958]: I1008 06:54:54.192023 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qs945" event={"ID":"25549551-2ec2-4cf2-800a-b3da40ce78f0","Type":"ContainerStarted","Data":"e462cc5692ecf0c4b3be54d52145af91eb1b28374af35cf70198756d9f4b3bbd"} Oct 08 06:54:54 crc kubenswrapper[4958]: I1008 06:54:54.205558 4958 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" event={"ID":"afd63ed7-27dd-4fea-aa37-c2d70cc4849e","Type":"ContainerStarted","Data":"7019069f54baad6d2bc4f6cf124813da1ad18985acbba8b30d6b4f4e7d3dfe6f"} Oct 08 06:54:54 crc kubenswrapper[4958]: I1008 06:54:54.205665 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:54:54 crc kubenswrapper[4958]: I1008 06:54:54.208779 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-qs945" podStartSLOduration=2.208768778 podStartE2EDuration="2.208768778s" podCreationTimestamp="2025-10-08 06:54:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:54:54.207964256 +0000 UTC m=+1237.337656857" watchObservedRunningTime="2025-10-08 06:54:54.208768778 +0000 UTC m=+1237.338461379" Oct 08 06:54:54 crc kubenswrapper[4958]: I1008 06:54:54.227204 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" podStartSLOduration=3.227186225 podStartE2EDuration="3.227186225s" podCreationTimestamp="2025-10-08 06:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:54:54.225324535 +0000 UTC m=+1237.355017136" watchObservedRunningTime="2025-10-08 06:54:54.227186225 +0000 UTC m=+1237.356878816" Oct 08 06:54:55 crc kubenswrapper[4958]: I1008 06:54:55.171878 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 06:54:55 crc kubenswrapper[4958]: I1008 06:54:55.182408 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.228053 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"0597eccc-ad38-4d42-900c-4bf0c61984c6","Type":"ContainerStarted","Data":"390383b766abc0a2893f97fa7476629440dd291d0761cff448150c4e59bfc1d8"} Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.228400 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0597eccc-ad38-4d42-900c-4bf0c61984c6","Type":"ContainerStarted","Data":"9fcff044fe7a008c3af58608fb8fddf740b135034be10bd62cf252f4b42c651a"} Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.235707 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7431c5d0-34c1-45ee-91c1-ae8f45a37fde","Type":"ContainerStarted","Data":"132b0abab668b947f400c332cb052c545883a9e927247a27f852eb2e60d4b740"} Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.235736 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7431c5d0-34c1-45ee-91c1-ae8f45a37fde","Type":"ContainerStarted","Data":"191608b487a17c960e5a5b5b07b05f5f89221db4f9e4d6a623962020f3ac23a1"} Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.235891 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7431c5d0-34c1-45ee-91c1-ae8f45a37fde" containerName="nova-metadata-log" containerID="cri-o://191608b487a17c960e5a5b5b07b05f5f89221db4f9e4d6a623962020f3ac23a1" gracePeriod=30 Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.236166 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7431c5d0-34c1-45ee-91c1-ae8f45a37fde" containerName="nova-metadata-metadata" containerID="cri-o://132b0abab668b947f400c332cb052c545883a9e927247a27f852eb2e60d4b740" gracePeriod=30 Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.239514 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"002517c1-bdf3-4f0d-b525-735b3fccd163","Type":"ContainerStarted","Data":"d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b"} Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.242292 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1d377988-7ac8-4f84-adbf-28eab7e9128e","Type":"ContainerStarted","Data":"f90e9a7fc7a74d3104410e62dd42776120239fbd58e43d11e67fb278dc36af7a"} Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.242766 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1d377988-7ac8-4f84-adbf-28eab7e9128e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f90e9a7fc7a74d3104410e62dd42776120239fbd58e43d11e67fb278dc36af7a" gracePeriod=30 Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.261771 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.477570502 podStartE2EDuration="5.261752084s" podCreationTimestamp="2025-10-08 06:54:51 +0000 UTC" firstStartedPulling="2025-10-08 06:54:52.577135599 +0000 UTC m=+1235.706828200" lastFinishedPulling="2025-10-08 06:54:55.361317141 +0000 UTC m=+1238.491009782" observedRunningTime="2025-10-08 06:54:56.254612261 +0000 UTC m=+1239.384304872" watchObservedRunningTime="2025-10-08 06:54:56.261752084 +0000 UTC m=+1239.391444685" Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.274562 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.502511114 podStartE2EDuration="5.274543519s" podCreationTimestamp="2025-10-08 06:54:51 +0000 UTC" firstStartedPulling="2025-10-08 06:54:52.588413673 +0000 UTC m=+1235.718106274" lastFinishedPulling="2025-10-08 06:54:55.360446068 +0000 UTC m=+1238.490138679" observedRunningTime="2025-10-08 06:54:56.271785265 +0000 UTC m=+1239.401477856" 
watchObservedRunningTime="2025-10-08 06:54:56.274543519 +0000 UTC m=+1239.404236110" Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.289830 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.115749609 podStartE2EDuration="5.289814011s" podCreationTimestamp="2025-10-08 06:54:51 +0000 UTC" firstStartedPulling="2025-10-08 06:54:52.179404468 +0000 UTC m=+1235.309097069" lastFinishedPulling="2025-10-08 06:54:55.35346883 +0000 UTC m=+1238.483161471" observedRunningTime="2025-10-08 06:54:56.282394991 +0000 UTC m=+1239.412087592" watchObservedRunningTime="2025-10-08 06:54:56.289814011 +0000 UTC m=+1239.419506612" Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.302071 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.324938413 podStartE2EDuration="5.302052451s" podCreationTimestamp="2025-10-08 06:54:51 +0000 UTC" firstStartedPulling="2025-10-08 06:54:52.378325295 +0000 UTC m=+1235.508017906" lastFinishedPulling="2025-10-08 06:54:55.355439343 +0000 UTC m=+1238.485131944" observedRunningTime="2025-10-08 06:54:56.298013522 +0000 UTC m=+1239.427706153" watchObservedRunningTime="2025-10-08 06:54:56.302052451 +0000 UTC m=+1239.431745052" Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.461964 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.751156 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.751508 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 06:54:56 crc kubenswrapper[4958]: I1008 06:54:56.943745 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:54:57 crc 
kubenswrapper[4958]: I1008 06:54:57.256106 4958 generic.go:334] "Generic (PLEG): container finished" podID="7431c5d0-34c1-45ee-91c1-ae8f45a37fde" containerID="132b0abab668b947f400c332cb052c545883a9e927247a27f852eb2e60d4b740" exitCode=0 Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.256142 4958 generic.go:334] "Generic (PLEG): container finished" podID="7431c5d0-34c1-45ee-91c1-ae8f45a37fde" containerID="191608b487a17c960e5a5b5b07b05f5f89221db4f9e4d6a623962020f3ac23a1" exitCode=143 Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.256195 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7431c5d0-34c1-45ee-91c1-ae8f45a37fde","Type":"ContainerDied","Data":"132b0abab668b947f400c332cb052c545883a9e927247a27f852eb2e60d4b740"} Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.256241 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7431c5d0-34c1-45ee-91c1-ae8f45a37fde","Type":"ContainerDied","Data":"191608b487a17c960e5a5b5b07b05f5f89221db4f9e4d6a623962020f3ac23a1"} Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.392909 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.417265 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p5hw\" (UniqueName: \"kubernetes.io/projected/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-kube-api-access-7p5hw\") pod \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.417335 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-config-data\") pod \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.417378 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-logs\") pod \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.417505 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-combined-ca-bundle\") pod \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\" (UID: \"7431c5d0-34c1-45ee-91c1-ae8f45a37fde\") " Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.418269 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-logs" (OuterVolumeSpecName: "logs") pod "7431c5d0-34c1-45ee-91c1-ae8f45a37fde" (UID: "7431c5d0-34c1-45ee-91c1-ae8f45a37fde"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.438105 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-kube-api-access-7p5hw" (OuterVolumeSpecName: "kube-api-access-7p5hw") pod "7431c5d0-34c1-45ee-91c1-ae8f45a37fde" (UID: "7431c5d0-34c1-45ee-91c1-ae8f45a37fde"). InnerVolumeSpecName "kube-api-access-7p5hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.456981 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-config-data" (OuterVolumeSpecName: "config-data") pod "7431c5d0-34c1-45ee-91c1-ae8f45a37fde" (UID: "7431c5d0-34c1-45ee-91c1-ae8f45a37fde"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.482639 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7431c5d0-34c1-45ee-91c1-ae8f45a37fde" (UID: "7431c5d0-34c1-45ee-91c1-ae8f45a37fde"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.519206 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.519237 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p5hw\" (UniqueName: \"kubernetes.io/projected/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-kube-api-access-7p5hw\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.519248 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:57 crc kubenswrapper[4958]: I1008 06:54:57.519258 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7431c5d0-34c1-45ee-91c1-ae8f45a37fde-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:54:57 crc kubenswrapper[4958]: E1008 06:54:57.642189 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7431c5d0_34c1_45ee_91c1_ae8f45a37fde.slice/crio-55dd44705338ef97f84f7948c1d5a2b4506ac899a14ff82e0a4786fdf9973b16\": RecentStats: unable to find data in memory cache]" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.267593 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7431c5d0-34c1-45ee-91c1-ae8f45a37fde","Type":"ContainerDied","Data":"55dd44705338ef97f84f7948c1d5a2b4506ac899a14ff82e0a4786fdf9973b16"} Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.267987 4958 scope.go:117] "RemoveContainer" 
containerID="132b0abab668b947f400c332cb052c545883a9e927247a27f852eb2e60d4b740" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.267655 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.298532 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.306405 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.306896 4958 scope.go:117] "RemoveContainer" containerID="191608b487a17c960e5a5b5b07b05f5f89221db4f9e4d6a623962020f3ac23a1" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.319037 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:54:58 crc kubenswrapper[4958]: E1008 06:54:58.319976 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7431c5d0-34c1-45ee-91c1-ae8f45a37fde" containerName="nova-metadata-log" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.319994 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7431c5d0-34c1-45ee-91c1-ae8f45a37fde" containerName="nova-metadata-log" Oct 08 06:54:58 crc kubenswrapper[4958]: E1008 06:54:58.320014 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7431c5d0-34c1-45ee-91c1-ae8f45a37fde" containerName="nova-metadata-metadata" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.320021 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7431c5d0-34c1-45ee-91c1-ae8f45a37fde" containerName="nova-metadata-metadata" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.320196 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7431c5d0-34c1-45ee-91c1-ae8f45a37fde" containerName="nova-metadata-metadata" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.320349 4958 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7431c5d0-34c1-45ee-91c1-ae8f45a37fde" containerName="nova-metadata-log" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.321371 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.329252 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.329677 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.335569 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbm9d\" (UniqueName: \"kubernetes.io/projected/3d946542-3346-4185-ba40-2ff620536811-kube-api-access-fbm9d\") pod \"nova-metadata-0\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.335607 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.335685 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.335736 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-config-data\") pod \"nova-metadata-0\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.335757 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d946542-3346-4185-ba40-2ff620536811-logs\") pod \"nova-metadata-0\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.380004 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.437599 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.437663 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbm9d\" (UniqueName: \"kubernetes.io/projected/3d946542-3346-4185-ba40-2ff620536811-kube-api-access-fbm9d\") pod \"nova-metadata-0\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.437790 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.437869 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-config-data\") pod \"nova-metadata-0\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.437922 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d946542-3346-4185-ba40-2ff620536811-logs\") pod \"nova-metadata-0\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.438571 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d946542-3346-4185-ba40-2ff620536811-logs\") pod \"nova-metadata-0\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.444483 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.450247 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-config-data\") pod \"nova-metadata-0\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.454181 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 
06:54:58.455102 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbm9d\" (UniqueName: \"kubernetes.io/projected/3d946542-3346-4185-ba40-2ff620536811-kube-api-access-fbm9d\") pod \"nova-metadata-0\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " pod="openstack/nova-metadata-0" Oct 08 06:54:58 crc kubenswrapper[4958]: I1008 06:54:58.662478 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:54:59 crc kubenswrapper[4958]: I1008 06:54:59.221051 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:54:59 crc kubenswrapper[4958]: W1008 06:54:59.231929 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d946542_3346_4185_ba40_2ff620536811.slice/crio-bdfe1027107355b9a62502d72bb44f706ee3687be48b02927048e273b4012839 WatchSource:0}: Error finding container bdfe1027107355b9a62502d72bb44f706ee3687be48b02927048e273b4012839: Status 404 returned error can't find the container with id bdfe1027107355b9a62502d72bb44f706ee3687be48b02927048e273b4012839 Oct 08 06:54:59 crc kubenswrapper[4958]: I1008 06:54:59.289678 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d946542-3346-4185-ba40-2ff620536811","Type":"ContainerStarted","Data":"bdfe1027107355b9a62502d72bb44f706ee3687be48b02927048e273b4012839"} Oct 08 06:54:59 crc kubenswrapper[4958]: I1008 06:54:59.596442 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7431c5d0-34c1-45ee-91c1-ae8f45a37fde" path="/var/lib/kubelet/pods/7431c5d0-34c1-45ee-91c1-ae8f45a37fde/volumes" Oct 08 06:55:00 crc kubenswrapper[4958]: I1008 06:55:00.308203 4958 generic.go:334] "Generic (PLEG): container finished" podID="bc06996f-e37d-472a-8912-683dbc0049a5" containerID="49b47029744e00f851d977897200407e71e2e9e3b92956a658ccc8cd97f69ac5" exitCode=0 Oct 08 
06:55:00 crc kubenswrapper[4958]: I1008 06:55:00.308352 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k6wwq" event={"ID":"bc06996f-e37d-472a-8912-683dbc0049a5","Type":"ContainerDied","Data":"49b47029744e00f851d977897200407e71e2e9e3b92956a658ccc8cd97f69ac5"} Oct 08 06:55:00 crc kubenswrapper[4958]: I1008 06:55:00.310789 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d946542-3346-4185-ba40-2ff620536811","Type":"ContainerStarted","Data":"055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446"} Oct 08 06:55:00 crc kubenswrapper[4958]: I1008 06:55:00.310829 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d946542-3346-4185-ba40-2ff620536811","Type":"ContainerStarted","Data":"48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b"} Oct 08 06:55:00 crc kubenswrapper[4958]: I1008 06:55:00.375794 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.375761425 podStartE2EDuration="2.375761425s" podCreationTimestamp="2025-10-08 06:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:55:00.360491143 +0000 UTC m=+1243.490183774" watchObservedRunningTime="2025-10-08 06:55:00.375761425 +0000 UTC m=+1243.505454056" Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.325291 4958 generic.go:334] "Generic (PLEG): container finished" podID="25549551-2ec2-4cf2-800a-b3da40ce78f0" containerID="6666c45dd7dc4715d5ebc25e32f2ae319ed58010b5920cf78661acf46c71541b" exitCode=0 Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.325391 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qs945" 
event={"ID":"25549551-2ec2-4cf2-800a-b3da40ce78f0","Type":"ContainerDied","Data":"6666c45dd7dc4715d5ebc25e32f2ae319ed58010b5920cf78661acf46c71541b"} Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.462090 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.501618 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.744420 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.802893 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.802990 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.813178 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-scripts\") pod \"bc06996f-e37d-472a-8912-683dbc0049a5\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.813318 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-config-data\") pod \"bc06996f-e37d-472a-8912-683dbc0049a5\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.813414 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-combined-ca-bundle\") pod 
\"bc06996f-e37d-472a-8912-683dbc0049a5\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.813602 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qplxz\" (UniqueName: \"kubernetes.io/projected/bc06996f-e37d-472a-8912-683dbc0049a5-kube-api-access-qplxz\") pod \"bc06996f-e37d-472a-8912-683dbc0049a5\" (UID: \"bc06996f-e37d-472a-8912-683dbc0049a5\") " Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.823538 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-scripts" (OuterVolumeSpecName: "scripts") pod "bc06996f-e37d-472a-8912-683dbc0049a5" (UID: "bc06996f-e37d-472a-8912-683dbc0049a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.824049 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc06996f-e37d-472a-8912-683dbc0049a5-kube-api-access-qplxz" (OuterVolumeSpecName: "kube-api-access-qplxz") pod "bc06996f-e37d-472a-8912-683dbc0049a5" (UID: "bc06996f-e37d-472a-8912-683dbc0049a5"). InnerVolumeSpecName "kube-api-access-qplxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.840203 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.883071 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc06996f-e37d-472a-8912-683dbc0049a5" (UID: "bc06996f-e37d-472a-8912-683dbc0049a5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.911458 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-config-data" (OuterVolumeSpecName: "config-data") pod "bc06996f-e37d-472a-8912-683dbc0049a5" (UID: "bc06996f-e37d-472a-8912-683dbc0049a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.916229 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qplxz\" (UniqueName: \"kubernetes.io/projected/bc06996f-e37d-472a-8912-683dbc0049a5-kube-api-access-qplxz\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.916260 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.916269 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.916278 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc06996f-e37d-472a-8912-683dbc0049a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.960077 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-h5t57"] Oct 08 06:55:01 crc kubenswrapper[4958]: I1008 06:55:01.960335 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84bd785c49-h5t57" podUID="2d6dbb43-cc4d-47e5-97c6-76d1638d50e0" containerName="dnsmasq-dns" 
containerID="cri-o://029ca8abfaf31b17ac5aed49a82e63c7ea88669308d49b70385f7b6663bd95b1" gracePeriod=10 Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.348630 4958 generic.go:334] "Generic (PLEG): container finished" podID="2d6dbb43-cc4d-47e5-97c6-76d1638d50e0" containerID="029ca8abfaf31b17ac5aed49a82e63c7ea88669308d49b70385f7b6663bd95b1" exitCode=0 Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.348686 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd785c49-h5t57" event={"ID":"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0","Type":"ContainerDied","Data":"029ca8abfaf31b17ac5aed49a82e63c7ea88669308d49b70385f7b6663bd95b1"} Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.355421 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k6wwq" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.357888 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k6wwq" event={"ID":"bc06996f-e37d-472a-8912-683dbc0049a5","Type":"ContainerDied","Data":"461d2ad4054c4aef881fc3c85e447cd1efee340fb65e399369d7b583589a79b9"} Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.357919 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="461d2ad4054c4aef881fc3c85e447cd1efee340fb65e399369d7b583589a79b9" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.390181 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.405522 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.423688 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-dns-swift-storage-0\") pod \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.423753 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-ovsdbserver-nb\") pod \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.423853 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-ovsdbserver-sb\") pod \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.423935 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-config\") pod \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.424123 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjjld\" (UniqueName: \"kubernetes.io/projected/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-kube-api-access-hjjld\") pod \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\" (UID: 
\"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.424250 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-dns-svc\") pod \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\" (UID: \"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0\") " Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.452131 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-kube-api-access-hjjld" (OuterVolumeSpecName: "kube-api-access-hjjld") pod "2d6dbb43-cc4d-47e5-97c6-76d1638d50e0" (UID: "2d6dbb43-cc4d-47e5-97c6-76d1638d50e0"). InnerVolumeSpecName "kube-api-access-hjjld". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.485294 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2d6dbb43-cc4d-47e5-97c6-76d1638d50e0" (UID: "2d6dbb43-cc4d-47e5-97c6-76d1638d50e0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.508150 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-config" (OuterVolumeSpecName: "config") pod "2d6dbb43-cc4d-47e5-97c6-76d1638d50e0" (UID: "2d6dbb43-cc4d-47e5-97c6-76d1638d50e0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.512973 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d6dbb43-cc4d-47e5-97c6-76d1638d50e0" (UID: "2d6dbb43-cc4d-47e5-97c6-76d1638d50e0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.529467 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.529775 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.529792 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.529847 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjjld\" (UniqueName: \"kubernetes.io/projected/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-kube-api-access-hjjld\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.541319 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d6dbb43-cc4d-47e5-97c6-76d1638d50e0" (UID: "2d6dbb43-cc4d-47e5-97c6-76d1638d50e0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.545059 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.545391 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0597eccc-ad38-4d42-900c-4bf0c61984c6" containerName="nova-api-log" containerID="cri-o://9fcff044fe7a008c3af58608fb8fddf740b135034be10bd62cf252f4b42c651a" gracePeriod=30 Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.545453 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0597eccc-ad38-4d42-900c-4bf0c61984c6" containerName="nova-api-api" containerID="cri-o://390383b766abc0a2893f97fa7476629440dd291d0761cff448150c4e59bfc1d8" gracePeriod=30 Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.563379 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.563563 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3d946542-3346-4185-ba40-2ff620536811" containerName="nova-metadata-log" containerID="cri-o://48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b" gracePeriod=30 Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.564036 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3d946542-3346-4185-ba40-2ff620536811" containerName="nova-metadata-metadata" containerID="cri-o://055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446" gracePeriod=30 Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.568557 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0" (UID: "2d6dbb43-cc4d-47e5-97c6-76d1638d50e0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.577991 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0597eccc-ad38-4d42-900c-4bf0c61984c6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": EOF" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.577994 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0597eccc-ad38-4d42-900c-4bf0c61984c6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": EOF" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.632241 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.632264 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.854788 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.937897 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-scripts\") pod \"25549551-2ec2-4cf2-800a-b3da40ce78f0\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.937990 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-config-data\") pod \"25549551-2ec2-4cf2-800a-b3da40ce78f0\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.938070 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-combined-ca-bundle\") pod \"25549551-2ec2-4cf2-800a-b3da40ce78f0\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.938118 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbrr7\" (UniqueName: \"kubernetes.io/projected/25549551-2ec2-4cf2-800a-b3da40ce78f0-kube-api-access-gbrr7\") pod \"25549551-2ec2-4cf2-800a-b3da40ce78f0\" (UID: \"25549551-2ec2-4cf2-800a-b3da40ce78f0\") " Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.942923 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25549551-2ec2-4cf2-800a-b3da40ce78f0-kube-api-access-gbrr7" (OuterVolumeSpecName: "kube-api-access-gbrr7") pod "25549551-2ec2-4cf2-800a-b3da40ce78f0" (UID: "25549551-2ec2-4cf2-800a-b3da40ce78f0"). InnerVolumeSpecName "kube-api-access-gbrr7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.951602 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-scripts" (OuterVolumeSpecName: "scripts") pod "25549551-2ec2-4cf2-800a-b3da40ce78f0" (UID: "25549551-2ec2-4cf2-800a-b3da40ce78f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.972206 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.978937 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25549551-2ec2-4cf2-800a-b3da40ce78f0" (UID: "25549551-2ec2-4cf2-800a-b3da40ce78f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:02 crc kubenswrapper[4958]: I1008 06:55:02.989419 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-config-data" (OuterVolumeSpecName: "config-data") pod "25549551-2ec2-4cf2-800a-b3da40ce78f0" (UID: "25549551-2ec2-4cf2-800a-b3da40ce78f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.039566 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.039599 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbrr7\" (UniqueName: \"kubernetes.io/projected/25549551-2ec2-4cf2-800a-b3da40ce78f0-kube-api-access-gbrr7\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.039645 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.039659 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25549551-2ec2-4cf2-800a-b3da40ce78f0-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.118399 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.245552 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-nova-metadata-tls-certs\") pod \"3d946542-3346-4185-ba40-2ff620536811\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.245883 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-combined-ca-bundle\") pod \"3d946542-3346-4185-ba40-2ff620536811\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.245980 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d946542-3346-4185-ba40-2ff620536811-logs\") pod \"3d946542-3346-4185-ba40-2ff620536811\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.246025 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-config-data\") pod \"3d946542-3346-4185-ba40-2ff620536811\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.246152 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbm9d\" (UniqueName: \"kubernetes.io/projected/3d946542-3346-4185-ba40-2ff620536811-kube-api-access-fbm9d\") pod \"3d946542-3346-4185-ba40-2ff620536811\" (UID: \"3d946542-3346-4185-ba40-2ff620536811\") " Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.246645 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3d946542-3346-4185-ba40-2ff620536811-logs" (OuterVolumeSpecName: "logs") pod "3d946542-3346-4185-ba40-2ff620536811" (UID: "3d946542-3346-4185-ba40-2ff620536811"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.264156 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d946542-3346-4185-ba40-2ff620536811-kube-api-access-fbm9d" (OuterVolumeSpecName: "kube-api-access-fbm9d") pod "3d946542-3346-4185-ba40-2ff620536811" (UID: "3d946542-3346-4185-ba40-2ff620536811"). InnerVolumeSpecName "kube-api-access-fbm9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.332093 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3d946542-3346-4185-ba40-2ff620536811" (UID: "3d946542-3346-4185-ba40-2ff620536811"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.343111 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d946542-3346-4185-ba40-2ff620536811" (UID: "3d946542-3346-4185-ba40-2ff620536811"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.346152 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-config-data" (OuterVolumeSpecName: "config-data") pod "3d946542-3346-4185-ba40-2ff620536811" (UID: "3d946542-3346-4185-ba40-2ff620536811"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.347868 4958 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.347896 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.347906 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d946542-3346-4185-ba40-2ff620536811-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.347915 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d946542-3346-4185-ba40-2ff620536811-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.347922 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbm9d\" (UniqueName: \"kubernetes.io/projected/3d946542-3346-4185-ba40-2ff620536811-kube-api-access-fbm9d\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.367985 4958 generic.go:334] "Generic (PLEG): container finished" podID="0597eccc-ad38-4d42-900c-4bf0c61984c6" containerID="9fcff044fe7a008c3af58608fb8fddf740b135034be10bd62cf252f4b42c651a" exitCode=143 Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.368038 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0597eccc-ad38-4d42-900c-4bf0c61984c6","Type":"ContainerDied","Data":"9fcff044fe7a008c3af58608fb8fddf740b135034be10bd62cf252f4b42c651a"} Oct 08 
06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.376384 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bd785c49-h5t57" event={"ID":"2d6dbb43-cc4d-47e5-97c6-76d1638d50e0","Type":"ContainerDied","Data":"ad0ee06a20737348d29935fac69ad248e8b2d92456459ab81f9f23750c85af32"} Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.376429 4958 scope.go:117] "RemoveContainer" containerID="029ca8abfaf31b17ac5aed49a82e63c7ea88669308d49b70385f7b6663bd95b1" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.376549 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bd785c49-h5t57" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.387164 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qs945" event={"ID":"25549551-2ec2-4cf2-800a-b3da40ce78f0","Type":"ContainerDied","Data":"e462cc5692ecf0c4b3be54d52145af91eb1b28374af35cf70198756d9f4b3bbd"} Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.387210 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e462cc5692ecf0c4b3be54d52145af91eb1b28374af35cf70198756d9f4b3bbd" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.387268 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qs945" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.418144 4958 scope.go:117] "RemoveContainer" containerID="7252a20e8b89a3faceec9505d6ce02b27827cd7944c4322b31f28639ca45374a" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.423654 4958 generic.go:334] "Generic (PLEG): container finished" podID="3d946542-3346-4185-ba40-2ff620536811" containerID="055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446" exitCode=0 Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.423687 4958 generic.go:334] "Generic (PLEG): container finished" podID="3d946542-3346-4185-ba40-2ff620536811" containerID="48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b" exitCode=143 Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.424551 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.428679 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d946542-3346-4185-ba40-2ff620536811","Type":"ContainerDied","Data":"055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446"} Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.428713 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d946542-3346-4185-ba40-2ff620536811","Type":"ContainerDied","Data":"48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b"} Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.428724 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d946542-3346-4185-ba40-2ff620536811","Type":"ContainerDied","Data":"bdfe1027107355b9a62502d72bb44f706ee3687be48b02927048e273b4012839"} Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.472059 4958 scope.go:117] "RemoveContainer" 
containerID="055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.472175 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-h5t57"] Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.481135 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bd785c49-h5t57"] Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.491047 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.504244 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.518717 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 06:55:03 crc kubenswrapper[4958]: E1008 06:55:03.519140 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25549551-2ec2-4cf2-800a-b3da40ce78f0" containerName="nova-cell1-conductor-db-sync" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.519152 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="25549551-2ec2-4cf2-800a-b3da40ce78f0" containerName="nova-cell1-conductor-db-sync" Oct 08 06:55:03 crc kubenswrapper[4958]: E1008 06:55:03.519166 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6dbb43-cc4d-47e5-97c6-76d1638d50e0" containerName="init" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.519172 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6dbb43-cc4d-47e5-97c6-76d1638d50e0" containerName="init" Oct 08 06:55:03 crc kubenswrapper[4958]: E1008 06:55:03.519196 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc06996f-e37d-472a-8912-683dbc0049a5" containerName="nova-manage" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.519203 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bc06996f-e37d-472a-8912-683dbc0049a5" containerName="nova-manage" Oct 08 06:55:03 crc kubenswrapper[4958]: E1008 06:55:03.519219 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d946542-3346-4185-ba40-2ff620536811" containerName="nova-metadata-metadata" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.519224 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d946542-3346-4185-ba40-2ff620536811" containerName="nova-metadata-metadata" Oct 08 06:55:03 crc kubenswrapper[4958]: E1008 06:55:03.519236 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6dbb43-cc4d-47e5-97c6-76d1638d50e0" containerName="dnsmasq-dns" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.519242 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6dbb43-cc4d-47e5-97c6-76d1638d50e0" containerName="dnsmasq-dns" Oct 08 06:55:03 crc kubenswrapper[4958]: E1008 06:55:03.519252 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d946542-3346-4185-ba40-2ff620536811" containerName="nova-metadata-log" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.519258 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d946542-3346-4185-ba40-2ff620536811" containerName="nova-metadata-log" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.519436 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d946542-3346-4185-ba40-2ff620536811" containerName="nova-metadata-log" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.519451 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc06996f-e37d-472a-8912-683dbc0049a5" containerName="nova-manage" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.519461 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d6dbb43-cc4d-47e5-97c6-76d1638d50e0" containerName="dnsmasq-dns" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.519472 4958 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="3d946542-3346-4185-ba40-2ff620536811" containerName="nova-metadata-metadata" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.519483 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="25549551-2ec2-4cf2-800a-b3da40ce78f0" containerName="nova-cell1-conductor-db-sync" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.520088 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.522097 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.524620 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.526154 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.527590 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.528310 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.532079 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.534823 4958 scope.go:117] "RemoveContainer" containerID="48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.546398 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.568444 4958 scope.go:117] "RemoveContainer" containerID="055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446" Oct 
08 06:55:03 crc kubenswrapper[4958]: E1008 06:55:03.568875 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446\": container with ID starting with 055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446 not found: ID does not exist" containerID="055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.568902 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446"} err="failed to get container status \"055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446\": rpc error: code = NotFound desc = could not find container \"055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446\": container with ID starting with 055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446 not found: ID does not exist" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.568922 4958 scope.go:117] "RemoveContainer" containerID="48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b" Oct 08 06:55:03 crc kubenswrapper[4958]: E1008 06:55:03.569598 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b\": container with ID starting with 48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b not found: ID does not exist" containerID="48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.569623 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b"} err="failed to get container status 
\"48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b\": rpc error: code = NotFound desc = could not find container \"48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b\": container with ID starting with 48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b not found: ID does not exist" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.569637 4958 scope.go:117] "RemoveContainer" containerID="055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.569801 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446"} err="failed to get container status \"055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446\": rpc error: code = NotFound desc = could not find container \"055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446\": container with ID starting with 055dcefaecfbd5b88f10ea8c936d088cdc18103ba92f564fc0201a2b62d5b446 not found: ID does not exist" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.569817 4958 scope.go:117] "RemoveContainer" containerID="48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.570018 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b"} err="failed to get container status \"48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b\": rpc error: code = NotFound desc = could not find container \"48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b\": container with ID starting with 48b9ea66aefafc0bf3379afd67bc213fe965018cfff03743eccc5fd531ad4a9b not found: ID does not exist" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.593831 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2d6dbb43-cc4d-47e5-97c6-76d1638d50e0" path="/var/lib/kubelet/pods/2d6dbb43-cc4d-47e5-97c6-76d1638d50e0/volumes" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.594446 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d946542-3346-4185-ba40-2ff620536811" path="/var/lib/kubelet/pods/3d946542-3346-4185-ba40-2ff620536811/volumes" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.652937 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eec9f7a-946e-405d-85af-8db9b57626e4-logs\") pod \"nova-metadata-0\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.653051 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.653121 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42gcb\" (UniqueName: \"kubernetes.io/projected/6eec9f7a-946e-405d-85af-8db9b57626e4-kube-api-access-42gcb\") pod \"nova-metadata-0\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.653179 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f196852b-bfdf-43dd-9579-3ecd8601e7bf-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f196852b-bfdf-43dd-9579-3ecd8601e7bf\") " pod="openstack/nova-cell1-conductor-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.653208 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.653230 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw2dd\" (UniqueName: \"kubernetes.io/projected/f196852b-bfdf-43dd-9579-3ecd8601e7bf-kube-api-access-rw2dd\") pod \"nova-cell1-conductor-0\" (UID: \"f196852b-bfdf-43dd-9579-3ecd8601e7bf\") " pod="openstack/nova-cell1-conductor-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.653276 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-config-data\") pod \"nova-metadata-0\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.653298 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f196852b-bfdf-43dd-9579-3ecd8601e7bf-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f196852b-bfdf-43dd-9579-3ecd8601e7bf\") " pod="openstack/nova-cell1-conductor-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.755437 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.755514 4958 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-42gcb\" (UniqueName: \"kubernetes.io/projected/6eec9f7a-946e-405d-85af-8db9b57626e4-kube-api-access-42gcb\") pod \"nova-metadata-0\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.755549 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f196852b-bfdf-43dd-9579-3ecd8601e7bf-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f196852b-bfdf-43dd-9579-3ecd8601e7bf\") " pod="openstack/nova-cell1-conductor-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.755570 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.755591 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw2dd\" (UniqueName: \"kubernetes.io/projected/f196852b-bfdf-43dd-9579-3ecd8601e7bf-kube-api-access-rw2dd\") pod \"nova-cell1-conductor-0\" (UID: \"f196852b-bfdf-43dd-9579-3ecd8601e7bf\") " pod="openstack/nova-cell1-conductor-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.755628 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-config-data\") pod \"nova-metadata-0\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.755652 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f196852b-bfdf-43dd-9579-3ecd8601e7bf-config-data\") pod 
\"nova-cell1-conductor-0\" (UID: \"f196852b-bfdf-43dd-9579-3ecd8601e7bf\") " pod="openstack/nova-cell1-conductor-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.755674 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eec9f7a-946e-405d-85af-8db9b57626e4-logs\") pod \"nova-metadata-0\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.756308 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eec9f7a-946e-405d-85af-8db9b57626e4-logs\") pod \"nova-metadata-0\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.758834 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f196852b-bfdf-43dd-9579-3ecd8601e7bf-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f196852b-bfdf-43dd-9579-3ecd8601e7bf\") " pod="openstack/nova-cell1-conductor-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.759396 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.759676 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f196852b-bfdf-43dd-9579-3ecd8601e7bf-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f196852b-bfdf-43dd-9579-3ecd8601e7bf\") " pod="openstack/nova-cell1-conductor-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.759956 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.760842 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-config-data\") pod \"nova-metadata-0\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.775291 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw2dd\" (UniqueName: \"kubernetes.io/projected/f196852b-bfdf-43dd-9579-3ecd8601e7bf-kube-api-access-rw2dd\") pod \"nova-cell1-conductor-0\" (UID: \"f196852b-bfdf-43dd-9579-3ecd8601e7bf\") " pod="openstack/nova-cell1-conductor-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.783515 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42gcb\" (UniqueName: \"kubernetes.io/projected/6eec9f7a-946e-405d-85af-8db9b57626e4-kube-api-access-42gcb\") pod \"nova-metadata-0\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " pod="openstack/nova-metadata-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.859118 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 06:55:03 crc kubenswrapper[4958]: I1008 06:55:03.861339 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:55:04 crc kubenswrapper[4958]: I1008 06:55:04.346211 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 06:55:04 crc kubenswrapper[4958]: W1008 06:55:04.353230 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf196852b_bfdf_43dd_9579_3ecd8601e7bf.slice/crio-62cefbec2a6d0b04edcde390861946520e12b2c889703b44789b730f2e93054d WatchSource:0}: Error finding container 62cefbec2a6d0b04edcde390861946520e12b2c889703b44789b730f2e93054d: Status 404 returned error can't find the container with id 62cefbec2a6d0b04edcde390861946520e12b2c889703b44789b730f2e93054d Oct 08 06:55:04 crc kubenswrapper[4958]: I1008 06:55:04.439055 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f196852b-bfdf-43dd-9579-3ecd8601e7bf","Type":"ContainerStarted","Data":"62cefbec2a6d0b04edcde390861946520e12b2c889703b44789b730f2e93054d"} Oct 08 06:55:04 crc kubenswrapper[4958]: I1008 06:55:04.439423 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:55:04 crc kubenswrapper[4958]: I1008 06:55:04.443471 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="002517c1-bdf3-4f0d-b525-735b3fccd163" containerName="nova-scheduler-scheduler" containerID="cri-o://d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b" gracePeriod=30 Oct 08 06:55:05 crc kubenswrapper[4958]: I1008 06:55:05.470378 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f196852b-bfdf-43dd-9579-3ecd8601e7bf","Type":"ContainerStarted","Data":"6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e"} Oct 08 06:55:05 crc kubenswrapper[4958]: I1008 06:55:05.477138 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-metadata-0" event={"ID":"6eec9f7a-946e-405d-85af-8db9b57626e4","Type":"ContainerStarted","Data":"94e4c3c16fd38d409470ba03e1ca0144de72eea08de26f34d78f718bdebe1366"} Oct 08 06:55:05 crc kubenswrapper[4958]: I1008 06:55:05.477264 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6eec9f7a-946e-405d-85af-8db9b57626e4","Type":"ContainerStarted","Data":"91ee2ec7c90225037498d378a5d37763b77a5f53b393538fa9f1834f2c8aa897"} Oct 08 06:55:05 crc kubenswrapper[4958]: I1008 06:55:05.477286 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6eec9f7a-946e-405d-85af-8db9b57626e4","Type":"ContainerStarted","Data":"eb4159cf73dbefddf2a90eabddfae32170c10ffe92ff4905dccdd7e16c272ad9"} Oct 08 06:55:05 crc kubenswrapper[4958]: I1008 06:55:05.514910 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.514883281 podStartE2EDuration="2.514883281s" podCreationTimestamp="2025-10-08 06:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:55:05.497860232 +0000 UTC m=+1248.627552873" watchObservedRunningTime="2025-10-08 06:55:05.514883281 +0000 UTC m=+1248.644575912" Oct 08 06:55:05 crc kubenswrapper[4958]: I1008 06:55:05.532644 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.53261933 podStartE2EDuration="2.53261933s" podCreationTimestamp="2025-10-08 06:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:55:05.529205938 +0000 UTC m=+1248.658898579" watchObservedRunningTime="2025-10-08 06:55:05.53261933 +0000 UTC m=+1248.662311961" Oct 08 06:55:06 crc kubenswrapper[4958]: E1008 06:55:06.464592 4958 log.go:32] "ExecSync 
cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 06:55:06 crc kubenswrapper[4958]: E1008 06:55:06.471444 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 06:55:06 crc kubenswrapper[4958]: E1008 06:55:06.473918 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 06:55:06 crc kubenswrapper[4958]: E1008 06:55:06.474222 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="002517c1-bdf3-4f0d-b525-735b3fccd163" containerName="nova-scheduler-scheduler" Oct 08 06:55:06 crc kubenswrapper[4958]: I1008 06:55:06.492580 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 08 06:55:06 crc kubenswrapper[4958]: I1008 06:55:06.844428 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 
06:55:06 crc kubenswrapper[4958]: I1008 06:55:06.844493 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.315741 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.428987 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/002517c1-bdf3-4f0d-b525-735b3fccd163-combined-ca-bundle\") pod \"002517c1-bdf3-4f0d-b525-735b3fccd163\" (UID: \"002517c1-bdf3-4f0d-b525-735b3fccd163\") " Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.429335 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/002517c1-bdf3-4f0d-b525-735b3fccd163-config-data\") pod \"002517c1-bdf3-4f0d-b525-735b3fccd163\" (UID: \"002517c1-bdf3-4f0d-b525-735b3fccd163\") " Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.429400 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lmlc\" (UniqueName: \"kubernetes.io/projected/002517c1-bdf3-4f0d-b525-735b3fccd163-kube-api-access-6lmlc\") pod \"002517c1-bdf3-4f0d-b525-735b3fccd163\" (UID: \"002517c1-bdf3-4f0d-b525-735b3fccd163\") " Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.436989 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/002517c1-bdf3-4f0d-b525-735b3fccd163-kube-api-access-6lmlc" (OuterVolumeSpecName: "kube-api-access-6lmlc") pod "002517c1-bdf3-4f0d-b525-735b3fccd163" (UID: "002517c1-bdf3-4f0d-b525-735b3fccd163"). 
InnerVolumeSpecName "kube-api-access-6lmlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.467105 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/002517c1-bdf3-4f0d-b525-735b3fccd163-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "002517c1-bdf3-4f0d-b525-735b3fccd163" (UID: "002517c1-bdf3-4f0d-b525-735b3fccd163"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.479187 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/002517c1-bdf3-4f0d-b525-735b3fccd163-config-data" (OuterVolumeSpecName: "config-data") pod "002517c1-bdf3-4f0d-b525-735b3fccd163" (UID: "002517c1-bdf3-4f0d-b525-735b3fccd163"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.505316 4958 generic.go:334] "Generic (PLEG): container finished" podID="002517c1-bdf3-4f0d-b525-735b3fccd163" containerID="d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b" exitCode=0 Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.505386 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"002517c1-bdf3-4f0d-b525-735b3fccd163","Type":"ContainerDied","Data":"d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b"} Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.505404 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.505434 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"002517c1-bdf3-4f0d-b525-735b3fccd163","Type":"ContainerDied","Data":"69fac8fd878c6cd76e75b692c60f654a8337d19ec09ebafa99af205f67e8947e"} Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.505467 4958 scope.go:117] "RemoveContainer" containerID="d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.532271 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/002517c1-bdf3-4f0d-b525-735b3fccd163-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.532303 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/002517c1-bdf3-4f0d-b525-735b3fccd163-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.532316 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lmlc\" (UniqueName: \"kubernetes.io/projected/002517c1-bdf3-4f0d-b525-735b3fccd163-kube-api-access-6lmlc\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.559177 4958 scope.go:117] "RemoveContainer" containerID="d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.559660 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:55:07 crc kubenswrapper[4958]: E1008 06:55:07.560448 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b\": container with ID starting with 
d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b not found: ID does not exist" containerID="d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.560487 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b"} err="failed to get container status \"d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b\": rpc error: code = NotFound desc = could not find container \"d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b\": container with ID starting with d8210b4afeee51abfe74cb3e8ef746944ae8b13c2b920e31c52b07a8e359c31b not found: ID does not exist" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.591711 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.597932 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:55:07 crc kubenswrapper[4958]: E1008 06:55:07.598401 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="002517c1-bdf3-4f0d-b525-735b3fccd163" containerName="nova-scheduler-scheduler" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.598425 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="002517c1-bdf3-4f0d-b525-735b3fccd163" containerName="nova-scheduler-scheduler" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.598688 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="002517c1-bdf3-4f0d-b525-735b3fccd163" containerName="nova-scheduler-scheduler" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.599448 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.601985 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.606977 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.735710 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3fad8ac8-1c24-499a-bcfd-70a4b69f976c\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.736567 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-config-data\") pod \"nova-scheduler-0\" (UID: \"3fad8ac8-1c24-499a-bcfd-70a4b69f976c\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.736656 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnlhv\" (UniqueName: \"kubernetes.io/projected/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-kube-api-access-pnlhv\") pod \"nova-scheduler-0\" (UID: \"3fad8ac8-1c24-499a-bcfd-70a4b69f976c\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.837869 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnlhv\" (UniqueName: \"kubernetes.io/projected/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-kube-api-access-pnlhv\") pod \"nova-scheduler-0\" (UID: \"3fad8ac8-1c24-499a-bcfd-70a4b69f976c\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.838005 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3fad8ac8-1c24-499a-bcfd-70a4b69f976c\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.838633 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-config-data\") pod \"nova-scheduler-0\" (UID: \"3fad8ac8-1c24-499a-bcfd-70a4b69f976c\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.842493 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-config-data\") pod \"nova-scheduler-0\" (UID: \"3fad8ac8-1c24-499a-bcfd-70a4b69f976c\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.844679 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3fad8ac8-1c24-499a-bcfd-70a4b69f976c\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.859506 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnlhv\" (UniqueName: \"kubernetes.io/projected/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-kube-api-access-pnlhv\") pod \"nova-scheduler-0\" (UID: \"3fad8ac8-1c24-499a-bcfd-70a4b69f976c\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:07 crc kubenswrapper[4958]: I1008 06:55:07.918326 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.389608 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:55:08 crc kubenswrapper[4958]: W1008 06:55:08.392937 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fad8ac8_1c24_499a_bcfd_70a4b69f976c.slice/crio-e2417cba051747bf580f26402086c437b7f54fa5b221a93a4acf856eee771dc5 WatchSource:0}: Error finding container e2417cba051747bf580f26402086c437b7f54fa5b221a93a4acf856eee771dc5: Status 404 returned error can't find the container with id e2417cba051747bf580f26402086c437b7f54fa5b221a93a4acf856eee771dc5 Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.503425 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.521004 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3fad8ac8-1c24-499a-bcfd-70a4b69f976c","Type":"ContainerStarted","Data":"e2417cba051747bf580f26402086c437b7f54fa5b221a93a4acf856eee771dc5"} Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.524345 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.524378 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0597eccc-ad38-4d42-900c-4bf0c61984c6","Type":"ContainerDied","Data":"390383b766abc0a2893f97fa7476629440dd291d0761cff448150c4e59bfc1d8"} Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.524446 4958 scope.go:117] "RemoveContainer" containerID="390383b766abc0a2893f97fa7476629440dd291d0761cff448150c4e59bfc1d8" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.524286 4958 generic.go:334] "Generic (PLEG): container finished" podID="0597eccc-ad38-4d42-900c-4bf0c61984c6" containerID="390383b766abc0a2893f97fa7476629440dd291d0761cff448150c4e59bfc1d8" exitCode=0 Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.527290 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0597eccc-ad38-4d42-900c-4bf0c61984c6","Type":"ContainerDied","Data":"23c5f5ac06891185f223e9aae9516210a554c504aeafb932c003336ea3d8c662"} Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.567847 4958 scope.go:117] "RemoveContainer" containerID="9fcff044fe7a008c3af58608fb8fddf740b135034be10bd62cf252f4b42c651a" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.599682 4958 scope.go:117] "RemoveContainer" containerID="390383b766abc0a2893f97fa7476629440dd291d0761cff448150c4e59bfc1d8" Oct 08 06:55:08 crc kubenswrapper[4958]: E1008 06:55:08.600178 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"390383b766abc0a2893f97fa7476629440dd291d0761cff448150c4e59bfc1d8\": container with ID starting with 390383b766abc0a2893f97fa7476629440dd291d0761cff448150c4e59bfc1d8 not found: ID does not exist" containerID="390383b766abc0a2893f97fa7476629440dd291d0761cff448150c4e59bfc1d8" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.600244 4958 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"390383b766abc0a2893f97fa7476629440dd291d0761cff448150c4e59bfc1d8"} err="failed to get container status \"390383b766abc0a2893f97fa7476629440dd291d0761cff448150c4e59bfc1d8\": rpc error: code = NotFound desc = could not find container \"390383b766abc0a2893f97fa7476629440dd291d0761cff448150c4e59bfc1d8\": container with ID starting with 390383b766abc0a2893f97fa7476629440dd291d0761cff448150c4e59bfc1d8 not found: ID does not exist" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.600290 4958 scope.go:117] "RemoveContainer" containerID="9fcff044fe7a008c3af58608fb8fddf740b135034be10bd62cf252f4b42c651a" Oct 08 06:55:08 crc kubenswrapper[4958]: E1008 06:55:08.600756 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fcff044fe7a008c3af58608fb8fddf740b135034be10bd62cf252f4b42c651a\": container with ID starting with 9fcff044fe7a008c3af58608fb8fddf740b135034be10bd62cf252f4b42c651a not found: ID does not exist" containerID="9fcff044fe7a008c3af58608fb8fddf740b135034be10bd62cf252f4b42c651a" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.600778 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fcff044fe7a008c3af58608fb8fddf740b135034be10bd62cf252f4b42c651a"} err="failed to get container status \"9fcff044fe7a008c3af58608fb8fddf740b135034be10bd62cf252f4b42c651a\": rpc error: code = NotFound desc = could not find container \"9fcff044fe7a008c3af58608fb8fddf740b135034be10bd62cf252f4b42c651a\": container with ID starting with 9fcff044fe7a008c3af58608fb8fddf740b135034be10bd62cf252f4b42c651a not found: ID does not exist" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.652203 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0597eccc-ad38-4d42-900c-4bf0c61984c6-combined-ca-bundle\") 
pod \"0597eccc-ad38-4d42-900c-4bf0c61984c6\" (UID: \"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.652364 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0597eccc-ad38-4d42-900c-4bf0c61984c6-config-data\") pod \"0597eccc-ad38-4d42-900c-4bf0c61984c6\" (UID: \"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.652436 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb4wx\" (UniqueName: \"kubernetes.io/projected/0597eccc-ad38-4d42-900c-4bf0c61984c6-kube-api-access-vb4wx\") pod \"0597eccc-ad38-4d42-900c-4bf0c61984c6\" (UID: \"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.652561 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0597eccc-ad38-4d42-900c-4bf0c61984c6-logs\") pod \"0597eccc-ad38-4d42-900c-4bf0c61984c6\" (UID: \"0597eccc-ad38-4d42-900c-4bf0c61984c6\") " Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.653299 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0597eccc-ad38-4d42-900c-4bf0c61984c6-logs" (OuterVolumeSpecName: "logs") pod "0597eccc-ad38-4d42-900c-4bf0c61984c6" (UID: "0597eccc-ad38-4d42-900c-4bf0c61984c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.658938 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0597eccc-ad38-4d42-900c-4bf0c61984c6-kube-api-access-vb4wx" (OuterVolumeSpecName: "kube-api-access-vb4wx") pod "0597eccc-ad38-4d42-900c-4bf0c61984c6" (UID: "0597eccc-ad38-4d42-900c-4bf0c61984c6"). InnerVolumeSpecName "kube-api-access-vb4wx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.700738 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0597eccc-ad38-4d42-900c-4bf0c61984c6-config-data" (OuterVolumeSpecName: "config-data") pod "0597eccc-ad38-4d42-900c-4bf0c61984c6" (UID: "0597eccc-ad38-4d42-900c-4bf0c61984c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.708431 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0597eccc-ad38-4d42-900c-4bf0c61984c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0597eccc-ad38-4d42-900c-4bf0c61984c6" (UID: "0597eccc-ad38-4d42-900c-4bf0c61984c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.755817 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0597eccc-ad38-4d42-900c-4bf0c61984c6-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.755868 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0597eccc-ad38-4d42-900c-4bf0c61984c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.755892 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0597eccc-ad38-4d42-900c-4bf0c61984c6-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.755910 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb4wx\" (UniqueName: \"kubernetes.io/projected/0597eccc-ad38-4d42-900c-4bf0c61984c6-kube-api-access-vb4wx\") on node \"crc\" DevicePath \"\"" Oct 08 
06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.861496 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.861557 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.887681 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.921245 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.936326 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 08 06:55:08 crc kubenswrapper[4958]: E1008 06:55:08.950882 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0597eccc-ad38-4d42-900c-4bf0c61984c6" containerName="nova-api-log" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.950981 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0597eccc-ad38-4d42-900c-4bf0c61984c6" containerName="nova-api-log" Oct 08 06:55:08 crc kubenswrapper[4958]: E1008 06:55:08.951078 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0597eccc-ad38-4d42-900c-4bf0c61984c6" containerName="nova-api-api" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.951099 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0597eccc-ad38-4d42-900c-4bf0c61984c6" containerName="nova-api-api" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.951716 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0597eccc-ad38-4d42-900c-4bf0c61984c6" containerName="nova-api-api" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.951813 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0597eccc-ad38-4d42-900c-4bf0c61984c6" containerName="nova-api-log" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 
06:55:08.953451 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.953568 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 06:55:08 crc kubenswrapper[4958]: I1008 06:55:08.956735 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.062031 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6864d7-7709-42a3-b6a0-3cb828949685-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") " pod="openstack/nova-api-0" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.062359 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c6864d7-7709-42a3-b6a0-3cb828949685-logs\") pod \"nova-api-0\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") " pod="openstack/nova-api-0" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.062433 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6864d7-7709-42a3-b6a0-3cb828949685-config-data\") pod \"nova-api-0\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") " pod="openstack/nova-api-0" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.062553 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2bpp\" (UniqueName: \"kubernetes.io/projected/5c6864d7-7709-42a3-b6a0-3cb828949685-kube-api-access-j2bpp\") pod \"nova-api-0\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") " pod="openstack/nova-api-0" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.164764 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6864d7-7709-42a3-b6a0-3cb828949685-config-data\") pod \"nova-api-0\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") " pod="openstack/nova-api-0" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.165020 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2bpp\" (UniqueName: \"kubernetes.io/projected/5c6864d7-7709-42a3-b6a0-3cb828949685-kube-api-access-j2bpp\") pod \"nova-api-0\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") " pod="openstack/nova-api-0" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.165131 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6864d7-7709-42a3-b6a0-3cb828949685-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") " pod="openstack/nova-api-0" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.165207 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c6864d7-7709-42a3-b6a0-3cb828949685-logs\") pod \"nova-api-0\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") " pod="openstack/nova-api-0" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.165873 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c6864d7-7709-42a3-b6a0-3cb828949685-logs\") pod \"nova-api-0\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") " pod="openstack/nova-api-0" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.172889 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6864d7-7709-42a3-b6a0-3cb828949685-config-data\") pod \"nova-api-0\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") " pod="openstack/nova-api-0" Oct 
08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.174646 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6864d7-7709-42a3-b6a0-3cb828949685-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") " pod="openstack/nova-api-0" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.200536 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2bpp\" (UniqueName: \"kubernetes.io/projected/5c6864d7-7709-42a3-b6a0-3cb828949685-kube-api-access-j2bpp\") pod \"nova-api-0\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") " pod="openstack/nova-api-0" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.300875 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.548043 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3fad8ac8-1c24-499a-bcfd-70a4b69f976c","Type":"ContainerStarted","Data":"8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8"} Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.586064 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="002517c1-bdf3-4f0d-b525-735b3fccd163" path="/var/lib/kubelet/pods/002517c1-bdf3-4f0d-b525-735b3fccd163/volumes" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.586725 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0597eccc-ad38-4d42-900c-4bf0c61984c6" path="/var/lib/kubelet/pods/0597eccc-ad38-4d42-900c-4bf0c61984c6/volumes" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.798726 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7987107030000002 podStartE2EDuration="2.798710703s" podCreationTimestamp="2025-10-08 06:55:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:55:09.576111928 +0000 UTC m=+1252.705804629" watchObservedRunningTime="2025-10-08 06:55:09.798710703 +0000 UTC m=+1252.928403304" Oct 08 06:55:09 crc kubenswrapper[4958]: I1008 06:55:09.804432 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:55:10 crc kubenswrapper[4958]: I1008 06:55:10.562146 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c6864d7-7709-42a3-b6a0-3cb828949685","Type":"ContainerStarted","Data":"0175e0f5df6e3173a9e2581b6527c3f2210728e1c27c7a3329e4d45b01385591"} Oct 08 06:55:10 crc kubenswrapper[4958]: I1008 06:55:10.562457 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c6864d7-7709-42a3-b6a0-3cb828949685","Type":"ContainerStarted","Data":"65c34fa0196d8ec5f907b515ac308fba3df157f247014f3955b779f6903b4df9"} Oct 08 06:55:10 crc kubenswrapper[4958]: I1008 06:55:10.562473 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c6864d7-7709-42a3-b6a0-3cb828949685","Type":"ContainerStarted","Data":"9d84ea48e17d9953089806d6521e1a21779a7d31d86fd8fbfd9ba21984846609"} Oct 08 06:55:10 crc kubenswrapper[4958]: I1008 06:55:10.585820 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.585798958 podStartE2EDuration="2.585798958s" podCreationTimestamp="2025-10-08 06:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:55:10.58181189 +0000 UTC m=+1253.711504491" watchObservedRunningTime="2025-10-08 06:55:10.585798958 +0000 UTC m=+1253.715491559" Oct 08 06:55:12 crc kubenswrapper[4958]: I1008 06:55:12.919450 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-scheduler-0" Oct 08 06:55:13 crc kubenswrapper[4958]: I1008 06:55:13.862023 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 08 06:55:13 crc kubenswrapper[4958]: I1008 06:55:13.862079 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 08 06:55:13 crc kubenswrapper[4958]: I1008 06:55:13.907578 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 08 06:55:14 crc kubenswrapper[4958]: I1008 06:55:14.343044 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 08 06:55:14 crc kubenswrapper[4958]: I1008 06:55:14.879112 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6eec9f7a-946e-405d-85af-8db9b57626e4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 06:55:14 crc kubenswrapper[4958]: I1008 06:55:14.879337 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6eec9f7a-946e-405d-85af-8db9b57626e4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 06:55:17 crc kubenswrapper[4958]: I1008 06:55:17.919469 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 08 06:55:17 crc kubenswrapper[4958]: I1008 06:55:17.971801 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 08 06:55:18 crc kubenswrapper[4958]: I1008 06:55:18.697447 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 08 06:55:19 crc 
kubenswrapper[4958]: I1008 06:55:19.302153 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 06:55:19 crc kubenswrapper[4958]: I1008 06:55:19.302577 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 06:55:20 crc kubenswrapper[4958]: I1008 06:55:20.384255 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5c6864d7-7709-42a3-b6a0-3cb828949685" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 06:55:20 crc kubenswrapper[4958]: I1008 06:55:20.384393 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5c6864d7-7709-42a3-b6a0-3cb828949685" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 06:55:23 crc kubenswrapper[4958]: I1008 06:55:23.871083 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 08 06:55:23 crc kubenswrapper[4958]: I1008 06:55:23.881405 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 08 06:55:23 crc kubenswrapper[4958]: I1008 06:55:23.881558 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 08 06:55:24 crc kubenswrapper[4958]: I1008 06:55:24.747989 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 08 06:55:26 crc kubenswrapper[4958]: I1008 06:55:26.759174 4958 generic.go:334] "Generic (PLEG): container finished" podID="1d377988-7ac8-4f84-adbf-28eab7e9128e" containerID="f90e9a7fc7a74d3104410e62dd42776120239fbd58e43d11e67fb278dc36af7a" exitCode=137 Oct 08 06:55:26 crc 
kubenswrapper[4958]: I1008 06:55:26.759248 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1d377988-7ac8-4f84-adbf-28eab7e9128e","Type":"ContainerDied","Data":"f90e9a7fc7a74d3104410e62dd42776120239fbd58e43d11e67fb278dc36af7a"} Oct 08 06:55:26 crc kubenswrapper[4958]: I1008 06:55:26.759519 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1d377988-7ac8-4f84-adbf-28eab7e9128e","Type":"ContainerDied","Data":"3338abfa60ab474b4e945af8c07b688859483db61fec23f5dd57308d23a581b1"} Oct 08 06:55:26 crc kubenswrapper[4958]: I1008 06:55:26.759534 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3338abfa60ab474b4e945af8c07b688859483db61fec23f5dd57308d23a581b1" Oct 08 06:55:26 crc kubenswrapper[4958]: I1008 06:55:26.832631 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:26 crc kubenswrapper[4958]: I1008 06:55:26.937683 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d377988-7ac8-4f84-adbf-28eab7e9128e-config-data\") pod \"1d377988-7ac8-4f84-adbf-28eab7e9128e\" (UID: \"1d377988-7ac8-4f84-adbf-28eab7e9128e\") " Oct 08 06:55:26 crc kubenswrapper[4958]: I1008 06:55:26.938254 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsrvj\" (UniqueName: \"kubernetes.io/projected/1d377988-7ac8-4f84-adbf-28eab7e9128e-kube-api-access-hsrvj\") pod \"1d377988-7ac8-4f84-adbf-28eab7e9128e\" (UID: \"1d377988-7ac8-4f84-adbf-28eab7e9128e\") " Oct 08 06:55:26 crc kubenswrapper[4958]: I1008 06:55:26.938468 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d377988-7ac8-4f84-adbf-28eab7e9128e-combined-ca-bundle\") pod 
\"1d377988-7ac8-4f84-adbf-28eab7e9128e\" (UID: \"1d377988-7ac8-4f84-adbf-28eab7e9128e\") " Oct 08 06:55:26 crc kubenswrapper[4958]: I1008 06:55:26.945842 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d377988-7ac8-4f84-adbf-28eab7e9128e-kube-api-access-hsrvj" (OuterVolumeSpecName: "kube-api-access-hsrvj") pod "1d377988-7ac8-4f84-adbf-28eab7e9128e" (UID: "1d377988-7ac8-4f84-adbf-28eab7e9128e"). InnerVolumeSpecName "kube-api-access-hsrvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:55:26 crc kubenswrapper[4958]: I1008 06:55:26.985005 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d377988-7ac8-4f84-adbf-28eab7e9128e-config-data" (OuterVolumeSpecName: "config-data") pod "1d377988-7ac8-4f84-adbf-28eab7e9128e" (UID: "1d377988-7ac8-4f84-adbf-28eab7e9128e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:26 crc kubenswrapper[4958]: I1008 06:55:26.987656 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d377988-7ac8-4f84-adbf-28eab7e9128e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d377988-7ac8-4f84-adbf-28eab7e9128e" (UID: "1d377988-7ac8-4f84-adbf-28eab7e9128e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.041282 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d377988-7ac8-4f84-adbf-28eab7e9128e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.041337 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d377988-7ac8-4f84-adbf-28eab7e9128e-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.041357 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsrvj\" (UniqueName: \"kubernetes.io/projected/1d377988-7ac8-4f84-adbf-28eab7e9128e-kube-api-access-hsrvj\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.770916 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.825258 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.858769 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.875480 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 06:55:27 crc kubenswrapper[4958]: E1008 06:55:27.876015 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d377988-7ac8-4f84-adbf-28eab7e9128e" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.876037 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d377988-7ac8-4f84-adbf-28eab7e9128e" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 06:55:27 crc kubenswrapper[4958]: 
I1008 06:55:27.876298 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d377988-7ac8-4f84-adbf-28eab7e9128e" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.877107 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.883651 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.883754 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.883774 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.898628 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.965512 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.965589 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.965684 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.965739 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4m2c\" (UniqueName: \"kubernetes.io/projected/c106a441-1a40-4bee-9317-b6957f8a6c94-kube-api-access-l4m2c\") pod \"nova-cell1-novncproxy-0\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:27 crc kubenswrapper[4958]: I1008 06:55:27.965816 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:28 crc kubenswrapper[4958]: I1008 06:55:28.067685 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:28 crc kubenswrapper[4958]: I1008 06:55:28.067798 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:28 crc kubenswrapper[4958]: I1008 06:55:28.067883 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:28 crc kubenswrapper[4958]: I1008 06:55:28.067986 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4m2c\" (UniqueName: \"kubernetes.io/projected/c106a441-1a40-4bee-9317-b6957f8a6c94-kube-api-access-l4m2c\") pod \"nova-cell1-novncproxy-0\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:28 crc kubenswrapper[4958]: I1008 06:55:28.068102 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:28 crc kubenswrapper[4958]: I1008 06:55:28.076223 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:28 crc kubenswrapper[4958]: I1008 06:55:28.076395 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:28 crc kubenswrapper[4958]: I1008 06:55:28.100385 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:28 crc kubenswrapper[4958]: I1008 06:55:28.100736 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:28 crc kubenswrapper[4958]: I1008 06:55:28.101307 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4m2c\" (UniqueName: \"kubernetes.io/projected/c106a441-1a40-4bee-9317-b6957f8a6c94-kube-api-access-l4m2c\") pod \"nova-cell1-novncproxy-0\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:28 crc kubenswrapper[4958]: I1008 06:55:28.196119 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:28 crc kubenswrapper[4958]: W1008 06:55:28.756254 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc106a441_1a40_4bee_9317_b6957f8a6c94.slice/crio-4dd51308e54947861a5e8a3f0c8a3e77502befa6f79b092e217aa68c614faf9b WatchSource:0}: Error finding container 4dd51308e54947861a5e8a3f0c8a3e77502befa6f79b092e217aa68c614faf9b: Status 404 returned error can't find the container with id 4dd51308e54947861a5e8a3f0c8a3e77502befa6f79b092e217aa68c614faf9b Oct 08 06:55:28 crc kubenswrapper[4958]: I1008 06:55:28.758732 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 06:55:28 crc kubenswrapper[4958]: I1008 06:55:28.785802 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c106a441-1a40-4bee-9317-b6957f8a6c94","Type":"ContainerStarted","Data":"4dd51308e54947861a5e8a3f0c8a3e77502befa6f79b092e217aa68c614faf9b"} Oct 08 06:55:29 crc kubenswrapper[4958]: I1008 06:55:29.308278 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 06:55:29 crc kubenswrapper[4958]: I1008 06:55:29.309646 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 06:55:29 crc kubenswrapper[4958]: I1008 06:55:29.312156 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 06:55:29 crc kubenswrapper[4958]: I1008 06:55:29.317705 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 06:55:29 crc kubenswrapper[4958]: I1008 06:55:29.588107 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d377988-7ac8-4f84-adbf-28eab7e9128e" path="/var/lib/kubelet/pods/1d377988-7ac8-4f84-adbf-28eab7e9128e/volumes" Oct 08 06:55:29 crc 
kubenswrapper[4958]: I1008 06:55:29.799303 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c106a441-1a40-4bee-9317-b6957f8a6c94","Type":"ContainerStarted","Data":"455c494d9988f4d2b518099bd3cb05171a0ceaffd36f37f6d69ded313d75a5e2"} Oct 08 06:55:29 crc kubenswrapper[4958]: I1008 06:55:29.799658 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 06:55:29 crc kubenswrapper[4958]: I1008 06:55:29.811017 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 06:55:29 crc kubenswrapper[4958]: I1008 06:55:29.830919 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.830889163 podStartE2EDuration="2.830889163s" podCreationTimestamp="2025-10-08 06:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:55:29.818043897 +0000 UTC m=+1272.947736508" watchObservedRunningTime="2025-10-08 06:55:29.830889163 +0000 UTC m=+1272.960581804" Oct 08 06:55:29 crc kubenswrapper[4958]: I1008 06:55:29.977502 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-5mtl4"] Oct 08 06:55:29 crc kubenswrapper[4958]: I1008 06:55:29.979166 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.004440 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-5mtl4"] Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.107090 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.107354 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-dns-svc\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.107381 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.107401 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-config\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.107523 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.107764 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nchdp\" (UniqueName: \"kubernetes.io/projected/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-kube-api-access-nchdp\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.210149 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.210222 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-dns-svc\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.210251 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.210324 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-config\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.210408 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.210598 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nchdp\" (UniqueName: \"kubernetes.io/projected/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-kube-api-access-nchdp\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.211344 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-dns-svc\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.211351 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-config\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.211372 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.211372 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.211836 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.244571 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nchdp\" (UniqueName: \"kubernetes.io/projected/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-kube-api-access-nchdp\") pod \"dnsmasq-dns-6d4d96bb9-5mtl4\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.298115 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.592519 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-5mtl4"] Oct 08 06:55:30 crc kubenswrapper[4958]: W1008 06:55:30.609585 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3e3061c_ca6e_43c6_ba1d_2520f28142c6.slice/crio-0f9e9c67892ab00ebc5c5e0f638b4951040a5238cff42c494fd70929b5d150b3 WatchSource:0}: Error finding container 0f9e9c67892ab00ebc5c5e0f638b4951040a5238cff42c494fd70929b5d150b3: Status 404 returned error can't find the container with id 0f9e9c67892ab00ebc5c5e0f638b4951040a5238cff42c494fd70929b5d150b3 Oct 08 06:55:30 crc kubenswrapper[4958]: I1008 06:55:30.809325 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" event={"ID":"b3e3061c-ca6e-43c6-ba1d-2520f28142c6","Type":"ContainerStarted","Data":"0f9e9c67892ab00ebc5c5e0f638b4951040a5238cff42c494fd70929b5d150b3"} Oct 08 06:55:31 crc kubenswrapper[4958]: I1008 06:55:31.592529 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:55:31 crc kubenswrapper[4958]: I1008 06:55:31.593237 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="ceilometer-central-agent" containerID="cri-o://8da3c05f804e73d89736d57dce94c00953045d8d97c67b3648b3714dada8f2d9" gracePeriod=30 Oct 08 06:55:31 crc kubenswrapper[4958]: I1008 06:55:31.593323 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="ceilometer-notification-agent" containerID="cri-o://f2b5c325174ca55a506e0030b61af4ddc1024765cfa6d2acbeb2242623a1ed64" gracePeriod=30 Oct 08 06:55:31 crc kubenswrapper[4958]: I1008 
06:55:31.593343 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="proxy-httpd" containerID="cri-o://0e19d06b3e926c69b54e43c11dcfc7de9d7a5f8624c873e00af289abda321d5e" gracePeriod=30 Oct 08 06:55:31 crc kubenswrapper[4958]: I1008 06:55:31.593324 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="sg-core" containerID="cri-o://d4bb3873a7309661f2e990983b00bab9efc693a2acac87ec5fd0ffda963953d2" gracePeriod=30 Oct 08 06:55:31 crc kubenswrapper[4958]: I1008 06:55:31.825609 4958 generic.go:334] "Generic (PLEG): container finished" podID="c8d3300d-5acc-450e-8820-219933eeac53" containerID="0e19d06b3e926c69b54e43c11dcfc7de9d7a5f8624c873e00af289abda321d5e" exitCode=0 Oct 08 06:55:31 crc kubenswrapper[4958]: I1008 06:55:31.825931 4958 generic.go:334] "Generic (PLEG): container finished" podID="c8d3300d-5acc-450e-8820-219933eeac53" containerID="d4bb3873a7309661f2e990983b00bab9efc693a2acac87ec5fd0ffda963953d2" exitCode=2 Oct 08 06:55:31 crc kubenswrapper[4958]: I1008 06:55:31.825687 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8d3300d-5acc-450e-8820-219933eeac53","Type":"ContainerDied","Data":"0e19d06b3e926c69b54e43c11dcfc7de9d7a5f8624c873e00af289abda321d5e"} Oct 08 06:55:31 crc kubenswrapper[4958]: I1008 06:55:31.826005 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8d3300d-5acc-450e-8820-219933eeac53","Type":"ContainerDied","Data":"d4bb3873a7309661f2e990983b00bab9efc693a2acac87ec5fd0ffda963953d2"} Oct 08 06:55:31 crc kubenswrapper[4958]: I1008 06:55:31.827878 4958 generic.go:334] "Generic (PLEG): container finished" podID="b3e3061c-ca6e-43c6-ba1d-2520f28142c6" containerID="68cda18cb7a1f232876eb4eae773f044c5139e874cd3c645d33bf84495525943" exitCode=0 Oct 
08 06:55:31 crc kubenswrapper[4958]: I1008 06:55:31.829027 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" event={"ID":"b3e3061c-ca6e-43c6-ba1d-2520f28142c6","Type":"ContainerDied","Data":"68cda18cb7a1f232876eb4eae773f044c5139e874cd3c645d33bf84495525943"} Oct 08 06:55:32 crc kubenswrapper[4958]: I1008 06:55:32.847937 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" event={"ID":"b3e3061c-ca6e-43c6-ba1d-2520f28142c6","Type":"ContainerStarted","Data":"58af15037162a3e008ea15295768d765f391483c1890551a881a9bfccea4d905"} Oct 08 06:55:32 crc kubenswrapper[4958]: I1008 06:55:32.848831 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:55:32 crc kubenswrapper[4958]: I1008 06:55:32.852417 4958 generic.go:334] "Generic (PLEG): container finished" podID="c8d3300d-5acc-450e-8820-219933eeac53" containerID="8da3c05f804e73d89736d57dce94c00953045d8d97c67b3648b3714dada8f2d9" exitCode=0 Oct 08 06:55:32 crc kubenswrapper[4958]: I1008 06:55:32.852455 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8d3300d-5acc-450e-8820-219933eeac53","Type":"ContainerDied","Data":"8da3c05f804e73d89736d57dce94c00953045d8d97c67b3648b3714dada8f2d9"} Oct 08 06:55:32 crc kubenswrapper[4958]: I1008 06:55:32.885397 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" podStartSLOduration=3.885374889 podStartE2EDuration="3.885374889s" podCreationTimestamp="2025-10-08 06:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:55:32.873867058 +0000 UTC m=+1276.003559669" watchObservedRunningTime="2025-10-08 06:55:32.885374889 +0000 UTC m=+1276.015067490" Oct 08 06:55:33 crc kubenswrapper[4958]: I1008 06:55:33.026688 4958 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:55:33 crc kubenswrapper[4958]: I1008 06:55:33.026978 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5c6864d7-7709-42a3-b6a0-3cb828949685" containerName="nova-api-log" containerID="cri-o://65c34fa0196d8ec5f907b515ac308fba3df157f247014f3955b779f6903b4df9" gracePeriod=30 Oct 08 06:55:33 crc kubenswrapper[4958]: I1008 06:55:33.027147 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5c6864d7-7709-42a3-b6a0-3cb828949685" containerName="nova-api-api" containerID="cri-o://0175e0f5df6e3173a9e2581b6527c3f2210728e1c27c7a3329e4d45b01385591" gracePeriod=30 Oct 08 06:55:33 crc kubenswrapper[4958]: I1008 06:55:33.196679 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:55:33 crc kubenswrapper[4958]: I1008 06:55:33.877347 4958 generic.go:334] "Generic (PLEG): container finished" podID="5c6864d7-7709-42a3-b6a0-3cb828949685" containerID="65c34fa0196d8ec5f907b515ac308fba3df157f247014f3955b779f6903b4df9" exitCode=143 Oct 08 06:55:33 crc kubenswrapper[4958]: I1008 06:55:33.877442 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c6864d7-7709-42a3-b6a0-3cb828949685","Type":"ContainerDied","Data":"65c34fa0196d8ec5f907b515ac308fba3df157f247014f3955b779f6903b4df9"} Oct 08 06:55:34 crc kubenswrapper[4958]: I1008 06:55:34.896749 4958 generic.go:334] "Generic (PLEG): container finished" podID="c8d3300d-5acc-450e-8820-219933eeac53" containerID="f2b5c325174ca55a506e0030b61af4ddc1024765cfa6d2acbeb2242623a1ed64" exitCode=0 Oct 08 06:55:34 crc kubenswrapper[4958]: I1008 06:55:34.896842 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c8d3300d-5acc-450e-8820-219933eeac53","Type":"ContainerDied","Data":"f2b5c325174ca55a506e0030b61af4ddc1024765cfa6d2acbeb2242623a1ed64"} Oct 08 06:55:34 crc kubenswrapper[4958]: I1008 06:55:34.897400 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c8d3300d-5acc-450e-8820-219933eeac53","Type":"ContainerDied","Data":"5f5599584dc6386e3b09d96d641fec32d5b17f64ca818ec1c831978579f079c9"} Oct 08 06:55:34 crc kubenswrapper[4958]: I1008 06:55:34.897436 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f5599584dc6386e3b09d96d641fec32d5b17f64ca818ec1c831978579f079c9" Oct 08 06:55:34 crc kubenswrapper[4958]: I1008 06:55:34.920922 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.027113 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8d3300d-5acc-450e-8820-219933eeac53-run-httpd\") pod \"c8d3300d-5acc-450e-8820-219933eeac53\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.027165 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-sg-core-conf-yaml\") pod \"c8d3300d-5acc-450e-8820-219933eeac53\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.027263 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-combined-ca-bundle\") pod \"c8d3300d-5acc-450e-8820-219933eeac53\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.027291 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lrrcj\" (UniqueName: \"kubernetes.io/projected/c8d3300d-5acc-450e-8820-219933eeac53-kube-api-access-lrrcj\") pod \"c8d3300d-5acc-450e-8820-219933eeac53\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.027362 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-config-data\") pod \"c8d3300d-5acc-450e-8820-219933eeac53\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.027450 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-ceilometer-tls-certs\") pod \"c8d3300d-5acc-450e-8820-219933eeac53\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.027480 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8d3300d-5acc-450e-8820-219933eeac53-log-httpd\") pod \"c8d3300d-5acc-450e-8820-219933eeac53\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.027543 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-scripts\") pod \"c8d3300d-5acc-450e-8820-219933eeac53\" (UID: \"c8d3300d-5acc-450e-8820-219933eeac53\") " Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.027665 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8d3300d-5acc-450e-8820-219933eeac53-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c8d3300d-5acc-450e-8820-219933eeac53" (UID: 
"c8d3300d-5acc-450e-8820-219933eeac53"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.028357 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8d3300d-5acc-450e-8820-219933eeac53-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.029390 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8d3300d-5acc-450e-8820-219933eeac53-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c8d3300d-5acc-450e-8820-219933eeac53" (UID: "c8d3300d-5acc-450e-8820-219933eeac53"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.036348 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-scripts" (OuterVolumeSpecName: "scripts") pod "c8d3300d-5acc-450e-8820-219933eeac53" (UID: "c8d3300d-5acc-450e-8820-219933eeac53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.038690 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8d3300d-5acc-450e-8820-219933eeac53-kube-api-access-lrrcj" (OuterVolumeSpecName: "kube-api-access-lrrcj") pod "c8d3300d-5acc-450e-8820-219933eeac53" (UID: "c8d3300d-5acc-450e-8820-219933eeac53"). InnerVolumeSpecName "kube-api-access-lrrcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.078359 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c8d3300d-5acc-450e-8820-219933eeac53" (UID: "c8d3300d-5acc-450e-8820-219933eeac53"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.100279 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c8d3300d-5acc-450e-8820-219933eeac53" (UID: "c8d3300d-5acc-450e-8820-219933eeac53"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.130719 4958 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.130758 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c8d3300d-5acc-450e-8820-219933eeac53-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.130770 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.130783 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-sg-core-conf-yaml\") on node \"crc\" 
DevicePath \"\"" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.130795 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrrcj\" (UniqueName: \"kubernetes.io/projected/c8d3300d-5acc-450e-8820-219933eeac53-kube-api-access-lrrcj\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.164229 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8d3300d-5acc-450e-8820-219933eeac53" (UID: "c8d3300d-5acc-450e-8820-219933eeac53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.200900 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-config-data" (OuterVolumeSpecName: "config-data") pod "c8d3300d-5acc-450e-8820-219933eeac53" (UID: "c8d3300d-5acc-450e-8820-219933eeac53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.232630 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.232654 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d3300d-5acc-450e-8820-219933eeac53-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.907862 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.954630 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:55:35 crc kubenswrapper[4958]: I1008 06:55:35.967117 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.011616 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:55:36 crc kubenswrapper[4958]: E1008 06:55:36.012243 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="proxy-httpd" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.012274 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="proxy-httpd" Oct 08 06:55:36 crc kubenswrapper[4958]: E1008 06:55:36.012297 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="ceilometer-central-agent" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.012312 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="ceilometer-central-agent" Oct 08 06:55:36 crc kubenswrapper[4958]: E1008 06:55:36.012348 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="sg-core" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.012362 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="sg-core" Oct 08 06:55:36 crc kubenswrapper[4958]: E1008 06:55:36.012413 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="ceilometer-notification-agent" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.012426 4958 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="ceilometer-notification-agent" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.012747 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="proxy-httpd" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.012786 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="ceilometer-central-agent" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.012823 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="ceilometer-notification-agent" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.012840 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d3300d-5acc-450e-8820-219933eeac53" containerName="sg-core" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.016568 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.020415 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.020748 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.020838 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.026806 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.153309 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-config-data\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.153396 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxnhc\" (UniqueName: \"kubernetes.io/projected/90612def-876b-4ae6-88e6-7f3de02515e6-kube-api-access-xxnhc\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.153571 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.153711 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.153938 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.154013 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90612def-876b-4ae6-88e6-7f3de02515e6-run-httpd\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.154041 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-scripts\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.154068 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90612def-876b-4ae6-88e6-7f3de02515e6-log-httpd\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.255922 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-config-data\") pod \"ceilometer-0\" (UID: 
\"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.257255 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxnhc\" (UniqueName: \"kubernetes.io/projected/90612def-876b-4ae6-88e6-7f3de02515e6-kube-api-access-xxnhc\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.259546 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.260541 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.262715 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-config-data\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.265746 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.267445 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.267639 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90612def-876b-4ae6-88e6-7f3de02515e6-run-httpd\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.267779 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-scripts\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.267918 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90612def-876b-4ae6-88e6-7f3de02515e6-log-httpd\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.268720 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90612def-876b-4ae6-88e6-7f3de02515e6-log-httpd\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.269765 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0" Oct 08 06:55:36 crc 
kubenswrapper[4958]: I1008 06:55:36.270028 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90612def-876b-4ae6-88e6-7f3de02515e6-run-httpd\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.272565 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.273117 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-scripts\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.282635 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxnhc\" (UniqueName: \"kubernetes.io/projected/90612def-876b-4ae6-88e6-7f3de02515e6-kube-api-access-xxnhc\") pod \"ceilometer-0\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " pod="openstack/ceilometer-0"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.382932 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.631983 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.675052 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2bpp\" (UniqueName: \"kubernetes.io/projected/5c6864d7-7709-42a3-b6a0-3cb828949685-kube-api-access-j2bpp\") pod \"5c6864d7-7709-42a3-b6a0-3cb828949685\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") "
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.675375 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6864d7-7709-42a3-b6a0-3cb828949685-config-data\") pod \"5c6864d7-7709-42a3-b6a0-3cb828949685\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") "
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.675402 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c6864d7-7709-42a3-b6a0-3cb828949685-logs\") pod \"5c6864d7-7709-42a3-b6a0-3cb828949685\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") "
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.675423 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6864d7-7709-42a3-b6a0-3cb828949685-combined-ca-bundle\") pod \"5c6864d7-7709-42a3-b6a0-3cb828949685\" (UID: \"5c6864d7-7709-42a3-b6a0-3cb828949685\") "
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.675909 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c6864d7-7709-42a3-b6a0-3cb828949685-logs" (OuterVolumeSpecName: "logs") pod "5c6864d7-7709-42a3-b6a0-3cb828949685" (UID: "5c6864d7-7709-42a3-b6a0-3cb828949685"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.682844 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c6864d7-7709-42a3-b6a0-3cb828949685-kube-api-access-j2bpp" (OuterVolumeSpecName: "kube-api-access-j2bpp") pod "5c6864d7-7709-42a3-b6a0-3cb828949685" (UID: "5c6864d7-7709-42a3-b6a0-3cb828949685"). InnerVolumeSpecName "kube-api-access-j2bpp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.710122 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6864d7-7709-42a3-b6a0-3cb828949685-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c6864d7-7709-42a3-b6a0-3cb828949685" (UID: "5c6864d7-7709-42a3-b6a0-3cb828949685"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.744239 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6864d7-7709-42a3-b6a0-3cb828949685-config-data" (OuterVolumeSpecName: "config-data") pod "5c6864d7-7709-42a3-b6a0-3cb828949685" (UID: "5c6864d7-7709-42a3-b6a0-3cb828949685"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.777033 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6864d7-7709-42a3-b6a0-3cb828949685-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.777067 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c6864d7-7709-42a3-b6a0-3cb828949685-logs\") on node \"crc\" DevicePath \"\""
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.777076 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6864d7-7709-42a3-b6a0-3cb828949685-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.777087 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2bpp\" (UniqueName: \"kubernetes.io/projected/5c6864d7-7709-42a3-b6a0-3cb828949685-kube-api-access-j2bpp\") on node \"crc\" DevicePath \"\""
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.844730 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.848048 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.848188 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.849169 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b130d69a8aa9d1feb209f9a1f16b0b50db741e0776429ac28dc669dd948e901"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.849247 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://5b130d69a8aa9d1feb209f9a1f16b0b50db741e0776429ac28dc669dd948e901" gracePeriod=600
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.855308 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.919282 4958 generic.go:334] "Generic (PLEG): container finished" podID="5c6864d7-7709-42a3-b6a0-3cb828949685" containerID="0175e0f5df6e3173a9e2581b6527c3f2210728e1c27c7a3329e4d45b01385591" exitCode=0
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.919344 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.919377 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c6864d7-7709-42a3-b6a0-3cb828949685","Type":"ContainerDied","Data":"0175e0f5df6e3173a9e2581b6527c3f2210728e1c27c7a3329e4d45b01385591"}
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.919429 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c6864d7-7709-42a3-b6a0-3cb828949685","Type":"ContainerDied","Data":"9d84ea48e17d9953089806d6521e1a21779a7d31d86fd8fbfd9ba21984846609"}
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.919447 4958 scope.go:117] "RemoveContainer" containerID="0175e0f5df6e3173a9e2581b6527c3f2210728e1c27c7a3329e4d45b01385591"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.922220 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90612def-876b-4ae6-88e6-7f3de02515e6","Type":"ContainerStarted","Data":"321f7570be8af4a0f3450c3585f6f936d73e0f3b6871ac8372aff4a00c88cbc0"}
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.946929 4958 scope.go:117] "RemoveContainer" containerID="65c34fa0196d8ec5f907b515ac308fba3df157f247014f3955b779f6903b4df9"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.949094 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.956650 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.972882 4958 scope.go:117] "RemoveContainer" containerID="0175e0f5df6e3173a9e2581b6527c3f2210728e1c27c7a3329e4d45b01385591"
Oct 08 06:55:36 crc kubenswrapper[4958]: E1008 06:55:36.973362 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0175e0f5df6e3173a9e2581b6527c3f2210728e1c27c7a3329e4d45b01385591\": container with ID starting with 0175e0f5df6e3173a9e2581b6527c3f2210728e1c27c7a3329e4d45b01385591 not found: ID does not exist" containerID="0175e0f5df6e3173a9e2581b6527c3f2210728e1c27c7a3329e4d45b01385591"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.973399 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0175e0f5df6e3173a9e2581b6527c3f2210728e1c27c7a3329e4d45b01385591"} err="failed to get container status \"0175e0f5df6e3173a9e2581b6527c3f2210728e1c27c7a3329e4d45b01385591\": rpc error: code = NotFound desc = could not find container \"0175e0f5df6e3173a9e2581b6527c3f2210728e1c27c7a3329e4d45b01385591\": container with ID starting with 0175e0f5df6e3173a9e2581b6527c3f2210728e1c27c7a3329e4d45b01385591 not found: ID does not exist"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.973426 4958 scope.go:117] "RemoveContainer" containerID="65c34fa0196d8ec5f907b515ac308fba3df157f247014f3955b779f6903b4df9"
Oct 08 06:55:36 crc kubenswrapper[4958]: E1008 06:55:36.975517 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65c34fa0196d8ec5f907b515ac308fba3df157f247014f3955b779f6903b4df9\": container with ID starting with 65c34fa0196d8ec5f907b515ac308fba3df157f247014f3955b779f6903b4df9 not found: ID does not exist" containerID="65c34fa0196d8ec5f907b515ac308fba3df157f247014f3955b779f6903b4df9"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.975546 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65c34fa0196d8ec5f907b515ac308fba3df157f247014f3955b779f6903b4df9"} err="failed to get container status \"65c34fa0196d8ec5f907b515ac308fba3df157f247014f3955b779f6903b4df9\": rpc error: code = NotFound desc = could not find container \"65c34fa0196d8ec5f907b515ac308fba3df157f247014f3955b779f6903b4df9\": container with ID starting with 65c34fa0196d8ec5f907b515ac308fba3df157f247014f3955b779f6903b4df9 not found: ID does not exist"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.976401 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Oct 08 06:55:36 crc kubenswrapper[4958]: E1008 06:55:36.976735 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6864d7-7709-42a3-b6a0-3cb828949685" containerName="nova-api-api"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.976753 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6864d7-7709-42a3-b6a0-3cb828949685" containerName="nova-api-api"
Oct 08 06:55:36 crc kubenswrapper[4958]: E1008 06:55:36.976794 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6864d7-7709-42a3-b6a0-3cb828949685" containerName="nova-api-log"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.976800 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6864d7-7709-42a3-b6a0-3cb828949685" containerName="nova-api-log"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.977012 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6864d7-7709-42a3-b6a0-3cb828949685" containerName="nova-api-api"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.977037 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6864d7-7709-42a3-b6a0-3cb828949685" containerName="nova-api-log"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.977898 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.982782 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.983079 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.983179 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Oct 08 06:55:36 crc kubenswrapper[4958]: I1008 06:55:36.996098 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.081349 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqh64\" (UniqueName: \"kubernetes.io/projected/465f5235-1e60-4998-9703-c595a9c02af8-kube-api-access-jqh64\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.081399 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.081441 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.081534 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465f5235-1e60-4998-9703-c595a9c02af8-logs\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.081586 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-config-data\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.081657 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-public-tls-certs\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.183539 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqh64\" (UniqueName: \"kubernetes.io/projected/465f5235-1e60-4998-9703-c595a9c02af8-kube-api-access-jqh64\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.183607 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.183673 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.183698 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465f5235-1e60-4998-9703-c595a9c02af8-logs\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.183722 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-config-data\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.183784 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-public-tls-certs\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.184270 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465f5235-1e60-4998-9703-c595a9c02af8-logs\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.188535 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.188743 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.189397 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-public-tls-certs\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.189491 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-config-data\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.199821 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqh64\" (UniqueName: \"kubernetes.io/projected/465f5235-1e60-4998-9703-c595a9c02af8-kube-api-access-jqh64\") pod \"nova-api-0\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.297748 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.588896 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c6864d7-7709-42a3-b6a0-3cb828949685" path="/var/lib/kubelet/pods/5c6864d7-7709-42a3-b6a0-3cb828949685/volumes"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.589874 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8d3300d-5acc-450e-8820-219933eeac53" path="/var/lib/kubelet/pods/c8d3300d-5acc-450e-8820-219933eeac53/volumes"
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.822052 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.937117 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"465f5235-1e60-4998-9703-c595a9c02af8","Type":"ContainerStarted","Data":"7cae5d56d1146ffb60ab36ca590ed72cac32e33fc909a6940aaeeb73fff482c2"}
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.940725 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="5b130d69a8aa9d1feb209f9a1f16b0b50db741e0776429ac28dc669dd948e901" exitCode=0
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.940781 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"5b130d69a8aa9d1feb209f9a1f16b0b50db741e0776429ac28dc669dd948e901"}
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.940803 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"51ab079a19b3cda7fa6717a3a6e40663eab6249d65864efbeac7f7030917e815"}
Oct 08 06:55:37 crc kubenswrapper[4958]: I1008 06:55:37.940821 4958 scope.go:117] "RemoveContainer" containerID="536ded4ebcb3bc7d24c7fc08780da096046aca9fd4294e702291ed048ad523b0"
Oct 08 06:55:38 crc kubenswrapper[4958]: I1008 06:55:38.196705 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Oct 08 06:55:38 crc kubenswrapper[4958]: I1008 06:55:38.218576 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Oct 08 06:55:38 crc kubenswrapper[4958]: I1008 06:55:38.958127 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90612def-876b-4ae6-88e6-7f3de02515e6","Type":"ContainerStarted","Data":"68ce1c4c20caacc4b4bf8c9c201771f0dd5518aadab7a90b068859b08e2891e2"}
Oct 08 06:55:38 crc kubenswrapper[4958]: I1008 06:55:38.958500 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90612def-876b-4ae6-88e6-7f3de02515e6","Type":"ContainerStarted","Data":"2cdd00dcc49f27e886ed7f8e467ce10332efd4b1b22f6c39eb2a00f35e73697a"}
Oct 08 06:55:38 crc kubenswrapper[4958]: I1008 06:55:38.960857 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"465f5235-1e60-4998-9703-c595a9c02af8","Type":"ContainerStarted","Data":"e7132bec9722e411e6f0be9e0076167a98e44a344f70071ae63577b95a30e4f3"}
Oct 08 06:55:38 crc kubenswrapper[4958]: I1008 06:55:38.960890 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"465f5235-1e60-4998-9703-c595a9c02af8","Type":"ContainerStarted","Data":"d2ce526fbd9afa15fc46b3d680d00426129d8118101945212dd591030ca77e3a"}
Oct 08 06:55:38 crc kubenswrapper[4958]: I1008 06:55:38.990036 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Oct 08 06:55:38 crc kubenswrapper[4958]: I1008 06:55:38.999540 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.99951188 podStartE2EDuration="2.99951188s" podCreationTimestamp="2025-10-08 06:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:55:38.986040736 +0000 UTC m=+1282.115733337" watchObservedRunningTime="2025-10-08 06:55:38.99951188 +0000 UTC m=+1282.129204511"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.184995 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qg756"]
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.186218 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qg756"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.193145 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.193574 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.203352 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qg756"]
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.322258 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-config-data\") pod \"nova-cell1-cell-mapping-qg756\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " pod="openstack/nova-cell1-cell-mapping-qg756"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.322308 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qg756\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " pod="openstack/nova-cell1-cell-mapping-qg756"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.322393 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-scripts\") pod \"nova-cell1-cell-mapping-qg756\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " pod="openstack/nova-cell1-cell-mapping-qg756"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.322435 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mljm\" (UniqueName: \"kubernetes.io/projected/3fcbbb25-135f-4269-86a1-359e0b962438-kube-api-access-7mljm\") pod \"nova-cell1-cell-mapping-qg756\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " pod="openstack/nova-cell1-cell-mapping-qg756"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.424610 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-scripts\") pod \"nova-cell1-cell-mapping-qg756\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " pod="openstack/nova-cell1-cell-mapping-qg756"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.425008 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mljm\" (UniqueName: \"kubernetes.io/projected/3fcbbb25-135f-4269-86a1-359e0b962438-kube-api-access-7mljm\") pod \"nova-cell1-cell-mapping-qg756\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " pod="openstack/nova-cell1-cell-mapping-qg756"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.425196 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-config-data\") pod \"nova-cell1-cell-mapping-qg756\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " pod="openstack/nova-cell1-cell-mapping-qg756"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.425246 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qg756\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " pod="openstack/nova-cell1-cell-mapping-qg756"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.428769 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-scripts\") pod \"nova-cell1-cell-mapping-qg756\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " pod="openstack/nova-cell1-cell-mapping-qg756"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.431182 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-config-data\") pod \"nova-cell1-cell-mapping-qg756\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " pod="openstack/nova-cell1-cell-mapping-qg756"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.435508 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qg756\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " pod="openstack/nova-cell1-cell-mapping-qg756"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.441661 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mljm\" (UniqueName: \"kubernetes.io/projected/3fcbbb25-135f-4269-86a1-359e0b962438-kube-api-access-7mljm\") pod \"nova-cell1-cell-mapping-qg756\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " pod="openstack/nova-cell1-cell-mapping-qg756"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.509082 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qg756"
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.968425 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qg756"]
Oct 08 06:55:39 crc kubenswrapper[4958]: I1008 06:55:39.976900 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90612def-876b-4ae6-88e6-7f3de02515e6","Type":"ContainerStarted","Data":"0af08bb335a7e945e0ddce748679a5557f4a3a541f9fc1355b64400fd3f89060"}
Oct 08 06:55:40 crc kubenswrapper[4958]: I1008 06:55:40.300166 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4"
Oct 08 06:55:40 crc kubenswrapper[4958]: I1008 06:55:40.366668 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-bbptg"]
Oct 08 06:55:40 crc kubenswrapper[4958]: I1008 06:55:40.366934 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" podUID="afd63ed7-27dd-4fea-aa37-c2d70cc4849e" containerName="dnsmasq-dns" containerID="cri-o://7019069f54baad6d2bc4f6cf124813da1ad18985acbba8b30d6b4f4e7d3dfe6f" gracePeriod=10
Oct 08 06:55:40 crc kubenswrapper[4958]: I1008 06:55:40.906499 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg"
Oct 08 06:55:40 crc kubenswrapper[4958]: I1008 06:55:40.986056 4958 generic.go:334] "Generic (PLEG): container finished" podID="afd63ed7-27dd-4fea-aa37-c2d70cc4849e" containerID="7019069f54baad6d2bc4f6cf124813da1ad18985acbba8b30d6b4f4e7d3dfe6f" exitCode=0
Oct 08 06:55:40 crc kubenswrapper[4958]: I1008 06:55:40.986121 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg"
Oct 08 06:55:40 crc kubenswrapper[4958]: I1008 06:55:40.986137 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" event={"ID":"afd63ed7-27dd-4fea-aa37-c2d70cc4849e","Type":"ContainerDied","Data":"7019069f54baad6d2bc4f6cf124813da1ad18985acbba8b30d6b4f4e7d3dfe6f"}
Oct 08 06:55:40 crc kubenswrapper[4958]: I1008 06:55:40.986163 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffc974fdf-bbptg" event={"ID":"afd63ed7-27dd-4fea-aa37-c2d70cc4849e","Type":"ContainerDied","Data":"226402ea8cd755790f6bb7cd7e8ea5d2f61a3f930f91b1ec39b8810b4d7cfc62"}
Oct 08 06:55:40 crc kubenswrapper[4958]: I1008 06:55:40.986180 4958 scope.go:117] "RemoveContainer" containerID="7019069f54baad6d2bc4f6cf124813da1ad18985acbba8b30d6b4f4e7d3dfe6f"
Oct 08 06:55:40 crc kubenswrapper[4958]: I1008 06:55:40.987843 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qg756" event={"ID":"3fcbbb25-135f-4269-86a1-359e0b962438","Type":"ContainerStarted","Data":"a1181e703f68a3ef6c82fdcf0b6021d7e2a37cef956a81ffaba24f68e68ca6fe"}
Oct 08 06:55:40 crc kubenswrapper[4958]: I1008 06:55:40.987893 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qg756" event={"ID":"3fcbbb25-135f-4269-86a1-359e0b962438","Type":"ContainerStarted","Data":"9db4cef1921de14663d594c760185719766a1c3e3d557c61a6f0b21e2adcf795"}
Oct 08 06:55:40 crc kubenswrapper[4958]: I1008 06:55:40.994608 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90612def-876b-4ae6-88e6-7f3de02515e6","Type":"ContainerStarted","Data":"ba7cba23461cfa60d06e4353ca3b9cfa8431aae9505a7f4bb4c5c971360e0d5e"}
Oct 08 06:55:40 crc kubenswrapper[4958]: I1008 06:55:40.994748 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.001285 4958 scope.go:117] "RemoveContainer" containerID="7bb300cc19e73864a49d420266af3cb7e773b94f300afa0e3dfd9d679c6d7595"
Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.011647 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-qg756" podStartSLOduration=2.011632463 podStartE2EDuration="2.011632463s" podCreationTimestamp="2025-10-08 06:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:55:41.011339515 +0000 UTC m=+1284.141032116" watchObservedRunningTime="2025-10-08 06:55:41.011632463 +0000 UTC m=+1284.141325064"
Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.022306 4958 scope.go:117] "RemoveContainer" containerID="7019069f54baad6d2bc4f6cf124813da1ad18985acbba8b30d6b4f4e7d3dfe6f"
Oct 08 06:55:41 crc kubenswrapper[4958]: E1008 06:55:41.022847 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7019069f54baad6d2bc4f6cf124813da1ad18985acbba8b30d6b4f4e7d3dfe6f\": container with ID starting with 7019069f54baad6d2bc4f6cf124813da1ad18985acbba8b30d6b4f4e7d3dfe6f not found: ID does not exist" containerID="7019069f54baad6d2bc4f6cf124813da1ad18985acbba8b30d6b4f4e7d3dfe6f"
Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.022891 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7019069f54baad6d2bc4f6cf124813da1ad18985acbba8b30d6b4f4e7d3dfe6f"} err="failed to get container status \"7019069f54baad6d2bc4f6cf124813da1ad18985acbba8b30d6b4f4e7d3dfe6f\": rpc error: code = NotFound desc = could not find container \"7019069f54baad6d2bc4f6cf124813da1ad18985acbba8b30d6b4f4e7d3dfe6f\": container with ID starting with 7019069f54baad6d2bc4f6cf124813da1ad18985acbba8b30d6b4f4e7d3dfe6f not found: ID does not exist"
Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.022918 4958 scope.go:117] "RemoveContainer" containerID="7bb300cc19e73864a49d420266af3cb7e773b94f300afa0e3dfd9d679c6d7595"
Oct 08 06:55:41 crc kubenswrapper[4958]: E1008 06:55:41.023445 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bb300cc19e73864a49d420266af3cb7e773b94f300afa0e3dfd9d679c6d7595\": container with ID starting with 7bb300cc19e73864a49d420266af3cb7e773b94f300afa0e3dfd9d679c6d7595 not found: ID does not exist" containerID="7bb300cc19e73864a49d420266af3cb7e773b94f300afa0e3dfd9d679c6d7595"
Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.023490 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bb300cc19e73864a49d420266af3cb7e773b94f300afa0e3dfd9d679c6d7595"} err="failed to get container status \"7bb300cc19e73864a49d420266af3cb7e773b94f300afa0e3dfd9d679c6d7595\": rpc error: code = NotFound desc = could not find container \"7bb300cc19e73864a49d420266af3cb7e773b94f300afa0e3dfd9d679c6d7595\": container with ID starting with 7bb300cc19e73864a49d420266af3cb7e773b94f300afa0e3dfd9d679c6d7595 not found: ID does not exist"
Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.031160 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.353616625 podStartE2EDuration="6.03114685s" podCreationTimestamp="2025-10-08 06:55:35 +0000 UTC" firstStartedPulling="2025-10-08 06:55:36.865158198 +0000 UTC m=+1279.994850809" lastFinishedPulling="2025-10-08 06:55:40.542688433 +0000 UTC m=+1283.672381034" observedRunningTime="2025-10-08 06:55:41.028515149 +0000 UTC m=+1284.158207750" watchObservedRunningTime="2025-10-08 06:55:41.03114685 +0000 UTC m=+1284.160839451"
Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.065927 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-ovsdbserver-nb\") pod \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") "
Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.066215 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-dns-swift-storage-0\") pod \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") "
Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.066330 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2dp5\" (UniqueName: \"kubernetes.io/projected/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-kube-api-access-w2dp5\") pod \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") "
Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.066475 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-config\") pod \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") "
Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.066588 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-dns-svc\") pod \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") "
Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.066738 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-ovsdbserver-sb\") pod \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\" (UID: \"afd63ed7-27dd-4fea-aa37-c2d70cc4849e\") "
Oct 08 06:55:41 crc kubenswrapper[4958]: I1008
06:55:41.073106 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-kube-api-access-w2dp5" (OuterVolumeSpecName: "kube-api-access-w2dp5") pod "afd63ed7-27dd-4fea-aa37-c2d70cc4849e" (UID: "afd63ed7-27dd-4fea-aa37-c2d70cc4849e"). InnerVolumeSpecName "kube-api-access-w2dp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.111502 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "afd63ed7-27dd-4fea-aa37-c2d70cc4849e" (UID: "afd63ed7-27dd-4fea-aa37-c2d70cc4849e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.115741 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "afd63ed7-27dd-4fea-aa37-c2d70cc4849e" (UID: "afd63ed7-27dd-4fea-aa37-c2d70cc4849e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.116551 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "afd63ed7-27dd-4fea-aa37-c2d70cc4849e" (UID: "afd63ed7-27dd-4fea-aa37-c2d70cc4849e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.118351 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-config" (OuterVolumeSpecName: "config") pod "afd63ed7-27dd-4fea-aa37-c2d70cc4849e" (UID: "afd63ed7-27dd-4fea-aa37-c2d70cc4849e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.137591 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "afd63ed7-27dd-4fea-aa37-c2d70cc4849e" (UID: "afd63ed7-27dd-4fea-aa37-c2d70cc4849e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.169196 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.169229 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.169239 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.169249 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:41 crc kubenswrapper[4958]: 
I1008 06:55:41.169258 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.169269 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2dp5\" (UniqueName: \"kubernetes.io/projected/afd63ed7-27dd-4fea-aa37-c2d70cc4849e-kube-api-access-w2dp5\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.319006 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-bbptg"] Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.326551 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffc974fdf-bbptg"] Oct 08 06:55:41 crc kubenswrapper[4958]: I1008 06:55:41.589840 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd63ed7-27dd-4fea-aa37-c2d70cc4849e" path="/var/lib/kubelet/pods/afd63ed7-27dd-4fea-aa37-c2d70cc4849e/volumes" Oct 08 06:55:45 crc kubenswrapper[4958]: I1008 06:55:45.046167 4958 generic.go:334] "Generic (PLEG): container finished" podID="3fcbbb25-135f-4269-86a1-359e0b962438" containerID="a1181e703f68a3ef6c82fdcf0b6021d7e2a37cef956a81ffaba24f68e68ca6fe" exitCode=0 Oct 08 06:55:45 crc kubenswrapper[4958]: I1008 06:55:45.046299 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qg756" event={"ID":"3fcbbb25-135f-4269-86a1-359e0b962438","Type":"ContainerDied","Data":"a1181e703f68a3ef6c82fdcf0b6021d7e2a37cef956a81ffaba24f68e68ca6fe"} Oct 08 06:55:46 crc kubenswrapper[4958]: I1008 06:55:46.430134 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qg756" Oct 08 06:55:46 crc kubenswrapper[4958]: I1008 06:55:46.573576 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mljm\" (UniqueName: \"kubernetes.io/projected/3fcbbb25-135f-4269-86a1-359e0b962438-kube-api-access-7mljm\") pod \"3fcbbb25-135f-4269-86a1-359e0b962438\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " Oct 08 06:55:46 crc kubenswrapper[4958]: I1008 06:55:46.573715 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-config-data\") pod \"3fcbbb25-135f-4269-86a1-359e0b962438\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " Oct 08 06:55:46 crc kubenswrapper[4958]: I1008 06:55:46.573756 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-scripts\") pod \"3fcbbb25-135f-4269-86a1-359e0b962438\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " Oct 08 06:55:46 crc kubenswrapper[4958]: I1008 06:55:46.573838 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-combined-ca-bundle\") pod \"3fcbbb25-135f-4269-86a1-359e0b962438\" (UID: \"3fcbbb25-135f-4269-86a1-359e0b962438\") " Oct 08 06:55:46 crc kubenswrapper[4958]: I1008 06:55:46.588409 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fcbbb25-135f-4269-86a1-359e0b962438-kube-api-access-7mljm" (OuterVolumeSpecName: "kube-api-access-7mljm") pod "3fcbbb25-135f-4269-86a1-359e0b962438" (UID: "3fcbbb25-135f-4269-86a1-359e0b962438"). InnerVolumeSpecName "kube-api-access-7mljm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:55:46 crc kubenswrapper[4958]: I1008 06:55:46.588446 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-scripts" (OuterVolumeSpecName: "scripts") pod "3fcbbb25-135f-4269-86a1-359e0b962438" (UID: "3fcbbb25-135f-4269-86a1-359e0b962438"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:46 crc kubenswrapper[4958]: I1008 06:55:46.623161 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fcbbb25-135f-4269-86a1-359e0b962438" (UID: "3fcbbb25-135f-4269-86a1-359e0b962438"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:46 crc kubenswrapper[4958]: I1008 06:55:46.631427 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-config-data" (OuterVolumeSpecName: "config-data") pod "3fcbbb25-135f-4269-86a1-359e0b962438" (UID: "3fcbbb25-135f-4269-86a1-359e0b962438"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:46 crc kubenswrapper[4958]: I1008 06:55:46.676861 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:46 crc kubenswrapper[4958]: I1008 06:55:46.676913 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:46 crc kubenswrapper[4958]: I1008 06:55:46.676933 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fcbbb25-135f-4269-86a1-359e0b962438-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:46 crc kubenswrapper[4958]: I1008 06:55:46.676986 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mljm\" (UniqueName: \"kubernetes.io/projected/3fcbbb25-135f-4269-86a1-359e0b962438-kube-api-access-7mljm\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:47 crc kubenswrapper[4958]: I1008 06:55:47.074298 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qg756" event={"ID":"3fcbbb25-135f-4269-86a1-359e0b962438","Type":"ContainerDied","Data":"9db4cef1921de14663d594c760185719766a1c3e3d557c61a6f0b21e2adcf795"} Oct 08 06:55:47 crc kubenswrapper[4958]: I1008 06:55:47.074342 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9db4cef1921de14663d594c760185719766a1c3e3d557c61a6f0b21e2adcf795" Oct 08 06:55:47 crc kubenswrapper[4958]: I1008 06:55:47.074759 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qg756" Oct 08 06:55:47 crc kubenswrapper[4958]: I1008 06:55:47.282787 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:55:47 crc kubenswrapper[4958]: I1008 06:55:47.283593 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3fad8ac8-1c24-499a-bcfd-70a4b69f976c" containerName="nova-scheduler-scheduler" containerID="cri-o://8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8" gracePeriod=30 Oct 08 06:55:47 crc kubenswrapper[4958]: I1008 06:55:47.298211 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 06:55:47 crc kubenswrapper[4958]: I1008 06:55:47.298979 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 06:55:47 crc kubenswrapper[4958]: I1008 06:55:47.301210 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:55:47 crc kubenswrapper[4958]: I1008 06:55:47.309398 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:55:47 crc kubenswrapper[4958]: I1008 06:55:47.309663 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6eec9f7a-946e-405d-85af-8db9b57626e4" containerName="nova-metadata-log" containerID="cri-o://91ee2ec7c90225037498d378a5d37763b77a5f53b393538fa9f1834f2c8aa897" gracePeriod=30 Oct 08 06:55:47 crc kubenswrapper[4958]: I1008 06:55:47.310105 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6eec9f7a-946e-405d-85af-8db9b57626e4" containerName="nova-metadata-metadata" containerID="cri-o://94e4c3c16fd38d409470ba03e1ca0144de72eea08de26f34d78f718bdebe1366" gracePeriod=30 Oct 08 06:55:47 crc kubenswrapper[4958]: E1008 06:55:47.921138 4958 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 06:55:47 crc kubenswrapper[4958]: E1008 06:55:47.923100 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 06:55:47 crc kubenswrapper[4958]: E1008 06:55:47.924587 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 06:55:47 crc kubenswrapper[4958]: E1008 06:55:47.924624 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3fad8ac8-1c24-499a-bcfd-70a4b69f976c" containerName="nova-scheduler-scheduler" Oct 08 06:55:48 crc kubenswrapper[4958]: I1008 06:55:48.087849 4958 generic.go:334] "Generic (PLEG): container finished" podID="6eec9f7a-946e-405d-85af-8db9b57626e4" containerID="91ee2ec7c90225037498d378a5d37763b77a5f53b393538fa9f1834f2c8aa897" exitCode=143 Oct 08 06:55:48 crc kubenswrapper[4958]: I1008 06:55:48.087917 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6eec9f7a-946e-405d-85af-8db9b57626e4","Type":"ContainerDied","Data":"91ee2ec7c90225037498d378a5d37763b77a5f53b393538fa9f1834f2c8aa897"} Oct 08 06:55:48 crc kubenswrapper[4958]: I1008 06:55:48.312087 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="465f5235-1e60-4998-9703-c595a9c02af8" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 06:55:48 crc kubenswrapper[4958]: I1008 06:55:48.312106 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="465f5235-1e60-4998-9703-c595a9c02af8" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 06:55:49 crc kubenswrapper[4958]: I1008 06:55:49.099931 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="465f5235-1e60-4998-9703-c595a9c02af8" containerName="nova-api-log" containerID="cri-o://d2ce526fbd9afa15fc46b3d680d00426129d8118101945212dd591030ca77e3a" gracePeriod=30 Oct 08 06:55:49 crc kubenswrapper[4958]: I1008 06:55:49.100529 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="465f5235-1e60-4998-9703-c595a9c02af8" containerName="nova-api-api" containerID="cri-o://e7132bec9722e411e6f0be9e0076167a98e44a344f70071ae63577b95a30e4f3" gracePeriod=30 Oct 08 06:55:50 crc kubenswrapper[4958]: I1008 06:55:50.109320 4958 generic.go:334] "Generic (PLEG): container finished" podID="465f5235-1e60-4998-9703-c595a9c02af8" containerID="d2ce526fbd9afa15fc46b3d680d00426129d8118101945212dd591030ca77e3a" exitCode=143 Oct 08 06:55:50 crc kubenswrapper[4958]: I1008 06:55:50.109357 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"465f5235-1e60-4998-9703-c595a9c02af8","Type":"ContainerDied","Data":"d2ce526fbd9afa15fc46b3d680d00426129d8118101945212dd591030ca77e3a"} Oct 08 06:55:50 crc kubenswrapper[4958]: I1008 06:55:50.434725 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6eec9f7a-946e-405d-85af-8db9b57626e4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:50652->10.217.0.194:8775: read: connection reset by peer" Oct 08 06:55:50 crc kubenswrapper[4958]: I1008 06:55:50.435045 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6eec9f7a-946e-405d-85af-8db9b57626e4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:50640->10.217.0.194:8775: read: connection reset by peer" Oct 08 06:55:50 crc kubenswrapper[4958]: I1008 06:55:50.961876 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.088575 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-config-data\") pod \"6eec9f7a-946e-405d-85af-8db9b57626e4\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.088660 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eec9f7a-946e-405d-85af-8db9b57626e4-logs\") pod \"6eec9f7a-946e-405d-85af-8db9b57626e4\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.088766 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-combined-ca-bundle\") pod \"6eec9f7a-946e-405d-85af-8db9b57626e4\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.088789 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-nova-metadata-tls-certs\") pod \"6eec9f7a-946e-405d-85af-8db9b57626e4\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.088888 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42gcb\" (UniqueName: \"kubernetes.io/projected/6eec9f7a-946e-405d-85af-8db9b57626e4-kube-api-access-42gcb\") pod \"6eec9f7a-946e-405d-85af-8db9b57626e4\" (UID: \"6eec9f7a-946e-405d-85af-8db9b57626e4\") " Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.091101 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6eec9f7a-946e-405d-85af-8db9b57626e4-logs" (OuterVolumeSpecName: "logs") pod "6eec9f7a-946e-405d-85af-8db9b57626e4" (UID: "6eec9f7a-946e-405d-85af-8db9b57626e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.100139 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eec9f7a-946e-405d-85af-8db9b57626e4-kube-api-access-42gcb" (OuterVolumeSpecName: "kube-api-access-42gcb") pod "6eec9f7a-946e-405d-85af-8db9b57626e4" (UID: "6eec9f7a-946e-405d-85af-8db9b57626e4"). InnerVolumeSpecName "kube-api-access-42gcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.128515 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6eec9f7a-946e-405d-85af-8db9b57626e4" (UID: "6eec9f7a-946e-405d-85af-8db9b57626e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.157239 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-config-data" (OuterVolumeSpecName: "config-data") pod "6eec9f7a-946e-405d-85af-8db9b57626e4" (UID: "6eec9f7a-946e-405d-85af-8db9b57626e4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.158661 4958 generic.go:334] "Generic (PLEG): container finished" podID="6eec9f7a-946e-405d-85af-8db9b57626e4" containerID="94e4c3c16fd38d409470ba03e1ca0144de72eea08de26f34d78f718bdebe1366" exitCode=0 Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.158699 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6eec9f7a-946e-405d-85af-8db9b57626e4","Type":"ContainerDied","Data":"94e4c3c16fd38d409470ba03e1ca0144de72eea08de26f34d78f718bdebe1366"} Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.158725 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6eec9f7a-946e-405d-85af-8db9b57626e4","Type":"ContainerDied","Data":"eb4159cf73dbefddf2a90eabddfae32170c10ffe92ff4905dccdd7e16c272ad9"} Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.158748 4958 scope.go:117] "RemoveContainer" containerID="94e4c3c16fd38d409470ba03e1ca0144de72eea08de26f34d78f718bdebe1366" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.158896 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.172455 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6eec9f7a-946e-405d-85af-8db9b57626e4" (UID: "6eec9f7a-946e-405d-85af-8db9b57626e4"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.190706 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42gcb\" (UniqueName: \"kubernetes.io/projected/6eec9f7a-946e-405d-85af-8db9b57626e4-kube-api-access-42gcb\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.190777 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.190791 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eec9f7a-946e-405d-85af-8db9b57626e4-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.190801 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.190810 4958 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eec9f7a-946e-405d-85af-8db9b57626e4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.190817 4958 scope.go:117] "RemoveContainer" containerID="91ee2ec7c90225037498d378a5d37763b77a5f53b393538fa9f1834f2c8aa897" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.207995 4958 scope.go:117] "RemoveContainer" containerID="94e4c3c16fd38d409470ba03e1ca0144de72eea08de26f34d78f718bdebe1366" Oct 08 06:55:51 crc kubenswrapper[4958]: E1008 06:55:51.208271 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"94e4c3c16fd38d409470ba03e1ca0144de72eea08de26f34d78f718bdebe1366\": container with ID starting with 94e4c3c16fd38d409470ba03e1ca0144de72eea08de26f34d78f718bdebe1366 not found: ID does not exist" containerID="94e4c3c16fd38d409470ba03e1ca0144de72eea08de26f34d78f718bdebe1366" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.208301 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e4c3c16fd38d409470ba03e1ca0144de72eea08de26f34d78f718bdebe1366"} err="failed to get container status \"94e4c3c16fd38d409470ba03e1ca0144de72eea08de26f34d78f718bdebe1366\": rpc error: code = NotFound desc = could not find container \"94e4c3c16fd38d409470ba03e1ca0144de72eea08de26f34d78f718bdebe1366\": container with ID starting with 94e4c3c16fd38d409470ba03e1ca0144de72eea08de26f34d78f718bdebe1366 not found: ID does not exist" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.208322 4958 scope.go:117] "RemoveContainer" containerID="91ee2ec7c90225037498d378a5d37763b77a5f53b393538fa9f1834f2c8aa897" Oct 08 06:55:51 crc kubenswrapper[4958]: E1008 06:55:51.208584 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91ee2ec7c90225037498d378a5d37763b77a5f53b393538fa9f1834f2c8aa897\": container with ID starting with 91ee2ec7c90225037498d378a5d37763b77a5f53b393538fa9f1834f2c8aa897 not found: ID does not exist" containerID="91ee2ec7c90225037498d378a5d37763b77a5f53b393538fa9f1834f2c8aa897" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.208605 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ee2ec7c90225037498d378a5d37763b77a5f53b393538fa9f1834f2c8aa897"} err="failed to get container status \"91ee2ec7c90225037498d378a5d37763b77a5f53b393538fa9f1834f2c8aa897\": rpc error: code = NotFound desc = could not find container \"91ee2ec7c90225037498d378a5d37763b77a5f53b393538fa9f1834f2c8aa897\": container with ID 
starting with 91ee2ec7c90225037498d378a5d37763b77a5f53b393538fa9f1834f2c8aa897 not found: ID does not exist" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.596148 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.602961 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.633326 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:55:51 crc kubenswrapper[4958]: E1008 06:55:51.633919 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd63ed7-27dd-4fea-aa37-c2d70cc4849e" containerName="dnsmasq-dns" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.633940 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd63ed7-27dd-4fea-aa37-c2d70cc4849e" containerName="dnsmasq-dns" Oct 08 06:55:51 crc kubenswrapper[4958]: E1008 06:55:51.634017 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fcbbb25-135f-4269-86a1-359e0b962438" containerName="nova-manage" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.634030 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fcbbb25-135f-4269-86a1-359e0b962438" containerName="nova-manage" Oct 08 06:55:51 crc kubenswrapper[4958]: E1008 06:55:51.634049 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eec9f7a-946e-405d-85af-8db9b57626e4" containerName="nova-metadata-log" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.634062 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eec9f7a-946e-405d-85af-8db9b57626e4" containerName="nova-metadata-log" Oct 08 06:55:51 crc kubenswrapper[4958]: E1008 06:55:51.634083 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd63ed7-27dd-4fea-aa37-c2d70cc4849e" containerName="init" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.634095 4958 
state_mem.go:107] "Deleted CPUSet assignment" podUID="afd63ed7-27dd-4fea-aa37-c2d70cc4849e" containerName="init" Oct 08 06:55:51 crc kubenswrapper[4958]: E1008 06:55:51.634108 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eec9f7a-946e-405d-85af-8db9b57626e4" containerName="nova-metadata-metadata" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.634120 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eec9f7a-946e-405d-85af-8db9b57626e4" containerName="nova-metadata-metadata" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.634426 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eec9f7a-946e-405d-85af-8db9b57626e4" containerName="nova-metadata-log" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.634454 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fcbbb25-135f-4269-86a1-359e0b962438" containerName="nova-manage" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.634479 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd63ed7-27dd-4fea-aa37-c2d70cc4849e" containerName="dnsmasq-dns" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.634503 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eec9f7a-946e-405d-85af-8db9b57626e4" containerName="nova-metadata-metadata" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.636249 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.641033 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.641321 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.655209 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.704076 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.704149 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90fd75a5-0719-4f7b-9103-d76319815535-logs\") pod \"nova-metadata-0\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.704200 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxwx\" (UniqueName: \"kubernetes.io/projected/90fd75a5-0719-4f7b-9103-d76319815535-kube-api-access-mfxwx\") pod \"nova-metadata-0\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.704228 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-config-data\") pod \"nova-metadata-0\" 
(UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.704273 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.805654 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.805724 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90fd75a5-0719-4f7b-9103-d76319815535-logs\") pod \"nova-metadata-0\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.805797 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfxwx\" (UniqueName: \"kubernetes.io/projected/90fd75a5-0719-4f7b-9103-d76319815535-kube-api-access-mfxwx\") pod \"nova-metadata-0\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.805827 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-config-data\") pod \"nova-metadata-0\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.805883 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.806592 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90fd75a5-0719-4f7b-9103-d76319815535-logs\") pod \"nova-metadata-0\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.810631 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.811050 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-config-data\") pod \"nova-metadata-0\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.811826 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.825670 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfxwx\" (UniqueName: \"kubernetes.io/projected/90fd75a5-0719-4f7b-9103-d76319815535-kube-api-access-mfxwx\") pod \"nova-metadata-0\" 
(UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " pod="openstack/nova-metadata-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.879067 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 06:55:51 crc kubenswrapper[4958]: I1008 06:55:51.955215 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.009173 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnlhv\" (UniqueName: \"kubernetes.io/projected/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-kube-api-access-pnlhv\") pod \"3fad8ac8-1c24-499a-bcfd-70a4b69f976c\" (UID: \"3fad8ac8-1c24-499a-bcfd-70a4b69f976c\") " Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.009427 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-combined-ca-bundle\") pod \"3fad8ac8-1c24-499a-bcfd-70a4b69f976c\" (UID: \"3fad8ac8-1c24-499a-bcfd-70a4b69f976c\") " Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.009608 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-config-data\") pod \"3fad8ac8-1c24-499a-bcfd-70a4b69f976c\" (UID: \"3fad8ac8-1c24-499a-bcfd-70a4b69f976c\") " Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.015200 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-kube-api-access-pnlhv" (OuterVolumeSpecName: "kube-api-access-pnlhv") pod "3fad8ac8-1c24-499a-bcfd-70a4b69f976c" (UID: "3fad8ac8-1c24-499a-bcfd-70a4b69f976c"). InnerVolumeSpecName "kube-api-access-pnlhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.038399 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fad8ac8-1c24-499a-bcfd-70a4b69f976c" (UID: "3fad8ac8-1c24-499a-bcfd-70a4b69f976c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.049337 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-config-data" (OuterVolumeSpecName: "config-data") pod "3fad8ac8-1c24-499a-bcfd-70a4b69f976c" (UID: "3fad8ac8-1c24-499a-bcfd-70a4b69f976c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.112446 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.112475 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.112488 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnlhv\" (UniqueName: \"kubernetes.io/projected/3fad8ac8-1c24-499a-bcfd-70a4b69f976c-kube-api-access-pnlhv\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.194327 4958 generic.go:334] "Generic (PLEG): container finished" podID="3fad8ac8-1c24-499a-bcfd-70a4b69f976c" containerID="8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8" 
exitCode=0 Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.194382 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3fad8ac8-1c24-499a-bcfd-70a4b69f976c","Type":"ContainerDied","Data":"8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8"} Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.194446 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3fad8ac8-1c24-499a-bcfd-70a4b69f976c","Type":"ContainerDied","Data":"e2417cba051747bf580f26402086c437b7f54fa5b221a93a4acf856eee771dc5"} Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.194480 4958 scope.go:117] "RemoveContainer" containerID="8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.194686 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.236214 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.252733 4958 scope.go:117] "RemoveContainer" containerID="8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8" Oct 08 06:55:52 crc kubenswrapper[4958]: E1008 06:55:52.253427 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8\": container with ID starting with 8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8 not found: ID does not exist" containerID="8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.253490 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8"} 
err="failed to get container status \"8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8\": rpc error: code = NotFound desc = could not find container \"8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8\": container with ID starting with 8c977779cc91f8cc36f5b2c12465bb47c658bba3201905999e5a487d89e7fdf8 not found: ID does not exist" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.255534 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.266825 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:55:52 crc kubenswrapper[4958]: E1008 06:55:52.267743 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fad8ac8-1c24-499a-bcfd-70a4b69f976c" containerName="nova-scheduler-scheduler" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.267763 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fad8ac8-1c24-499a-bcfd-70a4b69f976c" containerName="nova-scheduler-scheduler" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.268172 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fad8ac8-1c24-499a-bcfd-70a4b69f976c" containerName="nova-scheduler-scheduler" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.269132 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.274427 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.277325 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.417670 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-config-data\") pod \"nova-scheduler-0\" (UID: \"c22a5d6b-9ca7-4f30-b997-e28ae554a8be\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.417969 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c22a5d6b-9ca7-4f30-b997-e28ae554a8be\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.418028 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84wnk\" (UniqueName: \"kubernetes.io/projected/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-kube-api-access-84wnk\") pod \"nova-scheduler-0\" (UID: \"c22a5d6b-9ca7-4f30-b997-e28ae554a8be\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.520186 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-config-data\") pod \"nova-scheduler-0\" (UID: \"c22a5d6b-9ca7-4f30-b997-e28ae554a8be\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.520231 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c22a5d6b-9ca7-4f30-b997-e28ae554a8be\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.520290 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84wnk\" (UniqueName: \"kubernetes.io/projected/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-kube-api-access-84wnk\") pod \"nova-scheduler-0\" (UID: \"c22a5d6b-9ca7-4f30-b997-e28ae554a8be\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.532670 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c22a5d6b-9ca7-4f30-b997-e28ae554a8be\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.533246 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-config-data\") pod \"nova-scheduler-0\" (UID: \"c22a5d6b-9ca7-4f30-b997-e28ae554a8be\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.556700 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84wnk\" (UniqueName: \"kubernetes.io/projected/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-kube-api-access-84wnk\") pod \"nova-scheduler-0\" (UID: \"c22a5d6b-9ca7-4f30-b997-e28ae554a8be\") " pod="openstack/nova-scheduler-0" Oct 08 06:55:52 crc kubenswrapper[4958]: I1008 06:55:52.592492 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 06:55:53 crc kubenswrapper[4958]: I1008 06:55:53.024016 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:55:53 crc kubenswrapper[4958]: W1008 06:55:53.027330 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90fd75a5_0719_4f7b_9103_d76319815535.slice/crio-3a4580bbed7bf57f153774bedb6f6f8a2a408b99e2cfc35c99ee7f6314b6b499 WatchSource:0}: Error finding container 3a4580bbed7bf57f153774bedb6f6f8a2a408b99e2cfc35c99ee7f6314b6b499: Status 404 returned error can't find the container with id 3a4580bbed7bf57f153774bedb6f6f8a2a408b99e2cfc35c99ee7f6314b6b499 Oct 08 06:55:53 crc kubenswrapper[4958]: I1008 06:55:53.143704 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:55:53 crc kubenswrapper[4958]: W1008 06:55:53.148845 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc22a5d6b_9ca7_4f30_b997_e28ae554a8be.slice/crio-379c8c90f59282b0e00355d6ba42d54a503edf63493282e391eea4dfbf89fa32 WatchSource:0}: Error finding container 379c8c90f59282b0e00355d6ba42d54a503edf63493282e391eea4dfbf89fa32: Status 404 returned error can't find the container with id 379c8c90f59282b0e00355d6ba42d54a503edf63493282e391eea4dfbf89fa32 Oct 08 06:55:53 crc kubenswrapper[4958]: I1008 06:55:53.207727 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90fd75a5-0719-4f7b-9103-d76319815535","Type":"ContainerStarted","Data":"3a4580bbed7bf57f153774bedb6f6f8a2a408b99e2cfc35c99ee7f6314b6b499"} Oct 08 06:55:53 crc kubenswrapper[4958]: I1008 06:55:53.210155 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"c22a5d6b-9ca7-4f30-b997-e28ae554a8be","Type":"ContainerStarted","Data":"379c8c90f59282b0e00355d6ba42d54a503edf63493282e391eea4dfbf89fa32"} Oct 08 06:55:53 crc kubenswrapper[4958]: I1008 06:55:53.588487 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fad8ac8-1c24-499a-bcfd-70a4b69f976c" path="/var/lib/kubelet/pods/3fad8ac8-1c24-499a-bcfd-70a4b69f976c/volumes" Oct 08 06:55:53 crc kubenswrapper[4958]: I1008 06:55:53.589217 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eec9f7a-946e-405d-85af-8db9b57626e4" path="/var/lib/kubelet/pods/6eec9f7a-946e-405d-85af-8db9b57626e4/volumes" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.079422 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.149823 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-combined-ca-bundle\") pod \"465f5235-1e60-4998-9703-c595a9c02af8\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.149874 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-internal-tls-certs\") pod \"465f5235-1e60-4998-9703-c595a9c02af8\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.149965 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqh64\" (UniqueName: \"kubernetes.io/projected/465f5235-1e60-4998-9703-c595a9c02af8-kube-api-access-jqh64\") pod \"465f5235-1e60-4998-9703-c595a9c02af8\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.150016 4958 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465f5235-1e60-4998-9703-c595a9c02af8-logs\") pod \"465f5235-1e60-4998-9703-c595a9c02af8\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.150151 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-config-data\") pod \"465f5235-1e60-4998-9703-c595a9c02af8\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.150179 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-public-tls-certs\") pod \"465f5235-1e60-4998-9703-c595a9c02af8\" (UID: \"465f5235-1e60-4998-9703-c595a9c02af8\") " Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.168168 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465f5235-1e60-4998-9703-c595a9c02af8-kube-api-access-jqh64" (OuterVolumeSpecName: "kube-api-access-jqh64") pod "465f5235-1e60-4998-9703-c595a9c02af8" (UID: "465f5235-1e60-4998-9703-c595a9c02af8"). InnerVolumeSpecName "kube-api-access-jqh64". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.168580 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465f5235-1e60-4998-9703-c595a9c02af8-logs" (OuterVolumeSpecName: "logs") pod "465f5235-1e60-4998-9703-c595a9c02af8" (UID: "465f5235-1e60-4998-9703-c595a9c02af8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.237193 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "465f5235-1e60-4998-9703-c595a9c02af8" (UID: "465f5235-1e60-4998-9703-c595a9c02af8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.249483 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "465f5235-1e60-4998-9703-c595a9c02af8" (UID: "465f5235-1e60-4998-9703-c595a9c02af8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.251965 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90fd75a5-0719-4f7b-9103-d76319815535","Type":"ContainerStarted","Data":"411c6d0db58b91660709b05a5f2cc376c27a480550a9418f37b91c7bf4306c89"} Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.252005 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90fd75a5-0719-4f7b-9103-d76319815535","Type":"ContainerStarted","Data":"0416399ef594017585c375dbf306aaebef9247c5c069a91faaad5a921511a654"} Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.252188 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-config-data" (OuterVolumeSpecName: "config-data") pod "465f5235-1e60-4998-9703-c595a9c02af8" (UID: "465f5235-1e60-4998-9703-c595a9c02af8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.253450 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.253469 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.253486 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.253496 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqh64\" (UniqueName: \"kubernetes.io/projected/465f5235-1e60-4998-9703-c595a9c02af8-kube-api-access-jqh64\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.253506 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465f5235-1e60-4998-9703-c595a9c02af8-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.283299 4958 generic.go:334] "Generic (PLEG): container finished" podID="465f5235-1e60-4998-9703-c595a9c02af8" containerID="e7132bec9722e411e6f0be9e0076167a98e44a344f70071ae63577b95a30e4f3" exitCode=0 Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.283407 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"465f5235-1e60-4998-9703-c595a9c02af8","Type":"ContainerDied","Data":"e7132bec9722e411e6f0be9e0076167a98e44a344f70071ae63577b95a30e4f3"} Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 
06:55:54.283437 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"465f5235-1e60-4998-9703-c595a9c02af8","Type":"ContainerDied","Data":"7cae5d56d1146ffb60ab36ca590ed72cac32e33fc909a6940aaeeb73fff482c2"} Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.283455 4958 scope.go:117] "RemoveContainer" containerID="e7132bec9722e411e6f0be9e0076167a98e44a344f70071ae63577b95a30e4f3" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.283635 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.291572 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "465f5235-1e60-4998-9703-c595a9c02af8" (UID: "465f5235-1e60-4998-9703-c595a9c02af8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.304541 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c22a5d6b-9ca7-4f30-b997-e28ae554a8be","Type":"ContainerStarted","Data":"749526d14f06e103c443cfaac61ecb5d09b9fae43de27d241f3f8c1e61eb626a"} Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.307754 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.307734893 podStartE2EDuration="3.307734893s" podCreationTimestamp="2025-10-08 06:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:55:54.284539057 +0000 UTC m=+1297.414231658" watchObservedRunningTime="2025-10-08 06:55:54.307734893 +0000 UTC m=+1297.437427484" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.327216 4958 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.327198338 podStartE2EDuration="2.327198338s" podCreationTimestamp="2025-10-08 06:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:55:54.319801089 +0000 UTC m=+1297.449493690" watchObservedRunningTime="2025-10-08 06:55:54.327198338 +0000 UTC m=+1297.456890939" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.330308 4958 scope.go:117] "RemoveContainer" containerID="d2ce526fbd9afa15fc46b3d680d00426129d8118101945212dd591030ca77e3a" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.345328 4958 scope.go:117] "RemoveContainer" containerID="e7132bec9722e411e6f0be9e0076167a98e44a344f70071ae63577b95a30e4f3" Oct 08 06:55:54 crc kubenswrapper[4958]: E1008 06:55:54.345702 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7132bec9722e411e6f0be9e0076167a98e44a344f70071ae63577b95a30e4f3\": container with ID starting with e7132bec9722e411e6f0be9e0076167a98e44a344f70071ae63577b95a30e4f3 not found: ID does not exist" containerID="e7132bec9722e411e6f0be9e0076167a98e44a344f70071ae63577b95a30e4f3" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.345727 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7132bec9722e411e6f0be9e0076167a98e44a344f70071ae63577b95a30e4f3"} err="failed to get container status \"e7132bec9722e411e6f0be9e0076167a98e44a344f70071ae63577b95a30e4f3\": rpc error: code = NotFound desc = could not find container \"e7132bec9722e411e6f0be9e0076167a98e44a344f70071ae63577b95a30e4f3\": container with ID starting with e7132bec9722e411e6f0be9e0076167a98e44a344f70071ae63577b95a30e4f3 not found: ID does not exist" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.345746 4958 scope.go:117] 
"RemoveContainer" containerID="d2ce526fbd9afa15fc46b3d680d00426129d8118101945212dd591030ca77e3a" Oct 08 06:55:54 crc kubenswrapper[4958]: E1008 06:55:54.345975 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2ce526fbd9afa15fc46b3d680d00426129d8118101945212dd591030ca77e3a\": container with ID starting with d2ce526fbd9afa15fc46b3d680d00426129d8118101945212dd591030ca77e3a not found: ID does not exist" containerID="d2ce526fbd9afa15fc46b3d680d00426129d8118101945212dd591030ca77e3a" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.345997 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2ce526fbd9afa15fc46b3d680d00426129d8118101945212dd591030ca77e3a"} err="failed to get container status \"d2ce526fbd9afa15fc46b3d680d00426129d8118101945212dd591030ca77e3a\": rpc error: code = NotFound desc = could not find container \"d2ce526fbd9afa15fc46b3d680d00426129d8118101945212dd591030ca77e3a\": container with ID starting with d2ce526fbd9afa15fc46b3d680d00426129d8118101945212dd591030ca77e3a not found: ID does not exist" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.355171 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/465f5235-1e60-4998-9703-c595a9c02af8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.632258 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.640531 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.678850 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 08 06:55:54 crc kubenswrapper[4958]: E1008 06:55:54.679499 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="465f5235-1e60-4998-9703-c595a9c02af8" containerName="nova-api-log" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.679531 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="465f5235-1e60-4998-9703-c595a9c02af8" containerName="nova-api-log" Oct 08 06:55:54 crc kubenswrapper[4958]: E1008 06:55:54.679586 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465f5235-1e60-4998-9703-c595a9c02af8" containerName="nova-api-api" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.679598 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="465f5235-1e60-4998-9703-c595a9c02af8" containerName="nova-api-api" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.679944 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="465f5235-1e60-4998-9703-c595a9c02af8" containerName="nova-api-log" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.680017 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="465f5235-1e60-4998-9703-c595a9c02af8" containerName="nova-api-api" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.681695 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.687447 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.687762 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.687990 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.694709 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.765410 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.765662 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-public-tls-certs\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.765877 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-config-data\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.765920 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/65112aa3-67bb-47a4-bc56-241ce61eff7b-logs\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.766035 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.766205 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bt5d\" (UniqueName: \"kubernetes.io/projected/65112aa3-67bb-47a4-bc56-241ce61eff7b-kube-api-access-6bt5d\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.868421 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bt5d\" (UniqueName: \"kubernetes.io/projected/65112aa3-67bb-47a4-bc56-241ce61eff7b-kube-api-access-6bt5d\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.868740 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.868807 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-public-tls-certs\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " 
pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.868872 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-config-data\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.868893 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65112aa3-67bb-47a4-bc56-241ce61eff7b-logs\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.868927 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.870543 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65112aa3-67bb-47a4-bc56-241ce61eff7b-logs\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.874925 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-config-data\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.875936 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.876333 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.884791 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:54 crc kubenswrapper[4958]: I1008 06:55:54.899828 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bt5d\" (UniqueName: \"kubernetes.io/projected/65112aa3-67bb-47a4-bc56-241ce61eff7b-kube-api-access-6bt5d\") pod \"nova-api-0\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " pod="openstack/nova-api-0" Oct 08 06:55:55 crc kubenswrapper[4958]: I1008 06:55:55.007523 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 06:55:55 crc kubenswrapper[4958]: I1008 06:55:55.543411 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:55:55 crc kubenswrapper[4958]: W1008 06:55:55.558355 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65112aa3_67bb_47a4_bc56_241ce61eff7b.slice/crio-c32f55c6ea04dd844cd87c1704e08913cb980fd50824fc16a2fe3e958f374b16 WatchSource:0}: Error finding container c32f55c6ea04dd844cd87c1704e08913cb980fd50824fc16a2fe3e958f374b16: Status 404 returned error can't find the container with id c32f55c6ea04dd844cd87c1704e08913cb980fd50824fc16a2fe3e958f374b16 Oct 08 06:55:55 crc kubenswrapper[4958]: I1008 06:55:55.597705 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="465f5235-1e60-4998-9703-c595a9c02af8" path="/var/lib/kubelet/pods/465f5235-1e60-4998-9703-c595a9c02af8/volumes" Oct 08 06:55:56 crc kubenswrapper[4958]: I1008 06:55:56.334206 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65112aa3-67bb-47a4-bc56-241ce61eff7b","Type":"ContainerStarted","Data":"f3502b89ff3b2147ee78b29363381741d58de95e5a89ce1f3ff1cff5bce4545a"} Oct 08 06:55:56 crc kubenswrapper[4958]: I1008 06:55:56.335115 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65112aa3-67bb-47a4-bc56-241ce61eff7b","Type":"ContainerStarted","Data":"3847c1dff68faa7fdc2dc75dc40fe444ac629a5d89701750bf8d511d58fa9aba"} Oct 08 06:55:56 crc kubenswrapper[4958]: I1008 06:55:56.335147 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65112aa3-67bb-47a4-bc56-241ce61eff7b","Type":"ContainerStarted","Data":"c32f55c6ea04dd844cd87c1704e08913cb980fd50824fc16a2fe3e958f374b16"} Oct 08 06:55:56 crc kubenswrapper[4958]: I1008 06:55:56.366386 4958 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.366359712 podStartE2EDuration="2.366359712s" podCreationTimestamp="2025-10-08 06:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:55:56.360575625 +0000 UTC m=+1299.490268296" watchObservedRunningTime="2025-10-08 06:55:56.366359712 +0000 UTC m=+1299.496052343" Oct 08 06:55:56 crc kubenswrapper[4958]: I1008 06:55:56.956572 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 06:55:56 crc kubenswrapper[4958]: I1008 06:55:56.958050 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 06:55:57 crc kubenswrapper[4958]: I1008 06:55:57.595827 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 08 06:56:01 crc kubenswrapper[4958]: I1008 06:56:01.956131 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 08 06:56:01 crc kubenswrapper[4958]: I1008 06:56:01.957425 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 08 06:56:02 crc kubenswrapper[4958]: I1008 06:56:02.593566 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 08 06:56:02 crc kubenswrapper[4958]: I1008 06:56:02.642193 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 08 06:56:03 crc kubenswrapper[4958]: I1008 06:56:03.007318 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="90fd75a5-0719-4f7b-9103-d76319815535" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 
06:56:03 crc kubenswrapper[4958]: I1008 06:56:03.007335 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="90fd75a5-0719-4f7b-9103-d76319815535" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 06:56:03 crc kubenswrapper[4958]: I1008 06:56:03.473474 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 08 06:56:05 crc kubenswrapper[4958]: I1008 06:56:05.007801 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 06:56:05 crc kubenswrapper[4958]: I1008 06:56:05.008339 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 06:56:06 crc kubenswrapper[4958]: I1008 06:56:06.024139 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="65112aa3-67bb-47a4-bc56-241ce61eff7b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 06:56:06 crc kubenswrapper[4958]: I1008 06:56:06.024140 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="65112aa3-67bb-47a4-bc56-241ce61eff7b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 06:56:06 crc kubenswrapper[4958]: I1008 06:56:06.396087 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 08 06:56:11 crc kubenswrapper[4958]: I1008 06:56:11.963332 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 08 06:56:11 crc kubenswrapper[4958]: I1008 06:56:11.968193 4958 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 08 06:56:11 crc kubenswrapper[4958]: I1008 06:56:11.973706 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 08 06:56:12 crc kubenswrapper[4958]: I1008 06:56:12.546126 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 08 06:56:15 crc kubenswrapper[4958]: I1008 06:56:15.017144 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 06:56:15 crc kubenswrapper[4958]: I1008 06:56:15.018051 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 06:56:15 crc kubenswrapper[4958]: I1008 06:56:15.021982 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 06:56:15 crc kubenswrapper[4958]: I1008 06:56:15.032087 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 06:56:15 crc kubenswrapper[4958]: I1008 06:56:15.571698 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 06:56:15 crc kubenswrapper[4958]: I1008 06:56:15.597451 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.204008 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.204749 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="4b9c2255-61f9-4319-93b1-138600df6985" containerName="openstackclient" containerID="cri-o://bcbeedc0e126fa86f86f6a65d8d01563364649a3df6ebc1fda56bb28dd49d272" gracePeriod=2 Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.219866 4958 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.248319 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-d4d8d7b54-5xwrj"] Oct 08 06:56:37 crc kubenswrapper[4958]: E1008 06:56:37.248689 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b9c2255-61f9-4319-93b1-138600df6985" containerName="openstackclient" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.248705 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9c2255-61f9-4319-93b1-138600df6985" containerName="openstackclient" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.248897 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b9c2255-61f9-4319-93b1-138600df6985" containerName="openstackclient" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.250013 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.274629 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6696f545c5-2j7vj"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.276270 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.296482 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-d4d8d7b54-5xwrj"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.324035 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6dbff5b58b-dsj98"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.325537 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.327875 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-config-data\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.328076 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-internal-tls-certs\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.328267 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-public-tls-certs\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.328352 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jgz8\" (UniqueName: \"kubernetes.io/projected/1792ed77-ae79-44a5-80a3-7a67a8031d75-kube-api-access-6jgz8\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.328370 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-config-data-custom\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.328384 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1792ed77-ae79-44a5-80a3-7a67a8031d75-logs\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.328478 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-combined-ca-bundle\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.362671 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6696f545c5-2j7vj"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.398187 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432029 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-logs\") pod \"barbican-worker-6696f545c5-2j7vj\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432080 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-config-data-custom\") pod \"barbican-worker-6696f545c5-2j7vj\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432103 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-config-data\") pod \"barbican-keystone-listener-d4d8d7b54-5xwrj\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432116 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-config-data\") pod \"barbican-worker-6696f545c5-2j7vj\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432144 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl8gs\" (UniqueName: \"kubernetes.io/projected/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-kube-api-access-cl8gs\") pod \"barbican-worker-6696f545c5-2j7vj\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432164 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-public-tls-certs\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432179 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c32c16-960f-4a65-abd5-f435d16932f0-logs\") pod \"barbican-keystone-listener-d4d8d7b54-5xwrj\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432204 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs8sv\" (UniqueName: \"kubernetes.io/projected/b6c32c16-960f-4a65-abd5-f435d16932f0-kube-api-access-gs8sv\") pod \"barbican-keystone-listener-d4d8d7b54-5xwrj\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432226 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-config-data-custom\") pod \"barbican-keystone-listener-d4d8d7b54-5xwrj\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432248 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jgz8\" (UniqueName: \"kubernetes.io/projected/1792ed77-ae79-44a5-80a3-7a67a8031d75-kube-api-access-6jgz8\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432283 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-config-data-custom\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " 
pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432302 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1792ed77-ae79-44a5-80a3-7a67a8031d75-logs\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432341 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-combined-ca-bundle\") pod \"barbican-worker-6696f545c5-2j7vj\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432361 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-combined-ca-bundle\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432394 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-config-data\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432435 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-combined-ca-bundle\") pod \"barbican-keystone-listener-d4d8d7b54-5xwrj\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " 
pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.432458 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-internal-tls-certs\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.437179 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1792ed77-ae79-44a5-80a3-7a67a8031d75-logs\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.446032 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-combined-ca-bundle\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.452437 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-internal-tls-certs\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.455075 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-config-data-custom\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 
crc kubenswrapper[4958]: I1008 06:56:37.468708 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dbff5b58b-dsj98"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.483118 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-config-data\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.483469 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-public-tls-certs\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.486158 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jgz8\" (UniqueName: \"kubernetes.io/projected/1792ed77-ae79-44a5-80a3-7a67a8031d75-kube-api-access-6jgz8\") pod \"barbican-api-6dbff5b58b-dsj98\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.535701 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cindere33b-account-delete-6brfz"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.537232 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-combined-ca-bundle\") pod \"barbican-keystone-listener-d4d8d7b54-5xwrj\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.537351 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-logs\") pod \"barbican-worker-6696f545c5-2j7vj\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.537399 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-config-data-custom\") pod \"barbican-worker-6696f545c5-2j7vj\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.540400 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-combined-ca-bundle\") pod \"barbican-keystone-listener-d4d8d7b54-5xwrj\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.537444 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-config-data\") pod \"barbican-keystone-listener-d4d8d7b54-5xwrj\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.540478 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-config-data\") pod \"barbican-worker-6696f545c5-2j7vj\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.540542 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cl8gs\" (UniqueName: \"kubernetes.io/projected/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-kube-api-access-cl8gs\") pod \"barbican-worker-6696f545c5-2j7vj\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.540562 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c32c16-960f-4a65-abd5-f435d16932f0-logs\") pod \"barbican-keystone-listener-d4d8d7b54-5xwrj\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.540618 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs8sv\" (UniqueName: \"kubernetes.io/projected/b6c32c16-960f-4a65-abd5-f435d16932f0-kube-api-access-gs8sv\") pod \"barbican-keystone-listener-d4d8d7b54-5xwrj\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.540659 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-config-data-custom\") pod \"barbican-keystone-listener-d4d8d7b54-5xwrj\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.540811 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-combined-ca-bundle\") pod \"barbican-worker-6696f545c5-2j7vj\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc 
kubenswrapper[4958]: I1008 06:56:37.543695 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c32c16-960f-4a65-abd5-f435d16932f0-logs\") pod \"barbican-keystone-listener-d4d8d7b54-5xwrj\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.545054 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-logs\") pod \"barbican-worker-6696f545c5-2j7vj\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.545477 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cindere33b-account-delete-6brfz" Oct 08 06:56:37 crc kubenswrapper[4958]: E1008 06:56:37.545819 4958 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 08 06:56:37 crc kubenswrapper[4958]: E1008 06:56:37.545866 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data podName:8f931d71-9f8f-4755-a793-ca326e423199 nodeName:}" failed. No retries permitted until 2025-10-08 06:56:38.045851527 +0000 UTC m=+1341.175544128 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data") pod "rabbitmq-server-0" (UID: "8f931d71-9f8f-4755-a793-ca326e423199") : configmap "rabbitmq-config-data" not found Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.548612 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-config-data\") pod \"barbican-worker-6696f545c5-2j7vj\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.563432 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-config-data-custom\") pod \"barbican-worker-6696f545c5-2j7vj\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.565498 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cindere33b-account-delete-6brfz"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.570138 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs8sv\" (UniqueName: \"kubernetes.io/projected/b6c32c16-960f-4a65-abd5-f435d16932f0-kube-api-access-gs8sv\") pod \"barbican-keystone-listener-d4d8d7b54-5xwrj\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.573548 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl8gs\" (UniqueName: \"kubernetes.io/projected/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-kube-api-access-cl8gs\") pod \"barbican-worker-6696f545c5-2j7vj\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " 
pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.574587 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-p4rkp"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.584685 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-combined-ca-bundle\") pod \"barbican-worker-6696f545c5-2j7vj\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.587673 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-config-data-custom\") pod \"barbican-keystone-listener-d4d8d7b54-5xwrj\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.617190 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.620026 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-config-data\") pod \"barbican-keystone-listener-d4d8d7b54-5xwrj\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.637319 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-tvw22"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.637354 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-spvp9"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.637632 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-tvw22" podUID="5971bc9c-45ee-4ccb-aef5-290f51ac13ba" containerName="openstack-network-exporter" containerID="cri-o://f61ab36dcee33b75ccc3971394dfde5a04ad577077ba1fea21e678472718c630" gracePeriod=30 Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.677821 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jgk4\" (UniqueName: \"kubernetes.io/projected/50505fb3-8aa5-43de-a8f1-617501e46822-kube-api-access-2jgk4\") pod \"cindere33b-account-delete-6brfz\" (UID: \"50505fb3-8aa5-43de-a8f1-617501e46822\") " pod="openstack/cindere33b-account-delete-6brfz" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.685097 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placementcbd1-account-delete-sbqzp"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.686436 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placementcbd1-account-delete-sbqzp" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.688274 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.712845 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placementcbd1-account-delete-sbqzp"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.786066 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jgk4\" (UniqueName: \"kubernetes.io/projected/50505fb3-8aa5-43de-a8f1-617501e46822-kube-api-access-2jgk4\") pod \"cindere33b-account-delete-6brfz\" (UID: \"50505fb3-8aa5-43de-a8f1-617501e46822\") " pod="openstack/cindere33b-account-delete-6brfz" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.814159 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jgk4\" (UniqueName: \"kubernetes.io/projected/50505fb3-8aa5-43de-a8f1-617501e46822-kube-api-access-2jgk4\") pod \"cindere33b-account-delete-6brfz\" (UID: \"50505fb3-8aa5-43de-a8f1-617501e46822\") " pod="openstack/cindere33b-account-delete-6brfz" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.823080 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-rd4bk"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.831565 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-rd4bk"] Oct 08 06:56:37 crc kubenswrapper[4958]: E1008 06:56:37.843696 4958 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-spvp9" message=< Oct 08 06:56:37 crc kubenswrapper[4958]: Exiting ovn-controller (1) [ OK ] Oct 08 
06:56:37 crc kubenswrapper[4958]: > Oct 08 06:56:37 crc kubenswrapper[4958]: E1008 06:56:37.843733 4958 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-spvp9" podUID="c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7" containerName="ovn-controller" containerID="cri-o://b2fa4137f0b797f8d0358b35176b6e4fe70f8f968a53ff5dc07e48e43e721d8a" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.843764 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-spvp9" podUID="c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7" containerName="ovn-controller" containerID="cri-o://b2fa4137f0b797f8d0358b35176b6e4fe70f8f968a53ff5dc07e48e43e721d8a" gracePeriod=30 Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.846644 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cindere33b-account-delete-6brfz" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.849914 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.850531 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="fe1279cb-5369-4347-9fc9-d598103536a9" containerName="ovn-northd" containerID="cri-o://1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b" gracePeriod=30 Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.850792 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="fe1279cb-5369-4347-9fc9-d598103536a9" containerName="openstack-network-exporter" containerID="cri-o://758f90af9edd2932102a3b7f9d7f4706f38591020b8ef2fec8d63a600cbdb538" gracePeriod=30 Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.866619 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-tvw22_5971bc9c-45ee-4ccb-aef5-290f51ac13ba/openstack-network-exporter/0.log" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.867441 4958 generic.go:334] "Generic (PLEG): container finished" podID="5971bc9c-45ee-4ccb-aef5-290f51ac13ba" containerID="f61ab36dcee33b75ccc3971394dfde5a04ad577077ba1fea21e678472718c630" exitCode=2 Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.867589 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tvw22" event={"ID":"5971bc9c-45ee-4ccb-aef5-290f51ac13ba","Type":"ContainerDied","Data":"f61ab36dcee33b75ccc3971394dfde5a04ad577077ba1fea21e678472718c630"} Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.868635 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:56:37 crc kubenswrapper[4958]: I1008 06:56:37.895157 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llfqz\" (UniqueName: \"kubernetes.io/projected/fab49352-e790-47df-a1c8-f1b74e2a0134-kube-api-access-llfqz\") pod \"placementcbd1-account-delete-sbqzp\" (UID: \"fab49352-e790-47df-a1c8-f1b74e2a0134\") " pod="openstack/placementcbd1-account-delete-sbqzp" Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.002208 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llfqz\" (UniqueName: \"kubernetes.io/projected/fab49352-e790-47df-a1c8-f1b74e2a0134-kube-api-access-llfqz\") pod \"placementcbd1-account-delete-sbqzp\" (UID: \"fab49352-e790-47df-a1c8-f1b74e2a0134\") " pod="openstack/placementcbd1-account-delete-sbqzp" Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.014030 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican0295-account-delete-dzz74"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.016309 4958 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican0295-account-delete-dzz74" Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.067438 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llfqz\" (UniqueName: \"kubernetes.io/projected/fab49352-e790-47df-a1c8-f1b74e2a0134-kube-api-access-llfqz\") pod \"placementcbd1-account-delete-sbqzp\" (UID: \"fab49352-e790-47df-a1c8-f1b74e2a0134\") " pod="openstack/placementcbd1-account-delete-sbqzp" Oct 08 06:56:38 crc kubenswrapper[4958]: E1008 06:56:38.122226 4958 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 08 06:56:38 crc kubenswrapper[4958]: E1008 06:56:38.122298 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data podName:8f931d71-9f8f-4755-a793-ca326e423199 nodeName:}" failed. No retries permitted until 2025-10-08 06:56:39.122282127 +0000 UTC m=+1342.251974718 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data") pod "rabbitmq-server-0" (UID: "8f931d71-9f8f-4755-a793-ca326e423199") : configmap "rabbitmq-config-data" not found Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.129438 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican0295-account-delete-dzz74"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.209778 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-plzfw"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.253661 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placementcbd1-account-delete-sbqzp" Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.265139 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8pjd\" (UniqueName: \"kubernetes.io/projected/b562d3e4-572e-48d5-9257-4927ef68e988-kube-api-access-h8pjd\") pod \"barbican0295-account-delete-dzz74\" (UID: \"b562d3e4-572e-48d5-9257-4927ef68e988\") " pod="openstack/barbican0295-account-delete-dzz74" Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.376690 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-plzfw"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.378333 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8pjd\" (UniqueName: \"kubernetes.io/projected/b562d3e4-572e-48d5-9257-4927ef68e988-kube-api-access-h8pjd\") pod \"barbican0295-account-delete-dzz74\" (UID: \"b562d3e4-572e-48d5-9257-4927ef68e988\") " pod="openstack/barbican0295-account-delete-dzz74" Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.413662 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.414197 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="f285e309-c3e6-42ce-9f95-8302079cfd71" containerName="openstack-network-exporter" containerID="cri-o://1a5b26a966d67867d27cb570fdf7f9fa9f4464f8d556c23369640c4497114693" gracePeriod=300 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.425467 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8pjd\" (UniqueName: \"kubernetes.io/projected/b562d3e4-572e-48d5-9257-4927ef68e988-kube-api-access-h8pjd\") pod \"barbican0295-account-delete-dzz74\" (UID: \"b562d3e4-572e-48d5-9257-4927ef68e988\") " pod="openstack/barbican0295-account-delete-dzz74" 
Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.435625 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.437017 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican0295-account-delete-dzz74" Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.467142 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-5mtl4"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.467386 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" podUID="b3e3061c-ca6e-43c6-ba1d-2520f28142c6" containerName="dnsmasq-dns" containerID="cri-o://58af15037162a3e008ea15295768d765f391483c1890551a881a9bfccea4d905" gracePeriod=10 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.521969 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapic226-account-delete-g5pmv"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.526756 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapic226-account-delete-g5pmv" Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.554291 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapic226-account-delete-g5pmv"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.580002 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-pjrq5"] Oct 08 06:56:38 crc kubenswrapper[4958]: E1008 06:56:38.585276 4958 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 08 06:56:38 crc kubenswrapper[4958]: E1008 06:56:38.585343 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data podName:442b1534-27bc-4d6d-be46-1ea5689c290f nodeName:}" failed. No retries permitted until 2025-10-08 06:56:39.085323379 +0000 UTC m=+1342.215015980 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data") pod "rabbitmq-cell1-server-0" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f") : configmap "rabbitmq-cell1-config-data" not found Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.604494 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-pjrq5"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.650114 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.650707 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="6269a952-e10d-442f-8d9f-135e16244e83" containerName="openstack-network-exporter" containerID="cri-o://81396f72e8d2e4a1c6da156a645037fa2138f319f380e9ba53045e95eba2e4bd" gracePeriod=300 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.669007 4958 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell1fc4a-account-delete-nlcgq"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.670218 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell1fc4a-account-delete-nlcgq" Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.695507 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="f285e309-c3e6-42ce-9f95-8302079cfd71" containerName="ovsdbserver-sb" containerID="cri-o://50a78d2d43555c7f649d9e67e4454561ca2477641c1beb7bd8e5c77a400198a9" gracePeriod=300 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.697917 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcmnc\" (UniqueName: \"kubernetes.io/projected/c32b7167-96c2-4cf2-b330-54562c181940-kube-api-access-pcmnc\") pod \"novaapic226-account-delete-g5pmv\" (UID: \"c32b7167-96c2-4cf2-b330-54562c181940\") " pod="openstack/novaapic226-account-delete-g5pmv" Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.734637 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell1fc4a-account-delete-nlcgq"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.776801 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-mcqsd"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.780406 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="6269a952-e10d-442f-8d9f-135e16244e83" containerName="ovsdbserver-nb" containerID="cri-o://fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3" gracePeriod=300 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.788184 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-mcqsd"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.805223 4958 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.805635 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="23d77fbe-4e70-428d-92b3-926fb7f5547e" containerName="cinder-scheduler" containerID="cri-o://c02be257fd4e27e118cc5b47243544c38b48ad8ee8c4597abe7972db9dcf30bc" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.806132 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="23d77fbe-4e70-428d-92b3-926fb7f5547e" containerName="probe" containerID="cri-o://d0cab1c319b0805fcf376df64f2c0efa43e3a49ab93decdf99cdc592d551d40b" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.807431 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dk22\" (UniqueName: \"kubernetes.io/projected/655026ad-69aa-4867-8fc8-165d6e801ad0-kube-api-access-6dk22\") pod \"novacell1fc4a-account-delete-nlcgq\" (UID: \"655026ad-69aa-4867-8fc8-165d6e801ad0\") " pod="openstack/novacell1fc4a-account-delete-nlcgq" Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.807614 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcmnc\" (UniqueName: \"kubernetes.io/projected/c32b7167-96c2-4cf2-b330-54562c181940-kube-api-access-pcmnc\") pod \"novaapic226-account-delete-g5pmv\" (UID: \"c32b7167-96c2-4cf2-b330-54562c181940\") " pod="openstack/novaapic226-account-delete-g5pmv" Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.819226 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-h7mqh"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.827573 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-h7mqh"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.843029 4958 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.843299 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b2ac8efb-5e1d-4b4c-beba-7d287a699044" containerName="cinder-api-log" containerID="cri-o://1ad480ebf1447ef4dce8f921ab07bf404169d88c3b6f87c0b44604976474fd46" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.843881 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b2ac8efb-5e1d-4b4c-beba-7d287a699044" containerName="cinder-api" containerID="cri-o://ab70053ebb2c4509de5852c1600f0da9043c9b356dde235137ef97c1e51aebf6" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.871293 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-687454697b-jdsn4"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.871600 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-687454697b-jdsn4" podUID="70e48003-108c-4de3-be7e-81946556e25e" containerName="placement-log" containerID="cri-o://d326bcdc417c5ccf5500a143fe2cd2f36595d7ee749c8b67463e12eac441d722" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.872018 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-687454697b-jdsn4" podUID="70e48003-108c-4de3-be7e-81946556e25e" containerName="placement-api" containerID="cri-o://5b4a59a5f7991c7aec4f2d6b2e9211ada14c55e3e910aa8c4dafe5d00668f2bc" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.886167 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcmnc\" (UniqueName: \"kubernetes.io/projected/c32b7167-96c2-4cf2-b330-54562c181940-kube-api-access-pcmnc\") pod \"novaapic226-account-delete-g5pmv\" (UID: \"c32b7167-96c2-4cf2-b330-54562c181940\") 
" pod="openstack/novaapic226-account-delete-g5pmv" Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.886526 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.887004 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-server" containerID="cri-o://240aa008ea9290a60c7264968b55d9ba55c52133e52044e5dfbe81d67bf1b1b8" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.887127 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="container-updater" containerID="cri-o://ef947aa4d03942a34c6bc85ef6fb4b0f2207baa004e3f6b80e29968a08b1d8cb" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.887146 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-server" containerID="cri-o://724ed394bbd6e962a7c26ddc942e11038dcd01c833577f0f40fabd8ae1c82655" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.887173 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="container-auditor" containerID="cri-o://f6755a074a4c40620709a87ac6599a018ae64733e179e086ef57f6d9ce9dc4d0" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.887169 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-expirer" containerID="cri-o://822bb196cd354e0752c30e33f034860b8aa7c4cf0eef390c7e6067b9260d966e" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 
06:56:38.887210 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="container-replicator" containerID="cri-o://66d5f937dbd1abd5a08b1b2ddf3cc40f3eb8fc2793d2eff9531136500ae61e84" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.887228 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-updater" containerID="cri-o://21df3155fa52a2087045a5927cde073b2ca8d2f30fa88ba4c74a281b2f3fb7bc" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.887245 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="container-server" containerID="cri-o://0a5a60f42124a354930e3b3306d5ebb23d9d56694f6a482b36b814b7a29405e6" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.887263 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-auditor" containerID="cri-o://b608a4dbe063d74c26342e7c42bce09e30df95c744884718ee615000db2bfb36" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.887277 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-reaper" containerID="cri-o://c13727bb3092a9324ff420e4b08230ab8156bbd4c580c0f6905a950356a2b45b" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.887299 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-replicator" 
containerID="cri-o://86bfed71cefd8eec097df2c1d8e6ba77ccea56ba62521cec70c88418d4a40639" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.887309 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-auditor" containerID="cri-o://f0de83255ba3108743dca08b70ec1668b51eb795dc13a54cf5cbf4041775bd0b" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.887366 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-replicator" containerID="cri-o://4afa360de4cdd22485317e27cb843f86474d888319ad6c43a75e14506d8d3331" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.887389 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="swift-recon-cron" containerID="cri-o://1fcb4d69732f8668a00f6659ff61c3c3e89fb140ae20ef55148d18dda7b7d854" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.887427 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="rsync" containerID="cri-o://96600bb67add0f430f3d3f7bc50bb148f53a95e2d96aa280884db2079910fcc5" gracePeriod=30 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.899610 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-42fpn"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.910000 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-42fpn"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.911110 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dk22\" 
(UniqueName: \"kubernetes.io/projected/655026ad-69aa-4867-8fc8-165d6e801ad0-kube-api-access-6dk22\") pod \"novacell1fc4a-account-delete-nlcgq\" (UID: \"655026ad-69aa-4867-8fc8-165d6e801ad0\") " pod="openstack/novacell1fc4a-account-delete-nlcgq" Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.916655 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-th7cg"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.932294 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-th7cg"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.950350 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapic226-account-delete-g5pmv" Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.967501 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-qg756"] Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.967823 4958 generic.go:334] "Generic (PLEG): container finished" podID="f285e309-c3e6-42ce-9f95-8302079cfd71" containerID="1a5b26a966d67867d27cb570fdf7f9fa9f4464f8d556c23369640c4497114693" exitCode=2 Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.967867 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f285e309-c3e6-42ce-9f95-8302079cfd71","Type":"ContainerDied","Data":"1a5b26a966d67867d27cb570fdf7f9fa9f4464f8d556c23369640c4497114693"} Oct 08 06:56:38 crc kubenswrapper[4958]: I1008 06:56:38.969458 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dk22\" (UniqueName: \"kubernetes.io/projected/655026ad-69aa-4867-8fc8-165d6e801ad0-kube-api-access-6dk22\") pod \"novacell1fc4a-account-delete-nlcgq\" (UID: \"655026ad-69aa-4867-8fc8-165d6e801ad0\") " pod="openstack/novacell1fc4a-account-delete-nlcgq" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.011282 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-qg756"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.018706 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell1fc4a-account-delete-nlcgq" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.022601 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-tvw22_5971bc9c-45ee-4ccb-aef5-290f51ac13ba/openstack-network-exporter/0.log" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.022684 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tvw22" event={"ID":"5971bc9c-45ee-4ccb-aef5-290f51ac13ba","Type":"ContainerDied","Data":"fa9209f4d60550647f36c991017dd4905424536502aed8900172c404fe779aa3"} Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.022709 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa9209f4d60550647f36c991017dd4905424536502aed8900172c404fe779aa3" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.054822 4958 generic.go:334] "Generic (PLEG): container finished" podID="c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7" containerID="b2fa4137f0b797f8d0358b35176b6e4fe70f8f968a53ff5dc07e48e43e721d8a" exitCode=0 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.055158 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-spvp9" event={"ID":"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7","Type":"ContainerDied","Data":"b2fa4137f0b797f8d0358b35176b6e4fe70f8f968a53ff5dc07e48e43e721d8a"} Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.060350 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-tvw22_5971bc9c-45ee-4ccb-aef5-290f51ac13ba/openstack-network-exporter/0.log" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.060427 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.080731 4958 generic.go:334] "Generic (PLEG): container finished" podID="b3e3061c-ca6e-43c6-ba1d-2520f28142c6" containerID="58af15037162a3e008ea15295768d765f391483c1890551a881a9bfccea4d905" exitCode=0 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.080886 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" event={"ID":"b3e3061c-ca6e-43c6-ba1d-2520f28142c6","Type":"ContainerDied","Data":"58af15037162a3e008ea15295768d765f391483c1890551a881a9bfccea4d905"} Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.111782 4958 generic.go:334] "Generic (PLEG): container finished" podID="fe1279cb-5369-4347-9fc9-d598103536a9" containerID="758f90af9edd2932102a3b7f9d7f4706f38591020b8ef2fec8d63a600cbdb538" exitCode=2 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.111893 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fe1279cb-5369-4347-9fc9-d598103536a9","Type":"ContainerDied","Data":"758f90af9edd2932102a3b7f9d7f4706f38591020b8ef2fec8d63a600cbdb538"} Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.112243 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-spvp9" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.118012 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6269a952-e10d-442f-8d9f-135e16244e83/ovsdbserver-nb/0.log" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.118645 4958 generic.go:334] "Generic (PLEG): container finished" podID="6269a952-e10d-442f-8d9f-135e16244e83" containerID="81396f72e8d2e4a1c6da156a645037fa2138f319f380e9ba53045e95eba2e4bd" exitCode=2 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.118704 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6269a952-e10d-442f-8d9f-135e16244e83","Type":"ContainerDied","Data":"81396f72e8d2e4a1c6da156a645037fa2138f319f380e9ba53045e95eba2e4bd"} Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.119394 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-combined-ca-bundle\") pod \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.119722 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-ovn-rundir\") pod \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.121468 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "5971bc9c-45ee-4ccb-aef5-290f51ac13ba" (UID: "5971bc9c-45ee-4ccb-aef5-290f51ac13ba"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.122136 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-ovs-rundir\") pod \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.122203 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x7qp\" (UniqueName: \"kubernetes.io/projected/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-kube-api-access-2x7qp\") pod \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.123106 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-config\") pod \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.123157 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-metrics-certs-tls-certs\") pod \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.126610 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-config" (OuterVolumeSpecName: "config") pod "5971bc9c-45ee-4ccb-aef5-290f51ac13ba" (UID: "5971bc9c-45ee-4ccb-aef5-290f51ac13ba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.126870 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "5971bc9c-45ee-4ccb-aef5-290f51ac13ba" (UID: "5971bc9c-45ee-4ccb-aef5-290f51ac13ba"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.131937 4958 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.132047 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data podName:8f931d71-9f8f-4755-a793-ca326e423199 nodeName:}" failed. No retries permitted until 2025-10-08 06:56:41.132004758 +0000 UTC m=+1344.261697359 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data") pod "rabbitmq-server-0" (UID: "8f931d71-9f8f-4755-a793-ca326e423199") : configmap "rabbitmq-config-data" not found Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.139283 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-ovn-rundir\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.139306 4958 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-ovs-rundir\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.139315 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.139388 4958 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.139434 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data podName:442b1534-27bc-4d6d-be46-1ea5689c290f nodeName:}" failed. No retries permitted until 2025-10-08 06:56:40.139415268 +0000 UTC m=+1343.269107869 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data") pod "rabbitmq-cell1-server-0" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f") : configmap "rabbitmq-cell1-config-data" not found Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.148092 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6269a952_e10d_442f_8d9f_135e16244e83.slice/crio-81396f72e8d2e4a1c6da156a645037fa2138f319f380e9ba53045e95eba2e4bd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3e3061c_ca6e_43c6_ba1d_2520f28142c6.slice/crio-conmon-58af15037162a3e008ea15295768d765f391483c1890551a881a9bfccea4d905.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6269a952_e10d_442f_8d9f_135e16244e83.slice/crio-conmon-fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf285e309_c3e6_42ce_9f95_8302079cfd71.slice/crio-conmon-1a5b26a966d67867d27cb570fdf7f9fa9f4464f8d556c23369640c4497114693.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6269a952_e10d_442f_8d9f_135e16244e83.slice/crio-conmon-81396f72e8d2e4a1c6da156a645037fa2138f319f380e9ba53045e95eba2e4bd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3e3061c_ca6e_43c6_ba1d_2520f28142c6.slice/crio-58af15037162a3e008ea15295768d765f391483c1890551a881a9bfccea4d905.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c45aa0e_9caf_42e6_bfbb_59c802d81c98.slice/crio-86bfed71cefd8eec097df2c1d8e6ba77ccea56ba62521cec70c88418d4a40639.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf285e309_c3e6_42ce_9f95_8302079cfd71.slice/crio-1a5b26a966d67867d27cb570fdf7f9fa9f4464f8d556c23369640c4497114693.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c45aa0e_9caf_42e6_bfbb_59c802d81c98.slice/crio-conmon-c13727bb3092a9324ff420e4b08230ab8156bbd4c580c0f6905a950356a2b45b.scope\": RecentStats: unable to find data in memory cache]" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.148272 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-kube-api-access-2x7qp" (OuterVolumeSpecName: "kube-api-access-2x7qp") pod "5971bc9c-45ee-4ccb-aef5-290f51ac13ba" (UID: "5971bc9c-45ee-4ccb-aef5-290f51ac13ba"). InnerVolumeSpecName "kube-api-access-2x7qp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.182363 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-k6wwq"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.199616 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dbff5b58b-dsj98"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.243822 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-run\") pod \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.246031 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6rxz\" (UniqueName: \"kubernetes.io/projected/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-kube-api-access-v6rxz\") pod \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.247190 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-run" (OuterVolumeSpecName: "var-run") pod "c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7" (UID: "c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.274387 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5971bc9c-45ee-4ccb-aef5-290f51ac13ba" (UID: "5971bc9c-45ee-4ccb-aef5-290f51ac13ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.285042 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-k6wwq"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.285414 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-combined-ca-bundle\") pod \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.285504 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-log-ovn\") pod \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.285573 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-combined-ca-bundle\") pod \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\" (UID: \"5971bc9c-45ee-4ccb-aef5-290f51ac13ba\") " Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.285692 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-scripts\") pod \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.285822 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-run-ovn\") pod \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " Oct 08 06:56:39 crc 
kubenswrapper[4958]: I1008 06:56:39.285843 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-ovn-controller-tls-certs\") pod \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\" (UID: \"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7\") " Oct 08 06:56:39 crc kubenswrapper[4958]: W1008 06:56:39.286592 4958 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5971bc9c-45ee-4ccb-aef5-290f51ac13ba/volumes/kubernetes.io~secret/combined-ca-bundle Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.286616 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5971bc9c-45ee-4ccb-aef5-290f51ac13ba" (UID: "5971bc9c-45ee-4ccb-aef5-290f51ac13ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.287384 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7" (UID: "c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.289748 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7" (UID: "c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.289782 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-scripts" (OuterVolumeSpecName: "scripts") pod "c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7" (UID: "c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.293896 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-kube-api-access-v6rxz" (OuterVolumeSpecName: "kube-api-access-v6rxz") pod "c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7" (UID: "c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7"). InnerVolumeSpecName "kube-api-access-v6rxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.304227 4958 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-run\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.304260 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6rxz\" (UniqueName: \"kubernetes.io/projected/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-kube-api-access-v6rxz\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.304271 4958 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.304280 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.304289 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.304297 4958 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.304305 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x7qp\" (UniqueName: \"kubernetes.io/projected/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-kube-api-access-2x7qp\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.332848 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "5971bc9c-45ee-4ccb-aef5-290f51ac13ba" (UID: "5971bc9c-45ee-4ccb-aef5-290f51ac13ba"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.353812 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cindere33b-account-delete-6brfz"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.379510 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-z2vpl"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.395038 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e33b-account-create-mzg84"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.405974 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5971bc9c-45ee-4ccb-aef5-290f51ac13ba-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.413145 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-z2vpl"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.414415 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7" (UID: "c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.437301 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e33b-account-create-mzg84"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.450231 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-p4rkp" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovs-vswitchd" containerID="cri-o://b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" gracePeriod=29 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.461778 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.462023 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="234cbd06-f8de-4d4f-a510-8dc7e5d9db93" containerName="glance-log" containerID="cri-o://d76598e60d3a3646bdbc321f562f55f8c8d51c2d903f6c39ad73774883cc7ec3" gracePeriod=30 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.464843 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="234cbd06-f8de-4d4f-a510-8dc7e5d9db93" containerName="glance-httpd" containerID="cri-o://39c3f98bf3ab5af00e017cc7b7b4cbeb1fef7cb815739f4da85c995009b67708" gracePeriod=30 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.508433 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.533347 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placementcbd1-account-delete-sbqzp"] Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.548975 4958 log.go:32] "ExecSync cmd 
from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.566143 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.580204 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.580263 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-p4rkp" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovs-vswitchd" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.611083 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7" (UID: "c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.628054 4958 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Oct 08 06:56:39 crc kubenswrapper[4958]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Oct 08 06:56:39 crc kubenswrapper[4958]: + source /usr/local/bin/container-scripts/functions Oct 08 06:56:39 crc kubenswrapper[4958]: ++ OVNBridge=br-int Oct 08 06:56:39 crc kubenswrapper[4958]: ++ OVNRemote=tcp:localhost:6642 Oct 08 06:56:39 crc kubenswrapper[4958]: ++ OVNEncapType=geneve Oct 08 06:56:39 crc kubenswrapper[4958]: ++ OVNAvailabilityZones= Oct 08 06:56:39 crc kubenswrapper[4958]: ++ EnableChassisAsGateway=true Oct 08 06:56:39 crc kubenswrapper[4958]: ++ PhysicalNetworks= Oct 08 06:56:39 crc kubenswrapper[4958]: ++ OVNHostName= Oct 08 06:56:39 crc kubenswrapper[4958]: ++ DB_FILE=/etc/openvswitch/conf.db Oct 08 06:56:39 crc kubenswrapper[4958]: ++ ovs_dir=/var/lib/openvswitch Oct 08 06:56:39 crc kubenswrapper[4958]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Oct 08 06:56:39 crc kubenswrapper[4958]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Oct 08 06:56:39 crc kubenswrapper[4958]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 08 06:56:39 crc kubenswrapper[4958]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 06:56:39 crc kubenswrapper[4958]: + sleep 0.5 Oct 08 06:56:39 crc kubenswrapper[4958]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 06:56:39 crc kubenswrapper[4958]: + sleep 0.5 Oct 08 06:56:39 crc kubenswrapper[4958]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 06:56:39 crc kubenswrapper[4958]: + sleep 0.5 Oct 08 06:56:39 crc kubenswrapper[4958]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 06:56:39 crc kubenswrapper[4958]: + cleanup_ovsdb_server_semaphore Oct 08 06:56:39 crc kubenswrapper[4958]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 08 06:56:39 crc kubenswrapper[4958]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Oct 08 06:56:39 crc kubenswrapper[4958]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-p4rkp" message=< Oct 08 06:56:39 crc kubenswrapper[4958]: Exiting ovsdb-server (5) [ OK ] Oct 08 06:56:39 crc kubenswrapper[4958]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Oct 08 06:56:39 crc kubenswrapper[4958]: + source /usr/local/bin/container-scripts/functions Oct 08 06:56:39 crc kubenswrapper[4958]: ++ OVNBridge=br-int Oct 08 06:56:39 crc kubenswrapper[4958]: ++ OVNRemote=tcp:localhost:6642 Oct 08 06:56:39 crc kubenswrapper[4958]: ++ OVNEncapType=geneve Oct 08 06:56:39 crc kubenswrapper[4958]: ++ OVNAvailabilityZones= Oct 08 06:56:39 crc kubenswrapper[4958]: ++ EnableChassisAsGateway=true Oct 08 06:56:39 crc kubenswrapper[4958]: ++ PhysicalNetworks= Oct 08 06:56:39 crc kubenswrapper[4958]: ++ OVNHostName= Oct 08 06:56:39 crc kubenswrapper[4958]: ++ DB_FILE=/etc/openvswitch/conf.db Oct 08 06:56:39 crc kubenswrapper[4958]: ++ ovs_dir=/var/lib/openvswitch Oct 08 06:56:39 crc kubenswrapper[4958]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Oct 08 06:56:39 crc kubenswrapper[4958]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Oct 08 06:56:39 crc kubenswrapper[4958]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 08 06:56:39 crc kubenswrapper[4958]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 06:56:39 crc kubenswrapper[4958]: + sleep 0.5 Oct 08 06:56:39 crc kubenswrapper[4958]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 06:56:39 crc kubenswrapper[4958]: + sleep 0.5 Oct 08 06:56:39 crc kubenswrapper[4958]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 06:56:39 crc kubenswrapper[4958]: + sleep 0.5 Oct 08 06:56:39 crc kubenswrapper[4958]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 06:56:39 crc kubenswrapper[4958]: + cleanup_ovsdb_server_semaphore Oct 08 06:56:39 crc kubenswrapper[4958]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 08 06:56:39 crc kubenswrapper[4958]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Oct 08 06:56:39 crc kubenswrapper[4958]: > Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.628094 4958 kuberuntime_container.go:691] "PreStop hook failed" err=< Oct 08 06:56:39 crc kubenswrapper[4958]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Oct 08 06:56:39 crc kubenswrapper[4958]: + source /usr/local/bin/container-scripts/functions Oct 08 06:56:39 crc kubenswrapper[4958]: ++ OVNBridge=br-int Oct 08 06:56:39 crc kubenswrapper[4958]: ++ OVNRemote=tcp:localhost:6642 Oct 08 06:56:39 crc kubenswrapper[4958]: ++ OVNEncapType=geneve Oct 08 06:56:39 crc kubenswrapper[4958]: ++ OVNAvailabilityZones= Oct 08 06:56:39 crc kubenswrapper[4958]: ++ EnableChassisAsGateway=true Oct 08 06:56:39 crc kubenswrapper[4958]: ++ PhysicalNetworks= Oct 08 06:56:39 crc kubenswrapper[4958]: ++ OVNHostName= Oct 08 06:56:39 crc kubenswrapper[4958]: ++ DB_FILE=/etc/openvswitch/conf.db Oct 08 06:56:39 crc kubenswrapper[4958]: ++ ovs_dir=/var/lib/openvswitch Oct 08 06:56:39 crc kubenswrapper[4958]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Oct 08 06:56:39 crc kubenswrapper[4958]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Oct 08 06:56:39 crc kubenswrapper[4958]: ++ 
SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 08 06:56:39 crc kubenswrapper[4958]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 06:56:39 crc kubenswrapper[4958]: + sleep 0.5 Oct 08 06:56:39 crc kubenswrapper[4958]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 06:56:39 crc kubenswrapper[4958]: + sleep 0.5 Oct 08 06:56:39 crc kubenswrapper[4958]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 06:56:39 crc kubenswrapper[4958]: + sleep 0.5 Oct 08 06:56:39 crc kubenswrapper[4958]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Oct 08 06:56:39 crc kubenswrapper[4958]: + cleanup_ovsdb_server_semaphore Oct 08 06:56:39 crc kubenswrapper[4958]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Oct 08 06:56:39 crc kubenswrapper[4958]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Oct 08 06:56:39 crc kubenswrapper[4958]: > pod="openstack/ovn-controller-ovs-p4rkp" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovsdb-server" containerID="cri-o://227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.628126 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-p4rkp" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovsdb-server" containerID="cri-o://227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" gracePeriod=28 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.646915 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b66abd6-edd9-4689-b24a-4803735ee82c" path="/var/lib/kubelet/pods/1b66abd6-edd9-4689-b24a-4803735ee82c/volumes" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.647531 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31307abc-8d4b-4e83-a624-1720b6b342c4" 
path="/var/lib/kubelet/pods/31307abc-8d4b-4e83-a624-1720b6b342c4/volumes" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.648259 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fcbbb25-135f-4269-86a1-359e0b962438" path="/var/lib/kubelet/pods/3fcbbb25-135f-4269-86a1-359e0b962438/volumes" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.648755 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43525d65-17da-4bd4-a6bd-c21781daf8ff" path="/var/lib/kubelet/pods/43525d65-17da-4bd4-a6bd-c21781daf8ff/volumes" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.649321 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f07f416-2847-46dd-b003-1cb2f1a9dda9" path="/var/lib/kubelet/pods/4f07f416-2847-46dd-b003-1cb2f1a9dda9/volumes" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.650542 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="530f954e-7750-4f47-896a-7568c626c8ac" path="/var/lib/kubelet/pods/530f954e-7750-4f47-896a-7568c626c8ac/volumes" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.651077 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba0793ca-c021-446b-914c-06d31ff87445" path="/var/lib/kubelet/pods/ba0793ca-c021-446b-914c-06d31ff87445/volumes" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.652367 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc06996f-e37d-472a-8912-683dbc0049a5" path="/var/lib/kubelet/pods/bc06996f-e37d-472a-8912-683dbc0049a5/volumes" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.653009 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa59ca9-7979-4892-90d5-6b4f8b374583" path="/var/lib/kubelet/pods/bfa59ca9-7979-4892-90d5-6b4f8b374583/volumes" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.653588 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e19fed95-a8e9-4e26-b7b8-d6d2f105578a" 
path="/var/lib/kubelet/pods/e19fed95-a8e9-4e26-b7b8-d6d2f105578a/volumes" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.654856 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f98db36e-5db6-40ec-a540-3b50b4ae0749" path="/var/lib/kubelet/pods/f98db36e-5db6-40ec-a540-3b50b4ae0749/volumes" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.656061 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-cbd1-account-create-4rfv9"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.656090 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-cbd1-account-create-4rfv9"] Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.680154 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3 is running failed: container process not found" containerID="fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3" cmd=["/usr/bin/pidof","ovsdb-server"] Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.680711 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3 is running failed: container process not found" containerID="fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3" cmd=["/usr/bin/pidof","ovsdb-server"] Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.681277 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3 is running failed: container process not found" containerID="fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3" 
cmd=["/usr/bin/pidof","ovsdb-server"] Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.681306 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="6269a952-e10d-442f-8d9f-135e16244e83" containerName="ovsdbserver-nb" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.684724 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.684906 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="07431828-0ed3-42a8-9c9c-fdcdb98c854b" containerName="glance-log" containerID="cri-o://8880871fe8295b852d41b427ce6abdf7eaf0a061647089a2c0dbf322aafca26b" gracePeriod=30 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.685637 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="07431828-0ed3-42a8-9c9c-fdcdb98c854b" containerName="glance-httpd" containerID="cri-o://5a4d814284dc7005c70da55ad6372b1b9b0faa25b4caef46944a4c4b825edf42" gracePeriod=30 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.695744 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-845b57c9c7-mn8f6"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.696172 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-845b57c9c7-mn8f6" podUID="a720c647-fed0-4c66-83ed-ab4c03fc68ba" containerName="neutron-api" containerID="cri-o://84cc28a7bcc9b6378dbaf0a2893e4b30dd9ff13ea289f971aaa533e40205724c" gracePeriod=30 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.696541 4958 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openstack/neutron-845b57c9c7-mn8f6" podUID="a720c647-fed0-4c66-83ed-ab4c03fc68ba" containerName="neutron-httpd" containerID="cri-o://e4bcd5f4d6a4cf4432ebda09962729c94754a433d4babeeddaccaced2bd1b419" gracePeriod=30 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.709801 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4rgqh"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.713490 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.719594 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4rgqh"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.736169 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7885-account-create-x4rcb"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.736301 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7885-account-create-x4rcb"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.743762 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-txddw"] Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.759837 4958 log.go:32] "ExecSync cmd from runtime service failed" err=< Oct 08 06:56:39 crc kubenswrapper[4958]: rpc error: code = Unknown desc = command error: setns `mnt`: Bad file descriptor Oct 08 06:56:39 crc kubenswrapper[4958]: fail startup Oct 08 06:56:39 crc kubenswrapper[4958]: , stdout: , stderr: , exit code -1 Oct 08 06:56:39 crc kubenswrapper[4958]: > containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.759997 4958 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/neutron-db-create-txddw"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.760025 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8595-account-create-snnb5"] Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.772876 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.778268 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8595-account-create-snnb5"] Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.778307 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:56:39 crc kubenswrapper[4958]: E1008 06:56:39.778369 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-p4rkp" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovsdb-server" Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.790168 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 
06:56:39.805779 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.806183 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="90fd75a5-0719-4f7b-9103-d76319815535" containerName="nova-metadata-log" containerID="cri-o://0416399ef594017585c375dbf306aaebef9247c5c069a91faaad5a921511a654" gracePeriod=30 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.806557 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="90fd75a5-0719-4f7b-9103-d76319815535" containerName="nova-metadata-metadata" containerID="cri-o://411c6d0db58b91660709b05a5f2cc376c27a480550a9418f37b91c7bf4306c89" gracePeriod=30 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.838130 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.838392 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="65112aa3-67bb-47a4-bc56-241ce61eff7b" containerName="nova-api-log" containerID="cri-o://3847c1dff68faa7fdc2dc75dc40fe444ac629a5d89701750bf8d511d58fa9aba" gracePeriod=30 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.838847 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="65112aa3-67bb-47a4-bc56-241ce61eff7b" containerName="nova-api-api" containerID="cri-o://f3502b89ff3b2147ee78b29363381741d58de95e5a89ce1f3ff1cff5bce4545a" gracePeriod=30 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.847675 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.866877 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-fc4a-account-create-ct9gp"] Oct 08 06:56:39 crc 
kubenswrapper[4958]: I1008 06:56:39.871569 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2e10-account-create-sc4mp"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.879127 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2e10-account-create-sc4mp"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.883570 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="8f931d71-9f8f-4755-a793-ca326e423199" containerName="rabbitmq" containerID="cri-o://1d573b950369ab786a385be2c41152d03eccc8f00bd4cb94292246558fa2f4cf" gracePeriod=604800 Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.886543 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-fc4a-account-create-ct9gp"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.892636 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mtqh6"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.904050 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mtqh6"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.940054 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-z8kq2"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.969011 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-z8kq2"] Oct 08 06:56:39 crc kubenswrapper[4958]: I1008 06:56:39.969876 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.002260 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell1fc4a-account-delete-nlcgq"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.033548 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f285e309-c3e6-42ce-9f95-8302079cfd71/ovsdbserver-sb/0.log" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.033620 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.037040 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-c765759f8-jn9mj"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.037233 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" podUID="6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" containerName="barbican-keystone-listener-log" containerID="cri-o://1350e4ea881e68f6fe98606268360098bdeb78fb0b42909468424428303eff5c" gracePeriod=30 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.037254 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" podUID="6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" containerName="barbican-keystone-listener" containerID="cri-o://7226ee00569bafdcf12786ebdd5af9f03b77506f57b10ab6b33eedc12be1ca77" gracePeriod=30 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.045026 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6269a952-e10d-442f-8d9f-135e16244e83/ovsdbserver-nb/0.log" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.045089 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.046668 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-d4d8d7b54-5xwrj"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.061237 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5cdf478498-ptdth"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.061504 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cdf478498-ptdth" podUID="94872b33-329c-42ca-9d90-09c6950dfd83" containerName="barbican-api-log" containerID="cri-o://8706ba2579f385903f00ba5a57a567212e79efde0f18b7f2408885add9cf0427" gracePeriod=30 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.061568 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cdf478498-ptdth" podUID="94872b33-329c-42ca-9d90-09c6950dfd83" containerName="barbican-api" containerID="cri-o://9c1c2c4508d314f052c9515beb0473192e5c3aa71f3ed2c51ea5f440955efe26" gracePeriod=30 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.105588 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6cdccb56ff-v4lgm"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.105872 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6cdccb56ff-v4lgm" podUID="7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" containerName="barbican-worker-log" containerID="cri-o://87bccf6b8df0d8973019540e9aeef4bc4a3f9ababf913b986323b73b1f2f4a3b" gracePeriod=30 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.106786 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6cdccb56ff-v4lgm" podUID="7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" containerName="barbican-worker" 
containerID="cri-o://2d749d543c294b1a2dd8f01a3b77b14cba82efc73fc6efa7214dec5fb9278949" gracePeriod=30 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.131244 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="1091f62d-2fa6-4b93-87ce-8c0fbcc23987" containerName="galera" containerID="cri-o://cc0ce848d587944d5646e6c4d3f0968fdc06fb139bd8f7158d1244e8f7e79bb0" gracePeriod=30 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.131381 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-dns-svc\") pod \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.131489 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nchdp\" (UniqueName: \"kubernetes.io/projected/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-kube-api-access-nchdp\") pod \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.131539 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-dns-swift-storage-0\") pod \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.131621 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-ovsdbserver-nb\") pod \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.131658 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-ovsdbserver-sb\") pod \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.131702 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-config\") pod \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\" (UID: \"b3e3061c-ca6e-43c6-ba1d-2520f28142c6\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.140585 4958 generic.go:334] "Generic (PLEG): container finished" podID="b2ac8efb-5e1d-4b4c-beba-7d287a699044" containerID="1ad480ebf1447ef4dce8f921ab07bf404169d88c3b6f87c0b44604976474fd46" exitCode=143 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.140650 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2ac8efb-5e1d-4b4c-beba-7d287a699044","Type":"ContainerDied","Data":"1ad480ebf1447ef4dce8f921ab07bf404169d88c3b6f87c0b44604976474fd46"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.141289 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.147359 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-kube-api-access-nchdp" (OuterVolumeSpecName: "kube-api-access-nchdp") pod "b3e3061c-ca6e-43c6-ba1d-2520f28142c6" (UID: "b3e3061c-ca6e-43c6-ba1d-2520f28142c6"). InnerVolumeSpecName "kube-api-access-nchdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.148183 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6dbff5b58b-dsj98"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.149236 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dbff5b58b-dsj98" event={"ID":"1792ed77-ae79-44a5-80a3-7a67a8031d75","Type":"ContainerStarted","Data":"6b1cfb580b4d593c51a5e15d8aca7dc77cdda3cad24f2f2cbdff273566f0f041"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.149271 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dbff5b58b-dsj98" event={"ID":"1792ed77-ae79-44a5-80a3-7a67a8031d75","Type":"ContainerStarted","Data":"6f555752fc6174edb88cbf19e289d81ff16e8cbe80c1621bcce420fe4c8ce9bf"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.151802 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.152366 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4d96bb9-5mtl4" event={"ID":"b3e3061c-ca6e-43c6-ba1d-2520f28142c6","Type":"ContainerDied","Data":"0f9e9c67892ab00ebc5c5e0f638b4951040a5238cff42c494fd70929b5d150b3"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.152395 4958 scope.go:117] "RemoveContainer" containerID="58af15037162a3e008ea15295768d765f391483c1890551a881a9bfccea4d905" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.163323 4958 generic.go:334] "Generic (PLEG): container finished" podID="23d77fbe-4e70-428d-92b3-926fb7f5547e" containerID="d0cab1c319b0805fcf376df64f2c0efa43e3a49ab93decdf99cdc592d551d40b" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.163350 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"23d77fbe-4e70-428d-92b3-926fb7f5547e","Type":"ContainerDied","Data":"d0cab1c319b0805fcf376df64f2c0efa43e3a49ab93decdf99cdc592d551d40b"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.178036 4958 generic.go:334] "Generic (PLEG): container finished" podID="234cbd06-f8de-4d4f-a510-8dc7e5d9db93" containerID="d76598e60d3a3646bdbc321f562f55f8c8d51c2d903f6c39ad73774883cc7ec3" exitCode=143 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.178266 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"234cbd06-f8de-4d4f-a510-8dc7e5d9db93","Type":"ContainerDied","Data":"d76598e60d3a3646bdbc321f562f55f8c8d51c2d903f6c39ad73774883cc7ec3"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.180936 4958 generic.go:334] "Generic (PLEG): container finished" podID="a720c647-fed0-4c66-83ed-ab4c03fc68ba" containerID="e4bcd5f4d6a4cf4432ebda09962729c94754a433d4babeeddaccaced2bd1b419" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.181097 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-845b57c9c7-mn8f6" event={"ID":"a720c647-fed0-4c66-83ed-ab4c03fc68ba","Type":"ContainerDied","Data":"e4bcd5f4d6a4cf4432ebda09962729c94754a433d4babeeddaccaced2bd1b419"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.182888 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6269a952-e10d-442f-8d9f-135e16244e83/ovsdbserver-nb/0.log" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.182915 4958 generic.go:334] "Generic (PLEG): container finished" podID="6269a952-e10d-442f-8d9f-135e16244e83" containerID="fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3" exitCode=143 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.183026 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"6269a952-e10d-442f-8d9f-135e16244e83","Type":"ContainerDied","Data":"fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.183072 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6269a952-e10d-442f-8d9f-135e16244e83","Type":"ContainerDied","Data":"82989b317ef552d6740b405555b3ca3cd671d9b3456c6e550fdd36203bf74f89"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.183045 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.188592 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f285e309-c3e6-42ce-9f95-8302079cfd71/ovsdbserver-sb/0.log" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.188635 4958 generic.go:334] "Generic (PLEG): container finished" podID="f285e309-c3e6-42ce-9f95-8302079cfd71" containerID="50a78d2d43555c7f649d9e67e4454561ca2477641c1beb7bd8e5c77a400198a9" exitCode=143 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.188742 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.188933 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f285e309-c3e6-42ce-9f95-8302079cfd71","Type":"ContainerDied","Data":"50a78d2d43555c7f649d9e67e4454561ca2477641c1beb7bd8e5c77a400198a9"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.189091 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f285e309-c3e6-42ce-9f95-8302079cfd71","Type":"ContainerDied","Data":"2c8266d18ea0ed418fc8ba9806be17c4d5db0ed9a5d5e700f44fbd75eb24dc61"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.202595 4958 scope.go:117] "RemoveContainer" containerID="68cda18cb7a1f232876eb4eae773f044c5139e874cd3c645d33bf84495525943" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210687 4958 generic.go:334] "Generic (PLEG): container finished" podID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerID="96600bb67add0f430f3d3f7bc50bb148f53a95e2d96aa280884db2079910fcc5" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210738 4958 generic.go:334] "Generic (PLEG): container finished" podID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerID="822bb196cd354e0752c30e33f034860b8aa7c4cf0eef390c7e6067b9260d966e" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210747 4958 generic.go:334] "Generic (PLEG): container finished" podID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerID="21df3155fa52a2087045a5927cde073b2ca8d2f30fa88ba4c74a281b2f3fb7bc" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210755 4958 generic.go:334] "Generic (PLEG): container finished" podID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerID="b608a4dbe063d74c26342e7c42bce09e30df95c744884718ee615000db2bfb36" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210763 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerID="86bfed71cefd8eec097df2c1d8e6ba77ccea56ba62521cec70c88418d4a40639" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210770 4958 generic.go:334] "Generic (PLEG): container finished" podID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerID="724ed394bbd6e962a7c26ddc942e11038dcd01c833577f0f40fabd8ae1c82655" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210778 4958 generic.go:334] "Generic (PLEG): container finished" podID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerID="ef947aa4d03942a34c6bc85ef6fb4b0f2207baa004e3f6b80e29968a08b1d8cb" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210786 4958 generic.go:334] "Generic (PLEG): container finished" podID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerID="f6755a074a4c40620709a87ac6599a018ae64733e179e086ef57f6d9ce9dc4d0" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210794 4958 generic.go:334] "Generic (PLEG): container finished" podID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerID="66d5f937dbd1abd5a08b1b2ddf3cc40f3eb8fc2793d2eff9531136500ae61e84" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210801 4958 generic.go:334] "Generic (PLEG): container finished" podID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerID="0a5a60f42124a354930e3b3306d5ebb23d9d56694f6a482b36b814b7a29405e6" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210807 4958 generic.go:334] "Generic (PLEG): container finished" podID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerID="c13727bb3092a9324ff420e4b08230ab8156bbd4c580c0f6905a950356a2b45b" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210840 4958 generic.go:334] "Generic (PLEG): container finished" podID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerID="f0de83255ba3108743dca08b70ec1668b51eb795dc13a54cf5cbf4041775bd0b" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210847 4958 
generic.go:334] "Generic (PLEG): container finished" podID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerID="4afa360de4cdd22485317e27cb843f86474d888319ad6c43a75e14506d8d3331" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210854 4958 generic.go:334] "Generic (PLEG): container finished" podID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerID="240aa008ea9290a60c7264968b55d9ba55c52133e52044e5dfbe81d67bf1b1b8" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210852 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"96600bb67add0f430f3d3f7bc50bb148f53a95e2d96aa280884db2079910fcc5"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210888 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"822bb196cd354e0752c30e33f034860b8aa7c4cf0eef390c7e6067b9260d966e"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210924 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"21df3155fa52a2087045a5927cde073b2ca8d2f30fa88ba4c74a281b2f3fb7bc"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210934 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"b608a4dbe063d74c26342e7c42bce09e30df95c744884718ee615000db2bfb36"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210973 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"86bfed71cefd8eec097df2c1d8e6ba77ccea56ba62521cec70c88418d4a40639"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 
06:56:40.210982 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"724ed394bbd6e962a7c26ddc942e11038dcd01c833577f0f40fabd8ae1c82655"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.210990 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"ef947aa4d03942a34c6bc85ef6fb4b0f2207baa004e3f6b80e29968a08b1d8cb"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.211000 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"f6755a074a4c40620709a87ac6599a018ae64733e179e086ef57f6d9ce9dc4d0"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.211007 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"66d5f937dbd1abd5a08b1b2ddf3cc40f3eb8fc2793d2eff9531136500ae61e84"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.211016 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"0a5a60f42124a354930e3b3306d5ebb23d9d56694f6a482b36b814b7a29405e6"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.211024 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"c13727bb3092a9324ff420e4b08230ab8156bbd4c580c0f6905a950356a2b45b"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.211033 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"f0de83255ba3108743dca08b70ec1668b51eb795dc13a54cf5cbf4041775bd0b"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.211041 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"4afa360de4cdd22485317e27cb843f86474d888319ad6c43a75e14506d8d3331"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.211048 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"240aa008ea9290a60c7264968b55d9ba55c52133e52044e5dfbe81d67bf1b1b8"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.214957 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.216586 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6696f545c5-2j7vj" event={"ID":"7c32a9e6-d21d-422f-914d-c3e9de16a0d5","Type":"ContainerStarted","Data":"73cd28951909d407d4f77bb645d10d3b811f295b0644678badce6f10f5e9cf0d"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.222401 4958 generic.go:334] "Generic (PLEG): container finished" podID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" exitCode=0 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.222464 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p4rkp" event={"ID":"bffcae1a-1024-41fe-95f7-c20090e1a4fa","Type":"ContainerDied","Data":"227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.228923 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6696f545c5-2j7vj"] Oct 08 06:56:40 crc kubenswrapper[4958]: 
I1008 06:56:40.233107 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbzkb\" (UniqueName: \"kubernetes.io/projected/f285e309-c3e6-42ce-9f95-8302079cfd71-kube-api-access-vbzkb\") pod \"f285e309-c3e6-42ce-9f95-8302079cfd71\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.233333 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-ovsdbserver-nb-tls-certs\") pod \"6269a952-e10d-442f-8d9f-135e16244e83\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.233406 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-combined-ca-bundle\") pod \"6269a952-e10d-442f-8d9f-135e16244e83\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.233566 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-metrics-certs-tls-certs\") pod \"f285e309-c3e6-42ce-9f95-8302079cfd71\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.233688 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6269a952-e10d-442f-8d9f-135e16244e83-scripts\") pod \"6269a952-e10d-442f-8d9f-135e16244e83\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.233762 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/f285e309-c3e6-42ce-9f95-8302079cfd71-ovsdb-rundir\") pod \"f285e309-c3e6-42ce-9f95-8302079cfd71\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.233857 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6269a952-e10d-442f-8d9f-135e16244e83-config\") pod \"6269a952-e10d-442f-8d9f-135e16244e83\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.233971 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6269a952-e10d-442f-8d9f-135e16244e83-ovsdb-rundir\") pod \"6269a952-e10d-442f-8d9f-135e16244e83\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.234066 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"f285e309-c3e6-42ce-9f95-8302079cfd71\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.234155 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f285e309-c3e6-42ce-9f95-8302079cfd71-config\") pod \"f285e309-c3e6-42ce-9f95-8302079cfd71\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.234286 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f285e309-c3e6-42ce-9f95-8302079cfd71-scripts\") pod \"f285e309-c3e6-42ce-9f95-8302079cfd71\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.234352 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-metrics-certs-tls-certs\") pod \"6269a952-e10d-442f-8d9f-135e16244e83\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.234631 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"6269a952-e10d-442f-8d9f-135e16244e83\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.234749 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zpp7\" (UniqueName: \"kubernetes.io/projected/6269a952-e10d-442f-8d9f-135e16244e83-kube-api-access-5zpp7\") pod \"6269a952-e10d-442f-8d9f-135e16244e83\" (UID: \"6269a952-e10d-442f-8d9f-135e16244e83\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.234845 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-ovsdbserver-sb-tls-certs\") pod \"f285e309-c3e6-42ce-9f95-8302079cfd71\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.234907 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-combined-ca-bundle\") pod \"f285e309-c3e6-42ce-9f95-8302079cfd71\" (UID: \"f285e309-c3e6-42ce-9f95-8302079cfd71\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.235327 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nchdp\" (UniqueName: \"kubernetes.io/projected/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-kube-api-access-nchdp\") on node \"crc\" DevicePath \"\"" Oct 08 
06:56:40 crc kubenswrapper[4958]: E1008 06:56:40.235508 4958 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 08 06:56:40 crc kubenswrapper[4958]: E1008 06:56:40.235594 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data podName:442b1534-27bc-4d6d-be46-1ea5689c290f nodeName:}" failed. No retries permitted until 2025-10-08 06:56:42.235579791 +0000 UTC m=+1345.365272382 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data") pod "rabbitmq-cell1-server-0" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f") : configmap "rabbitmq-cell1-config-data" not found Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.236143 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6269a952-e10d-442f-8d9f-135e16244e83-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "6269a952-e10d-442f-8d9f-135e16244e83" (UID: "6269a952-e10d-442f-8d9f-135e16244e83"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.237164 4958 generic.go:334] "Generic (PLEG): container finished" podID="65112aa3-67bb-47a4-bc56-241ce61eff7b" containerID="3847c1dff68faa7fdc2dc75dc40fe444ac629a5d89701750bf8d511d58fa9aba" exitCode=143 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.237239 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65112aa3-67bb-47a4-bc56-241ce61eff7b","Type":"ContainerDied","Data":"3847c1dff68faa7fdc2dc75dc40fe444ac629a5d89701750bf8d511d58fa9aba"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.239362 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f285e309-c3e6-42ce-9f95-8302079cfd71-scripts" (OuterVolumeSpecName: "scripts") pod "f285e309-c3e6-42ce-9f95-8302079cfd71" (UID: "f285e309-c3e6-42ce-9f95-8302079cfd71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.239920 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f285e309-c3e6-42ce-9f95-8302079cfd71-config" (OuterVolumeSpecName: "config") pod "f285e309-c3e6-42ce-9f95-8302079cfd71" (UID: "f285e309-c3e6-42ce-9f95-8302079cfd71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.241339 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f285e309-c3e6-42ce-9f95-8302079cfd71-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "f285e309-c3e6-42ce-9f95-8302079cfd71" (UID: "f285e309-c3e6-42ce-9f95-8302079cfd71"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.242143 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6269a952-e10d-442f-8d9f-135e16244e83-scripts" (OuterVolumeSpecName: "scripts") pod "6269a952-e10d-442f-8d9f-135e16244e83" (UID: "6269a952-e10d-442f-8d9f-135e16244e83"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.242593 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "f285e309-c3e6-42ce-9f95-8302079cfd71" (UID: "f285e309-c3e6-42ce-9f95-8302079cfd71"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.242667 4958 scope.go:117] "RemoveContainer" containerID="81396f72e8d2e4a1c6da156a645037fa2138f319f380e9ba53045e95eba2e4bd" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.243273 4958 generic.go:334] "Generic (PLEG): container finished" podID="4b9c2255-61f9-4319-93b1-138600df6985" containerID="bcbeedc0e126fa86f86f6a65d8d01563364649a3df6ebc1fda56bb28dd49d272" exitCode=137 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.243316 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "6269a952-e10d-442f-8d9f-135e16244e83" (UID: "6269a952-e10d-442f-8d9f-135e16244e83"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.243449 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.243755 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6269a952-e10d-442f-8d9f-135e16244e83-config" (OuterVolumeSpecName: "config") pod "6269a952-e10d-442f-8d9f-135e16244e83" (UID: "6269a952-e10d-442f-8d9f-135e16244e83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.243970 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.244157 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c106a441-1a40-4bee-9317-b6957f8a6c94" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://455c494d9988f4d2b518099bd3cb05171a0ceaffd36f37f6d69ded313d75a5e2" gracePeriod=30 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.250925 4958 generic.go:334] "Generic (PLEG): container finished" podID="07431828-0ed3-42a8-9c9c-fdcdb98c854b" containerID="8880871fe8295b852d41b427ce6abdf7eaf0a061647089a2c0dbf322aafca26b" exitCode=143 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.251014 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07431828-0ed3-42a8-9c9c-fdcdb98c854b","Type":"ContainerDied","Data":"8880871fe8295b852d41b427ce6abdf7eaf0a061647089a2c0dbf322aafca26b"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.251154 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6269a952-e10d-442f-8d9f-135e16244e83-kube-api-access-5zpp7" (OuterVolumeSpecName: "kube-api-access-5zpp7") pod "6269a952-e10d-442f-8d9f-135e16244e83" (UID: "6269a952-e10d-442f-8d9f-135e16244e83"). InnerVolumeSpecName "kube-api-access-5zpp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.253987 4958 generic.go:334] "Generic (PLEG): container finished" podID="90fd75a5-0719-4f7b-9103-d76319815535" containerID="0416399ef594017585c375dbf306aaebef9247c5c069a91faaad5a921511a654" exitCode=143 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.254034 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90fd75a5-0719-4f7b-9103-d76319815535","Type":"ContainerDied","Data":"0416399ef594017585c375dbf306aaebef9247c5c069a91faaad5a921511a654"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.255859 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-spvp9" event={"ID":"c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7","Type":"ContainerDied","Data":"3cd6402ff4ac1f3d36d58b1c18b30dfe0aacd9a465588b158e2c5e3ce04e53f6"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.255932 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-spvp9" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.268626 4958 generic.go:334] "Generic (PLEG): container finished" podID="70e48003-108c-4de3-be7e-81946556e25e" containerID="d326bcdc417c5ccf5500a143fe2cd2f36595d7ee749c8b67463e12eac441d722" exitCode=143 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.268731 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-tvw22" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.269730 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-687454697b-jdsn4" event={"ID":"70e48003-108c-4de3-be7e-81946556e25e","Type":"ContainerDied","Data":"d326bcdc417c5ccf5500a143fe2cd2f36595d7ee749c8b67463e12eac441d722"} Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.277427 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.277682 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c22a5d6b-9ca7-4f30-b997-e28ae554a8be" containerName="nova-scheduler-scheduler" containerID="cri-o://749526d14f06e103c443cfaac61ecb5d09b9fae43de27d241f3f8c1e61eb626a" gracePeriod=30 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.279621 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f285e309-c3e6-42ce-9f95-8302079cfd71-kube-api-access-vbzkb" (OuterVolumeSpecName: "kube-api-access-vbzkb") pod "f285e309-c3e6-42ce-9f95-8302079cfd71" (UID: "f285e309-c3e6-42ce-9f95-8302079cfd71"). InnerVolumeSpecName "kube-api-access-vbzkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.298682 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="442b1534-27bc-4d6d-be46-1ea5689c290f" containerName="rabbitmq" containerID="cri-o://a151cd70ef2b53f0e4ee05050cec1d3d170fdbc3435f7271a03d4f7f74173328" gracePeriod=604800 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.309301 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6696f545c5-2j7vj"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.327498 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-config" (OuterVolumeSpecName: "config") pod "b3e3061c-ca6e-43c6-ba1d-2520f28142c6" (UID: "b3e3061c-ca6e-43c6-ba1d-2520f28142c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.327700 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b3e3061c-ca6e-43c6-ba1d-2520f28142c6" (UID: "b3e3061c-ca6e-43c6-ba1d-2520f28142c6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.334481 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.334717 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="f196852b-bfdf-43dd-9579-3ecd8601e7bf" containerName="nova-cell1-conductor-conductor" containerID="cri-o://6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e" gracePeriod=30 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.337699 4958 scope.go:117] "RemoveContainer" containerID="fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.339160 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4b9c2255-61f9-4319-93b1-138600df6985-openstack-config\") pod \"4b9c2255-61f9-4319-93b1-138600df6985\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.339263 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9c2255-61f9-4319-93b1-138600df6985-combined-ca-bundle\") pod \"4b9c2255-61f9-4319-93b1-138600df6985\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.339330 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrppm\" (UniqueName: \"kubernetes.io/projected/4b9c2255-61f9-4319-93b1-138600df6985-kube-api-access-xrppm\") pod \"4b9c2255-61f9-4319-93b1-138600df6985\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.339814 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b9c2255-61f9-4319-93b1-138600df6985-openstack-config-secret\") pod \"4b9c2255-61f9-4319-93b1-138600df6985\" (UID: \"4b9c2255-61f9-4319-93b1-138600df6985\") " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.340308 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f285e309-c3e6-42ce-9f95-8302079cfd71-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.340330 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.340341 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zpp7\" (UniqueName: \"kubernetes.io/projected/6269a952-e10d-442f-8d9f-135e16244e83-kube-api-access-5zpp7\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.340350 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.340361 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbzkb\" (UniqueName: \"kubernetes.io/projected/f285e309-c3e6-42ce-9f95-8302079cfd71-kube-api-access-vbzkb\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.340371 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.340382 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/6269a952-e10d-442f-8d9f-135e16244e83-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.340392 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f285e309-c3e6-42ce-9f95-8302079cfd71-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.340402 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6269a952-e10d-442f-8d9f-135e16244e83-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.340411 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6269a952-e10d-442f-8d9f-135e16244e83-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.340426 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.340434 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f285e309-c3e6-42ce-9f95-8302079cfd71-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.341880 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qs945"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.344453 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b3e3061c-ca6e-43c6-ba1d-2520f28142c6" (UID: "b3e3061c-ca6e-43c6-ba1d-2520f28142c6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.347600 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qs945"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.353109 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b9c2255-61f9-4319-93b1-138600df6985-kube-api-access-xrppm" (OuterVolumeSpecName: "kube-api-access-xrppm") pod "4b9c2255-61f9-4319-93b1-138600df6985" (UID: "4b9c2255-61f9-4319-93b1-138600df6985"). InnerVolumeSpecName "kube-api-access-xrppm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.360375 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jbvrr"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.365076 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.365268 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f" containerName="nova-cell0-conductor-conductor" containerID="cri-o://bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0" gracePeriod=30 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.379452 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jbvrr"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.401187 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b3e3061c-ca6e-43c6-ba1d-2520f28142c6" (UID: "b3e3061c-ca6e-43c6-ba1d-2520f28142c6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.409504 4958 scope.go:117] "RemoveContainer" containerID="81396f72e8d2e4a1c6da156a645037fa2138f319f380e9ba53045e95eba2e4bd" Oct 08 06:56:40 crc kubenswrapper[4958]: E1008 06:56:40.410785 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81396f72e8d2e4a1c6da156a645037fa2138f319f380e9ba53045e95eba2e4bd\": container with ID starting with 81396f72e8d2e4a1c6da156a645037fa2138f319f380e9ba53045e95eba2e4bd not found: ID does not exist" containerID="81396f72e8d2e4a1c6da156a645037fa2138f319f380e9ba53045e95eba2e4bd" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.410839 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81396f72e8d2e4a1c6da156a645037fa2138f319f380e9ba53045e95eba2e4bd"} err="failed to get container status \"81396f72e8d2e4a1c6da156a645037fa2138f319f380e9ba53045e95eba2e4bd\": rpc error: code = NotFound desc = could not find container \"81396f72e8d2e4a1c6da156a645037fa2138f319f380e9ba53045e95eba2e4bd\": container with ID starting with 81396f72e8d2e4a1c6da156a645037fa2138f319f380e9ba53045e95eba2e4bd not found: ID does not exist" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.410866 4958 scope.go:117] "RemoveContainer" containerID="fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3" Oct 08 06:56:40 crc kubenswrapper[4958]: E1008 06:56:40.414071 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3\": container with ID starting with fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3 not found: ID does not exist" containerID="fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.414095 
4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3"} err="failed to get container status \"fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3\": rpc error: code = NotFound desc = could not find container \"fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3\": container with ID starting with fc440d9540e43a15fbf23cd6b48b8e2c3f62da0457046dbdd9a2c110424360b3 not found: ID does not exist" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.414114 4958 scope.go:117] "RemoveContainer" containerID="1a5b26a966d67867d27cb570fdf7f9fa9f4464f8d556c23369640c4497114693" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.414620 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b3e3061c-ca6e-43c6-ba1d-2520f28142c6" (UID: "b3e3061c-ca6e-43c6-ba1d-2520f28142c6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.423925 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f285e309-c3e6-42ce-9f95-8302079cfd71" (UID: "f285e309-c3e6-42ce-9f95-8302079cfd71"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.429265 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.436055 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6269a952-e10d-442f-8d9f-135e16244e83" (UID: "6269a952-e10d-442f-8d9f-135e16244e83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: E1008 06:56:40.436592 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 06:56:40 crc kubenswrapper[4958]: E1008 06:56:40.438309 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 06:56:40 crc kubenswrapper[4958]: E1008 06:56:40.439668 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 06:56:40 crc kubenswrapper[4958]: E1008 06:56:40.439692 4958 prober.go:104] "Probe 
errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f" containerName="nova-cell0-conductor-conductor" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.441826 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.441845 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrppm\" (UniqueName: \"kubernetes.io/projected/4b9c2255-61f9-4319-93b1-138600df6985-kube-api-access-xrppm\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.441855 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.441863 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.441872 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.441879 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3e3061c-ca6e-43c6-ba1d-2520f28142c6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.441887 4958 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.441904 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-d4d8d7b54-5xwrj"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.443391 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b9c2255-61f9-4319-93b1-138600df6985-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "4b9c2255-61f9-4319-93b1-138600df6985" (UID: "4b9c2255-61f9-4319-93b1-138600df6985"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.443981 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.452073 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-spvp9"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.464866 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-spvp9"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.473926 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "6269a952-e10d-442f-8d9f-135e16244e83" (UID: "6269a952-e10d-442f-8d9f-135e16244e83"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.483626 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-tvw22"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.489374 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b9c2255-61f9-4319-93b1-138600df6985-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b9c2255-61f9-4319-93b1-138600df6985" (UID: "4b9c2255-61f9-4319-93b1-138600df6985"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.490453 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-tvw22"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.498508 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cindere33b-account-delete-6brfz"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.503631 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican0295-account-delete-dzz74"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.508569 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placementcbd1-account-delete-sbqzp"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.513516 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell1fc4a-account-delete-nlcgq"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.529813 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b9c2255-61f9-4319-93b1-138600df6985-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "4b9c2255-61f9-4319-93b1-138600df6985" (UID: "4b9c2255-61f9-4319-93b1-138600df6985"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.536897 4958 scope.go:117] "RemoveContainer" containerID="50a78d2d43555c7f649d9e67e4454561ca2477641c1beb7bd8e5c77a400198a9" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.544107 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4b9c2255-61f9-4319-93b1-138600df6985-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.544134 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9c2255-61f9-4319-93b1-138600df6985-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.544145 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.544153 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.544162 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b9c2255-61f9-4319-93b1-138600df6985-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.558698 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapic226-account-delete-g5pmv"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.586221 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-ovsdbserver-nb-tls-certs" 
(OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "6269a952-e10d-442f-8d9f-135e16244e83" (UID: "6269a952-e10d-442f-8d9f-135e16244e83"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.591109 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "f285e309-c3e6-42ce-9f95-8302079cfd71" (UID: "f285e309-c3e6-42ce-9f95-8302079cfd71"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.599839 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "f285e309-c3e6-42ce-9f95-8302079cfd71" (UID: "f285e309-c3e6-42ce-9f95-8302079cfd71"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:40 crc kubenswrapper[4958]: W1008 06:56:40.614589 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc32b7167_96c2_4cf2_b330_54562c181940.slice/crio-b388821f7277929e58f425f76d06c1e0ec86c6cd27c0b568206c440e70db5d5e WatchSource:0}: Error finding container b388821f7277929e58f425f76d06c1e0ec86c6cd27c0b568206c440e70db5d5e: Status 404 returned error can't find the container with id b388821f7277929e58f425f76d06c1e0ec86c6cd27c0b568206c440e70db5d5e Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.646090 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.646123 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6269a952-e10d-442f-8d9f-135e16244e83-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.646132 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f285e309-c3e6-42ce-9f95-8302079cfd71-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.696569 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-67799bdf69-qlb9s"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.696882 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-67799bdf69-qlb9s" podUID="1f688514-2336-4067-bb66-8bc690a2da30" containerName="proxy-httpd" containerID="cri-o://20c3267044d4ea08ed1d508a80b0aeb04cc872da56823319bbdec02266bd4cf3" gracePeriod=30 Oct 08 06:56:40 crc 
kubenswrapper[4958]: I1008 06:56:40.697280 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-67799bdf69-qlb9s" podUID="1f688514-2336-4067-bb66-8bc690a2da30" containerName="proxy-server" containerID="cri-o://6a833e161a2028e949de920ad2fcf2e95b29dc9406b67da751b879f857c083e8" gracePeriod=30 Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.840196 4958 scope.go:117] "RemoveContainer" containerID="1a5b26a966d67867d27cb570fdf7f9fa9f4464f8d556c23369640c4497114693" Oct 08 06:56:40 crc kubenswrapper[4958]: E1008 06:56:40.845112 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a5b26a966d67867d27cb570fdf7f9fa9f4464f8d556c23369640c4497114693\": container with ID starting with 1a5b26a966d67867d27cb570fdf7f9fa9f4464f8d556c23369640c4497114693 not found: ID does not exist" containerID="1a5b26a966d67867d27cb570fdf7f9fa9f4464f8d556c23369640c4497114693" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.845169 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a5b26a966d67867d27cb570fdf7f9fa9f4464f8d556c23369640c4497114693"} err="failed to get container status \"1a5b26a966d67867d27cb570fdf7f9fa9f4464f8d556c23369640c4497114693\": rpc error: code = NotFound desc = could not find container \"1a5b26a966d67867d27cb570fdf7f9fa9f4464f8d556c23369640c4497114693\": container with ID starting with 1a5b26a966d67867d27cb570fdf7f9fa9f4464f8d556c23369640c4497114693 not found: ID does not exist" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.845193 4958 scope.go:117] "RemoveContainer" containerID="50a78d2d43555c7f649d9e67e4454561ca2477641c1beb7bd8e5c77a400198a9" Oct 08 06:56:40 crc kubenswrapper[4958]: E1008 06:56:40.848444 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"50a78d2d43555c7f649d9e67e4454561ca2477641c1beb7bd8e5c77a400198a9\": container with ID starting with 50a78d2d43555c7f649d9e67e4454561ca2477641c1beb7bd8e5c77a400198a9 not found: ID does not exist" containerID="50a78d2d43555c7f649d9e67e4454561ca2477641c1beb7bd8e5c77a400198a9" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.848476 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a78d2d43555c7f649d9e67e4454561ca2477641c1beb7bd8e5c77a400198a9"} err="failed to get container status \"50a78d2d43555c7f649d9e67e4454561ca2477641c1beb7bd8e5c77a400198a9\": rpc error: code = NotFound desc = could not find container \"50a78d2d43555c7f649d9e67e4454561ca2477641c1beb7bd8e5c77a400198a9\": container with ID starting with 50a78d2d43555c7f649d9e67e4454561ca2477641c1beb7bd8e5c77a400198a9 not found: ID does not exist" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.848493 4958 scope.go:117] "RemoveContainer" containerID="bcbeedc0e126fa86f86f6a65d8d01563364649a3df6ebc1fda56bb28dd49d272" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.897494 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-5mtl4"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.908837 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d4d96bb9-5mtl4"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.926439 4958 scope.go:117] "RemoveContainer" containerID="b2fa4137f0b797f8d0358b35176b6e4fe70f8f968a53ff5dc07e48e43e721d8a" Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.951371 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 06:56:40 crc kubenswrapper[4958]: I1008 06:56:40.951425 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.002823 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ovsdbserver-sb-0"] Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.026028 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 06:56:41 crc kubenswrapper[4958]: E1008 06:56:41.181258 4958 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 08 06:56:41 crc kubenswrapper[4958]: E1008 06:56:41.181329 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data podName:8f931d71-9f8f-4755-a793-ca326e423199 nodeName:}" failed. No retries permitted until 2025-10-08 06:56:45.181310776 +0000 UTC m=+1348.311003377 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data") pod "rabbitmq-server-0" (UID: "8f931d71-9f8f-4755-a793-ca326e423199") : configmap "rabbitmq-config-data" not found Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.306503 4958 generic.go:334] "Generic (PLEG): container finished" podID="1091f62d-2fa6-4b93-87ce-8c0fbcc23987" containerID="cc0ce848d587944d5646e6c4d3f0968fdc06fb139bd8f7158d1244e8f7e79bb0" exitCode=0 Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.308896 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1091f62d-2fa6-4b93-87ce-8c0fbcc23987","Type":"ContainerDied","Data":"cc0ce848d587944d5646e6c4d3f0968fdc06fb139bd8f7158d1244e8f7e79bb0"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.316570 4958 generic.go:334] "Generic (PLEG): container finished" podID="1f688514-2336-4067-bb66-8bc690a2da30" containerID="6a833e161a2028e949de920ad2fcf2e95b29dc9406b67da751b879f857c083e8" exitCode=0 Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.316590 4958 generic.go:334] "Generic (PLEG): container finished" podID="1f688514-2336-4067-bb66-8bc690a2da30" 
containerID="20c3267044d4ea08ed1d508a80b0aeb04cc872da56823319bbdec02266bd4cf3" exitCode=0 Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.316622 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67799bdf69-qlb9s" event={"ID":"1f688514-2336-4067-bb66-8bc690a2da30","Type":"ContainerDied","Data":"6a833e161a2028e949de920ad2fcf2e95b29dc9406b67da751b879f857c083e8"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.316638 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67799bdf69-qlb9s" event={"ID":"1f688514-2336-4067-bb66-8bc690a2da30","Type":"ContainerDied","Data":"20c3267044d4ea08ed1d508a80b0aeb04cc872da56823319bbdec02266bd4cf3"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.327261 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapic226-account-delete-g5pmv" event={"ID":"c32b7167-96c2-4cf2-b330-54562c181940","Type":"ContainerStarted","Data":"b388821f7277929e58f425f76d06c1e0ec86c6cd27c0b568206c440e70db5d5e"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.350703 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" event={"ID":"b6c32c16-960f-4a65-abd5-f435d16932f0","Type":"ContainerStarted","Data":"896ffc50d5b239aeba18612320b883f4cf61f5d7ccddea0fb20903a8aa88e83a"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.350746 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" event={"ID":"b6c32c16-960f-4a65-abd5-f435d16932f0","Type":"ContainerStarted","Data":"c2751b211bda65475b8e5fe09ad88f045d9a2fe01bcc25f62e65deda39e2d2e1"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.354533 4958 generic.go:334] "Generic (PLEG): container finished" podID="c106a441-1a40-4bee-9317-b6957f8a6c94" containerID="455c494d9988f4d2b518099bd3cb05171a0ceaffd36f37f6d69ded313d75a5e2" exitCode=0 Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 
06:56:41.354621 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c106a441-1a40-4bee-9317-b6957f8a6c94","Type":"ContainerDied","Data":"455c494d9988f4d2b518099bd3cb05171a0ceaffd36f37f6d69ded313d75a5e2"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.360177 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novaapic226-account-delete-g5pmv" podStartSLOduration=3.360161611 podStartE2EDuration="3.360161611s" podCreationTimestamp="2025-10-08 06:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:56:41.358557798 +0000 UTC m=+1344.488250399" watchObservedRunningTime="2025-10-08 06:56:41.360161611 +0000 UTC m=+1344.489854212" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.360505 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6dbff5b58b-dsj98" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api-log" containerID="cri-o://6b1cfb580b4d593c51a5e15d8aca7dc77cdda3cad24f2f2cbdff273566f0f041" gracePeriod=30 Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.360746 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.360779 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.361051 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6dbff5b58b-dsj98" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api" containerID="cri-o://b7792d9a64e0c787934db1685bfa23a56a4744bb1fbed4013eb67dffa23a6bb3" gracePeriod=30 Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.368557 4958 generic.go:334] 
"Generic (PLEG): container finished" podID="94872b33-329c-42ca-9d90-09c6950dfd83" containerID="8706ba2579f385903f00ba5a57a567212e79efde0f18b7f2408885add9cf0427" exitCode=143 Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.368607 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cdf478498-ptdth" event={"ID":"94872b33-329c-42ca-9d90-09c6950dfd83","Type":"ContainerDied","Data":"8706ba2579f385903f00ba5a57a567212e79efde0f18b7f2408885add9cf0427"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.375535 4958 generic.go:334] "Generic (PLEG): container finished" podID="6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" containerID="1350e4ea881e68f6fe98606268360098bdeb78fb0b42909468424428303eff5c" exitCode=143 Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.375577 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" event={"ID":"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac","Type":"ContainerDied","Data":"1350e4ea881e68f6fe98606268360098bdeb78fb0b42909468424428303eff5c"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.381815 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6dbff5b58b-dsj98" podStartSLOduration=4.381801135 podStartE2EDuration="4.381801135s" podCreationTimestamp="2025-10-08 06:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:56:41.375910036 +0000 UTC m=+1344.505602637" watchObservedRunningTime="2025-10-08 06:56:41.381801135 +0000 UTC m=+1344.511493736" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.387475 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementcbd1-account-delete-sbqzp" event={"ID":"fab49352-e790-47df-a1c8-f1b74e2a0134","Type":"ContainerStarted","Data":"1c6f043726b5ec0a1c639da4ce86e168990e5f1fa1930c221f8afcba9914825b"} Oct 08 06:56:41 crc 
kubenswrapper[4958]: I1008 06:56:41.387520 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementcbd1-account-delete-sbqzp" event={"ID":"fab49352-e790-47df-a1c8-f1b74e2a0134","Type":"ContainerStarted","Data":"6c05d49e35beef9c45ccd265aeed06f95bc4f8c709f289f3137a4ef1bc97b228"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.397289 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell1fc4a-account-delete-nlcgq" event={"ID":"655026ad-69aa-4867-8fc8-165d6e801ad0","Type":"ContainerStarted","Data":"5311d23a4114bdcb099db82ad29fb4013e33eb21b6af49e31009bfd7e37e7c32"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.418780 4958 generic.go:334] "Generic (PLEG): container finished" podID="7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" containerID="87bccf6b8df0d8973019540e9aeef4bc4a3f9ababf913b986323b73b1f2f4a3b" exitCode=143 Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.418844 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cdccb56ff-v4lgm" event={"ID":"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef","Type":"ContainerDied","Data":"87bccf6b8df0d8973019540e9aeef4bc4a3f9ababf913b986323b73b1f2f4a3b"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.439198 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6696f545c5-2j7vj" event={"ID":"7c32a9e6-d21d-422f-914d-c3e9de16a0d5","Type":"ContainerStarted","Data":"b6ff834994cbc43b81674b98260489f9066c39711a5a82c7bd7438c8b37947ac"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.439245 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6696f545c5-2j7vj" event={"ID":"7c32a9e6-d21d-422f-914d-c3e9de16a0d5","Type":"ContainerStarted","Data":"9559dab0d3aabca4257ee3c9f1ea0fa80a40d4c8646a55e6458c39e238f7651c"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.439246 4958 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-worker-6696f545c5-2j7vj" podUID="7c32a9e6-d21d-422f-914d-c3e9de16a0d5" containerName="barbican-worker-log" containerID="cri-o://9559dab0d3aabca4257ee3c9f1ea0fa80a40d4c8646a55e6458c39e238f7651c" gracePeriod=30 Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.439328 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6696f545c5-2j7vj" podUID="7c32a9e6-d21d-422f-914d-c3e9de16a0d5" containerName="barbican-worker" containerID="cri-o://b6ff834994cbc43b81674b98260489f9066c39711a5a82c7bd7438c8b37947ac" gracePeriod=30 Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.453806 4958 generic.go:334] "Generic (PLEG): container finished" podID="b562d3e4-572e-48d5-9257-4927ef68e988" containerID="b5e769123b59e453ebbad05152ae521b4e2607deec0125453ab24a8653ed80fe" exitCode=0 Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.453871 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican0295-account-delete-dzz74" event={"ID":"b562d3e4-572e-48d5-9257-4927ef68e988","Type":"ContainerDied","Data":"b5e769123b59e453ebbad05152ae521b4e2607deec0125453ab24a8653ed80fe"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.453894 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican0295-account-delete-dzz74" event={"ID":"b562d3e4-572e-48d5-9257-4927ef68e988","Type":"ContainerStarted","Data":"c7b235602138d292f67c3a263ba49d70aa855922ef5ff4426599539893cd5017"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.455450 4958 generic.go:334] "Generic (PLEG): container finished" podID="50505fb3-8aa5-43de-a8f1-617501e46822" containerID="ff34419c4d7cec82e16f942fdabeade91c2dcb87c8267b8fa131d040fd42cd0f" exitCode=0 Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.455476 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cindere33b-account-delete-6brfz" 
event={"ID":"50505fb3-8aa5-43de-a8f1-617501e46822","Type":"ContainerDied","Data":"ff34419c4d7cec82e16f942fdabeade91c2dcb87c8267b8fa131d040fd42cd0f"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.455488 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cindere33b-account-delete-6brfz" event={"ID":"50505fb3-8aa5-43de-a8f1-617501e46822","Type":"ContainerStarted","Data":"e153cb711538309f8553fa4288b615dfb5fa76aed500ea71425bc83c911aac62"} Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.465936 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6696f545c5-2j7vj" podStartSLOduration=4.465921054 podStartE2EDuration="4.465921054s" podCreationTimestamp="2025-10-08 06:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:56:41.46391207 +0000 UTC m=+1344.593604671" watchObservedRunningTime="2025-10-08 06:56:41.465921054 +0000 UTC m=+1344.595613655" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.596005 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c4975c7-bc87-45a0-8bd9-878d3610ecc4" path="/var/lib/kubelet/pods/1c4975c7-bc87-45a0-8bd9-878d3610ecc4/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.596776 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25549551-2ec2-4cf2-800a-b3da40ce78f0" path="/var/lib/kubelet/pods/25549551-2ec2-4cf2-800a-b3da40ce78f0/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.597233 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c" path="/var/lib/kubelet/pods/2d1c446a-1e89-4c05-b2c7-3d3d1707fb6c/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.598019 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f5576e3-fbd8-493d-b8b5-14417b04caf4" 
path="/var/lib/kubelet/pods/2f5576e3-fbd8-493d-b8b5-14417b04caf4/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.598495 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b8250e7-12f5-45b5-a5c3-2fbc770df268" path="/var/lib/kubelet/pods/4b8250e7-12f5-45b5-a5c3-2fbc770df268/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.599019 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b9c2255-61f9-4319-93b1-138600df6985" path="/var/lib/kubelet/pods/4b9c2255-61f9-4319-93b1-138600df6985/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.599937 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5971bc9c-45ee-4ccb-aef5-290f51ac13ba" path="/var/lib/kubelet/pods/5971bc9c-45ee-4ccb-aef5-290f51ac13ba/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.600651 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6269a952-e10d-442f-8d9f-135e16244e83" path="/var/lib/kubelet/pods/6269a952-e10d-442f-8d9f-135e16244e83/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.603500 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5" path="/var/lib/kubelet/pods/7c696ce6-e4e0-4d37-aa1a-d49c8477cfc5/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.604173 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7db6755b-4975-45c8-aab5-594b40778231" path="/var/lib/kubelet/pods/7db6755b-4975-45c8-aab5-594b40778231/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.604723 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c7170a-4267-4318-9772-c68efe1cb7e4" path="/var/lib/kubelet/pods/a3c7170a-4267-4318-9772-c68efe1cb7e4/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.605239 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a669302e-8748-424e-9be1-66e595fb1dd1" 
path="/var/lib/kubelet/pods/a669302e-8748-424e-9be1-66e595fb1dd1/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.610570 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3e3061c-ca6e-43c6-ba1d-2520f28142c6" path="/var/lib/kubelet/pods/b3e3061c-ca6e-43c6-ba1d-2520f28142c6/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.611149 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7" path="/var/lib/kubelet/pods/c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.611622 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfae700b-a979-49b7-839d-bd3aa3363423" path="/var/lib/kubelet/pods/cfae700b-a979-49b7-839d-bd3aa3363423/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.612639 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd836d54-e3a7-45f2-a3ec-21b73cc38496" path="/var/lib/kubelet/pods/dd836d54-e3a7-45f2-a3ec-21b73cc38496/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.613433 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f285e309-c3e6-42ce-9f95-8302079cfd71" path="/var/lib/kubelet/pods/f285e309-c3e6-42ce-9f95-8302079cfd71/volumes" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.770426 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.787796 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.827180 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.871912 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placementcbd1-account-delete-sbqzp" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.915375 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-config-data-generated\") pod \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.915422 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hccq\" (UniqueName: \"kubernetes.io/projected/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-kube-api-access-6hccq\") pod \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.915445 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-combined-ca-bundle\") pod \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.915473 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4m2c\" (UniqueName: \"kubernetes.io/projected/c106a441-1a40-4bee-9317-b6957f8a6c94-kube-api-access-l4m2c\") pod \"c106a441-1a40-4bee-9317-b6957f8a6c94\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.915703 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-config-data\") pod 
\"1f688514-2336-4067-bb66-8bc690a2da30\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.915762 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-nova-novncproxy-tls-certs\") pod \"c106a441-1a40-4bee-9317-b6957f8a6c94\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.915796 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-vencrypt-tls-certs\") pod \"c106a441-1a40-4bee-9317-b6957f8a6c94\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.915820 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-secrets\") pod \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.915851 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1f688514-2336-4067-bb66-8bc690a2da30-etc-swift\") pod \"1f688514-2336-4067-bb66-8bc690a2da30\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.915873 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-config-data\") pod \"c106a441-1a40-4bee-9317-b6957f8a6c94\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.915893 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-internal-tls-certs\") pod \"1f688514-2336-4067-bb66-8bc690a2da30\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.915926 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-kolla-config\") pod \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.915984 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-config-data-default\") pod \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.916008 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f688514-2336-4067-bb66-8bc690a2da30-log-httpd\") pod \"1f688514-2336-4067-bb66-8bc690a2da30\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.916043 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-public-tls-certs\") pod \"1f688514-2336-4067-bb66-8bc690a2da30\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.916067 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-operator-scripts\") pod \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " Oct 08 06:56:41 
crc kubenswrapper[4958]: I1008 06:56:41.916086 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-combined-ca-bundle\") pod \"1f688514-2336-4067-bb66-8bc690a2da30\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.916107 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgsc8\" (UniqueName: \"kubernetes.io/projected/1f688514-2336-4067-bb66-8bc690a2da30-kube-api-access-cgsc8\") pod \"1f688514-2336-4067-bb66-8bc690a2da30\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.916136 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-galera-tls-certs\") pod \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.916176 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-combined-ca-bundle\") pod \"c106a441-1a40-4bee-9317-b6957f8a6c94\" (UID: \"c106a441-1a40-4bee-9317-b6957f8a6c94\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.916194 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\" (UID: \"1091f62d-2fa6-4b93-87ce-8c0fbcc23987\") " Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.933814 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-kube-api-access-6hccq" (OuterVolumeSpecName: 
"kube-api-access-6hccq") pod "1091f62d-2fa6-4b93-87ce-8c0fbcc23987" (UID: "1091f62d-2fa6-4b93-87ce-8c0fbcc23987"). InnerVolumeSpecName "kube-api-access-6hccq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.935013 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1091f62d-2fa6-4b93-87ce-8c0fbcc23987" (UID: "1091f62d-2fa6-4b93-87ce-8c0fbcc23987"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.935934 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "1091f62d-2fa6-4b93-87ce-8c0fbcc23987" (UID: "1091f62d-2fa6-4b93-87ce-8c0fbcc23987"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.936502 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c106a441-1a40-4bee-9317-b6957f8a6c94-kube-api-access-l4m2c" (OuterVolumeSpecName: "kube-api-access-l4m2c") pod "c106a441-1a40-4bee-9317-b6957f8a6c94" (UID: "c106a441-1a40-4bee-9317-b6957f8a6c94"). InnerVolumeSpecName "kube-api-access-l4m2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.936908 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "1091f62d-2fa6-4b93-87ce-8c0fbcc23987" (UID: "1091f62d-2fa6-4b93-87ce-8c0fbcc23987"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.938662 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f688514-2336-4067-bb66-8bc690a2da30-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1f688514-2336-4067-bb66-8bc690a2da30" (UID: "1f688514-2336-4067-bb66-8bc690a2da30"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.944143 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "1091f62d-2fa6-4b93-87ce-8c0fbcc23987" (UID: "1091f62d-2fa6-4b93-87ce-8c0fbcc23987"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.966095 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-secrets" (OuterVolumeSpecName: "secrets") pod "1091f62d-2fa6-4b93-87ce-8c0fbcc23987" (UID: "1091f62d-2fa6-4b93-87ce-8c0fbcc23987"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:41 crc kubenswrapper[4958]: I1008 06:56:41.997606 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f688514-2336-4067-bb66-8bc690a2da30-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1f688514-2336-4067-bb66-8bc690a2da30" (UID: "1f688514-2336-4067-bb66-8bc690a2da30"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.018013 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llfqz\" (UniqueName: \"kubernetes.io/projected/fab49352-e790-47df-a1c8-f1b74e2a0134-kube-api-access-llfqz\") pod \"fab49352-e790-47df-a1c8-f1b74e2a0134\" (UID: \"fab49352-e790-47df-a1c8-f1b74e2a0134\") " Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.020646 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f688514-2336-4067-bb66-8bc690a2da30-run-httpd\") pod \"1f688514-2336-4067-bb66-8bc690a2da30\" (UID: \"1f688514-2336-4067-bb66-8bc690a2da30\") " Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.021209 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1091f62d-2fa6-4b93-87ce-8c0fbcc23987" (UID: "1091f62d-2fa6-4b93-87ce-8c0fbcc23987"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.021482 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-config-data-default\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.021496 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f688514-2336-4067-bb66-8bc690a2da30-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.021507 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-operator-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.021516 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-config-data-generated\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.021527 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.021537 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hccq\" (UniqueName: \"kubernetes.io/projected/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-kube-api-access-6hccq\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.021546 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4m2c\" (UniqueName: \"kubernetes.io/projected/c106a441-1a40-4bee-9317-b6957f8a6c94-kube-api-access-l4m2c\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc 
kubenswrapper[4958]: I1008 06:56:42.021553 4958 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-secrets\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.021562 4958 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1f688514-2336-4067-bb66-8bc690a2da30-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.021569 4958 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-kolla-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.022422 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "mysql-db") pod "1091f62d-2fa6-4b93-87ce-8c0fbcc23987" (UID: "1091f62d-2fa6-4b93-87ce-8c0fbcc23987"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.032033 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f688514-2336-4067-bb66-8bc690a2da30-kube-api-access-cgsc8" (OuterVolumeSpecName: "kube-api-access-cgsc8") pod "1f688514-2336-4067-bb66-8bc690a2da30" (UID: "1f688514-2336-4067-bb66-8bc690a2da30"). InnerVolumeSpecName "kube-api-access-cgsc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.050815 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab49352-e790-47df-a1c8-f1b74e2a0134-kube-api-access-llfqz" (OuterVolumeSpecName: "kube-api-access-llfqz") pod "fab49352-e790-47df-a1c8-f1b74e2a0134" (UID: "fab49352-e790-47df-a1c8-f1b74e2a0134"). 
InnerVolumeSpecName "kube-api-access-llfqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.050911 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f688514-2336-4067-bb66-8bc690a2da30-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1f688514-2336-4067-bb66-8bc690a2da30" (UID: "1f688514-2336-4067-bb66-8bc690a2da30"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.104067 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell1fc4a-account-delete-nlcgq" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.130525 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgsc8\" (UniqueName: \"kubernetes.io/projected/1f688514-2336-4067-bb66-8bc690a2da30-kube-api-access-cgsc8\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.130558 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.130568 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llfqz\" (UniqueName: \"kubernetes.io/projected/fab49352-e790-47df-a1c8-f1b74e2a0134-kube-api-access-llfqz\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.130577 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f688514-2336-4067-bb66-8bc690a2da30-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.137061 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c106a441-1a40-4bee-9317-b6957f8a6c94" (UID: "c106a441-1a40-4bee-9317-b6957f8a6c94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.155916 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cindere33b-account-delete-6brfz" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.156307 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-config-data" (OuterVolumeSpecName: "config-data") pod "c106a441-1a40-4bee-9317-b6957f8a6c94" (UID: "c106a441-1a40-4bee-9317-b6957f8a6c94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.167278 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.182361 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1f688514-2336-4067-bb66-8bc690a2da30" (UID: "1f688514-2336-4067-bb66-8bc690a2da30"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.214111 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "1091f62d-2fa6-4b93-87ce-8c0fbcc23987" (UID: "1091f62d-2fa6-4b93-87ce-8c0fbcc23987"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.218030 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1f688514-2336-4067-bb66-8bc690a2da30" (UID: "1f688514-2336-4067-bb66-8bc690a2da30"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.231994 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.236120 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f688514-2336-4067-bb66-8bc690a2da30" (UID: "1f688514-2336-4067-bb66-8bc690a2da30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.236882 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dk22\" (UniqueName: \"kubernetes.io/projected/655026ad-69aa-4867-8fc8-165d6e801ad0-kube-api-access-6dk22\") pod \"655026ad-69aa-4867-8fc8-165d6e801ad0\" (UID: \"655026ad-69aa-4867-8fc8-165d6e801ad0\") " Oct 08 06:56:42 crc kubenswrapper[4958]: E1008 06:56:42.237409 4958 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 08 06:56:42 crc kubenswrapper[4958]: E1008 06:56:42.237497 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data podName:442b1534-27bc-4d6d-be46-1ea5689c290f nodeName:}" failed. 
No retries permitted until 2025-10-08 06:56:46.237481449 +0000 UTC m=+1349.367174040 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data") pod "rabbitmq-cell1-server-0" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f") : configmap "rabbitmq-cell1-config-data" not found Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.237890 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.237923 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.238038 4958 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1091f62d-2fa6-4b93-87ce-8c0fbcc23987-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.238119 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.238129 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.238138 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc 
kubenswrapper[4958]: I1008 06:56:42.238147 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.238446 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "c106a441-1a40-4bee-9317-b6957f8a6c94" (UID: "c106a441-1a40-4bee-9317-b6957f8a6c94"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.241893 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/655026ad-69aa-4867-8fc8-165d6e801ad0-kube-api-access-6dk22" (OuterVolumeSpecName: "kube-api-access-6dk22") pod "655026ad-69aa-4867-8fc8-165d6e801ad0" (UID: "655026ad-69aa-4867-8fc8-165d6e801ad0"). InnerVolumeSpecName "kube-api-access-6dk22". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.252101 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "c106a441-1a40-4bee-9317-b6957f8a6c94" (UID: "c106a441-1a40-4bee-9317-b6957f8a6c94"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.253125 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-config-data" (OuterVolumeSpecName: "config-data") pod "1f688514-2336-4067-bb66-8bc690a2da30" (UID: "1f688514-2336-4067-bb66-8bc690a2da30"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: E1008 06:56:42.293630 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Oct 08 06:56:42 crc kubenswrapper[4958]: E1008 06:56:42.297296 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Oct 08 06:56:42 crc kubenswrapper[4958]: E1008 06:56:42.298349 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Oct 08 06:56:42 crc kubenswrapper[4958]: E1008 06:56:42.298378 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="fe1279cb-5369-4347-9fc9-d598103536a9" containerName="ovn-northd" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.339051 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-config-data\") pod \"c22a5d6b-9ca7-4f30-b997-e28ae554a8be\" (UID: \"c22a5d6b-9ca7-4f30-b997-e28ae554a8be\") " Oct 08 
06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.339176 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84wnk\" (UniqueName: \"kubernetes.io/projected/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-kube-api-access-84wnk\") pod \"c22a5d6b-9ca7-4f30-b997-e28ae554a8be\" (UID: \"c22a5d6b-9ca7-4f30-b997-e28ae554a8be\") " Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.339318 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jgk4\" (UniqueName: \"kubernetes.io/projected/50505fb3-8aa5-43de-a8f1-617501e46822-kube-api-access-2jgk4\") pod \"50505fb3-8aa5-43de-a8f1-617501e46822\" (UID: \"50505fb3-8aa5-43de-a8f1-617501e46822\") " Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.339389 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-combined-ca-bundle\") pod \"c22a5d6b-9ca7-4f30-b997-e28ae554a8be\" (UID: \"c22a5d6b-9ca7-4f30-b997-e28ae554a8be\") " Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.339846 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f688514-2336-4067-bb66-8bc690a2da30-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.339863 4958 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.339878 4958 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c106a441-1a40-4bee-9317-b6957f8a6c94-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.339891 4958 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dk22\" (UniqueName: \"kubernetes.io/projected/655026ad-69aa-4867-8fc8-165d6e801ad0-kube-api-access-6dk22\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.345384 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-kube-api-access-84wnk" (OuterVolumeSpecName: "kube-api-access-84wnk") pod "c22a5d6b-9ca7-4f30-b997-e28ae554a8be" (UID: "c22a5d6b-9ca7-4f30-b997-e28ae554a8be"). InnerVolumeSpecName "kube-api-access-84wnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.359447 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50505fb3-8aa5-43de-a8f1-617501e46822-kube-api-access-2jgk4" (OuterVolumeSpecName: "kube-api-access-2jgk4") pod "50505fb3-8aa5-43de-a8f1-617501e46822" (UID: "50505fb3-8aa5-43de-a8f1-617501e46822"). InnerVolumeSpecName "kube-api-access-2jgk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.369544 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c22a5d6b-9ca7-4f30-b997-e28ae554a8be" (UID: "c22a5d6b-9ca7-4f30-b997-e28ae554a8be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.397151 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-config-data" (OuterVolumeSpecName: "config-data") pod "c22a5d6b-9ca7-4f30-b997-e28ae554a8be" (UID: "c22a5d6b-9ca7-4f30-b997-e28ae554a8be"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.442016 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.442216 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.442293 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84wnk\" (UniqueName: \"kubernetes.io/projected/c22a5d6b-9ca7-4f30-b997-e28ae554a8be-kube-api-access-84wnk\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.442369 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jgk4\" (UniqueName: \"kubernetes.io/projected/50505fb3-8aa5-43de-a8f1-617501e46822-kube-api-access-2jgk4\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.486906 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" event={"ID":"b6c32c16-960f-4a65-abd5-f435d16932f0","Type":"ContainerStarted","Data":"27bbafddf30930dff927d39b61c70f7c9489b8e2196bc77df47e567762b21299"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.487122 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" podUID="b6c32c16-960f-4a65-abd5-f435d16932f0" containerName="barbican-keystone-listener-log" containerID="cri-o://896ffc50d5b239aeba18612320b883f4cf61f5d7ccddea0fb20903a8aa88e83a" gracePeriod=30 Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.487462 4958 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" podUID="b6c32c16-960f-4a65-abd5-f435d16932f0" containerName="barbican-keystone-listener" containerID="cri-o://27bbafddf30930dff927d39b61c70f7c9489b8e2196bc77df47e567762b21299" gracePeriod=30 Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.496333 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1091f62d-2fa6-4b93-87ce-8c0fbcc23987","Type":"ContainerDied","Data":"4976781f0e0a125509b0d9ce8468e8baf512290701a9a3006a90c2b1b6b2891d"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.496373 4958 scope.go:117] "RemoveContainer" containerID="cc0ce848d587944d5646e6c4d3f0968fdc06fb139bd8f7158d1244e8f7e79bb0" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.496499 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.509365 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-67799bdf69-qlb9s" event={"ID":"1f688514-2336-4067-bb66-8bc690a2da30","Type":"ContainerDied","Data":"5941f782c9b06eb97b840f6a5808ec5a17227c3ae5cdac4fd17a32653d9882ab"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.509467 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-67799bdf69-qlb9s" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.526343 4958 generic.go:334] "Generic (PLEG): container finished" podID="fab49352-e790-47df-a1c8-f1b74e2a0134" containerID="1c6f043726b5ec0a1c639da4ce86e168990e5f1fa1930c221f8afcba9914825b" exitCode=0 Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.526409 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementcbd1-account-delete-sbqzp" event={"ID":"fab49352-e790-47df-a1c8-f1b74e2a0134","Type":"ContainerDied","Data":"1c6f043726b5ec0a1c639da4ce86e168990e5f1fa1930c221f8afcba9914825b"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.526431 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementcbd1-account-delete-sbqzp" event={"ID":"fab49352-e790-47df-a1c8-f1b74e2a0134","Type":"ContainerDied","Data":"6c05d49e35beef9c45ccd265aeed06f95bc4f8c709f289f3137a4ef1bc97b228"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.526488 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placementcbd1-account-delete-sbqzp" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.538453 4958 generic.go:334] "Generic (PLEG): container finished" podID="c22a5d6b-9ca7-4f30-b997-e28ae554a8be" containerID="749526d14f06e103c443cfaac61ecb5d09b9fae43de27d241f3f8c1e61eb626a" exitCode=0 Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.538475 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.538547 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c22a5d6b-9ca7-4f30-b997-e28ae554a8be","Type":"ContainerDied","Data":"749526d14f06e103c443cfaac61ecb5d09b9fae43de27d241f3f8c1e61eb626a"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.538571 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c22a5d6b-9ca7-4f30-b997-e28ae554a8be","Type":"ContainerDied","Data":"379c8c90f59282b0e00355d6ba42d54a503edf63493282e391eea4dfbf89fa32"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.540546 4958 generic.go:334] "Generic (PLEG): container finished" podID="7c32a9e6-d21d-422f-914d-c3e9de16a0d5" containerID="9559dab0d3aabca4257ee3c9f1ea0fa80a40d4c8646a55e6458c39e238f7651c" exitCode=143 Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.540778 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6696f545c5-2j7vj" event={"ID":"7c32a9e6-d21d-422f-914d-c3e9de16a0d5","Type":"ContainerDied","Data":"9559dab0d3aabca4257ee3c9f1ea0fa80a40d4c8646a55e6458c39e238f7651c"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.545409 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cindere33b-account-delete-6brfz" event={"ID":"50505fb3-8aa5-43de-a8f1-617501e46822","Type":"ContainerDied","Data":"e153cb711538309f8553fa4288b615dfb5fa76aed500ea71425bc83c911aac62"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.545632 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cindere33b-account-delete-6brfz" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.549766 4958 generic.go:334] "Generic (PLEG): container finished" podID="b2ac8efb-5e1d-4b4c-beba-7d287a699044" containerID="ab70053ebb2c4509de5852c1600f0da9043c9b356dde235137ef97c1e51aebf6" exitCode=0 Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.549823 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2ac8efb-5e1d-4b4c-beba-7d287a699044","Type":"ContainerDied","Data":"ab70053ebb2c4509de5852c1600f0da9043c9b356dde235137ef97c1e51aebf6"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.552736 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c106a441-1a40-4bee-9317-b6957f8a6c94","Type":"ContainerDied","Data":"4dd51308e54947861a5e8a3f0c8a3e77502befa6f79b092e217aa68c614faf9b"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.552922 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.559645 4958 generic.go:334] "Generic (PLEG): container finished" podID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerID="6b1cfb580b4d593c51a5e15d8aca7dc77cdda3cad24f2f2cbdff273566f0f041" exitCode=143 Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.559733 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dbff5b58b-dsj98" event={"ID":"1792ed77-ae79-44a5-80a3-7a67a8031d75","Type":"ContainerStarted","Data":"b7792d9a64e0c787934db1685bfa23a56a4744bb1fbed4013eb67dffa23a6bb3"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.559757 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dbff5b58b-dsj98" event={"ID":"1792ed77-ae79-44a5-80a3-7a67a8031d75","Type":"ContainerDied","Data":"6b1cfb580b4d593c51a5e15d8aca7dc77cdda3cad24f2f2cbdff273566f0f041"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.567874 4958 generic.go:334] "Generic (PLEG): container finished" podID="c32b7167-96c2-4cf2-b330-54562c181940" containerID="ec884386fa6ec452d56d5f07cfa33c6ee7fb4c007780b98c88a134f47c7007f6" exitCode=0 Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.568052 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapic226-account-delete-g5pmv" event={"ID":"c32b7167-96c2-4cf2-b330-54562c181940","Type":"ContainerDied","Data":"ec884386fa6ec452d56d5f07cfa33c6ee7fb4c007780b98c88a134f47c7007f6"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.569563 4958 generic.go:334] "Generic (PLEG): container finished" podID="655026ad-69aa-4867-8fc8-165d6e801ad0" containerID="fe6d544902f7ec4ed1b5984714c6ba552c031c089d1c278193d1e555aed7c909" exitCode=1 Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.569828 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell1fc4a-account-delete-nlcgq" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.570900 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell1fc4a-account-delete-nlcgq" event={"ID":"655026ad-69aa-4867-8fc8-165d6e801ad0","Type":"ContainerDied","Data":"5311d23a4114bdcb099db82ad29fb4013e33eb21b6af49e31009bfd7e37e7c32"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.570963 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell1fc4a-account-delete-nlcgq" event={"ID":"655026ad-69aa-4867-8fc8-165d6e801ad0","Type":"ContainerDied","Data":"fe6d544902f7ec4ed1b5984714c6ba552c031c089d1c278193d1e555aed7c909"} Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.608337 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" podStartSLOduration=5.608314404 podStartE2EDuration="5.608314404s" podCreationTimestamp="2025-10-08 06:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 06:56:42.510383432 +0000 UTC m=+1345.640076033" watchObservedRunningTime="2025-10-08 06:56:42.608314404 +0000 UTC m=+1345.738007005" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.663607 4958 scope.go:117] "RemoveContainer" containerID="0f9dbe6fbcf00bbedd4e86c961d6a046db71fce83d895c53d0612bdd24041e8b" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.721379 4958 scope.go:117] "RemoveContainer" containerID="6a833e161a2028e949de920ad2fcf2e95b29dc9406b67da751b879f857c083e8" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.744637 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placementcbd1-account-delete-sbqzp"] Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.766412 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placementcbd1-account-delete-sbqzp"] Oct 
08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.781183 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-67799bdf69-qlb9s"] Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.788647 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-67799bdf69-qlb9s"] Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.820991 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.828073 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.833339 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell1fc4a-account-delete-nlcgq"] Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.840670 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.845512 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell1fc4a-account-delete-nlcgq"] Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.851509 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-public-tls-certs\") pod \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.851569 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-scripts\") pod \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.851599 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-config-data-custom\") pod \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.851626 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v496q\" (UniqueName: \"kubernetes.io/projected/b2ac8efb-5e1d-4b4c-beba-7d287a699044-kube-api-access-v496q\") pod \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.851655 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-combined-ca-bundle\") pod \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.851672 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-config-data\") pod \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.851688 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2ac8efb-5e1d-4b4c-beba-7d287a699044-etc-machine-id\") pod \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.851709 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-internal-tls-certs\") pod \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " 
Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.851742 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2ac8efb-5e1d-4b4c-beba-7d287a699044-logs\") pod \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\" (UID: \"b2ac8efb-5e1d-4b4c-beba-7d287a699044\") " Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.852559 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ac8efb-5e1d-4b4c-beba-7d287a699044-logs" (OuterVolumeSpecName: "logs") pod "b2ac8efb-5e1d-4b4c-beba-7d287a699044" (UID: "b2ac8efb-5e1d-4b4c-beba-7d287a699044"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.853003 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2ac8efb-5e1d-4b4c-beba-7d287a699044-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b2ac8efb-5e1d-4b4c-beba-7d287a699044" (UID: "b2ac8efb-5e1d-4b4c-beba-7d287a699044"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.864021 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ac8efb-5e1d-4b4c-beba-7d287a699044-kube-api-access-v496q" (OuterVolumeSpecName: "kube-api-access-v496q") pod "b2ac8efb-5e1d-4b4c-beba-7d287a699044" (UID: "b2ac8efb-5e1d-4b4c-beba-7d287a699044"). InnerVolumeSpecName "kube-api-access-v496q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.870299 4958 scope.go:117] "RemoveContainer" containerID="20c3267044d4ea08ed1d508a80b0aeb04cc872da56823319bbdec02266bd4cf3" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.873271 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-scripts" (OuterVolumeSpecName: "scripts") pod "b2ac8efb-5e1d-4b4c-beba-7d287a699044" (UID: "b2ac8efb-5e1d-4b4c-beba-7d287a699044"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.874884 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b2ac8efb-5e1d-4b4c-beba-7d287a699044" (UID: "b2ac8efb-5e1d-4b4c-beba-7d287a699044"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.893850 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.953580 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2ac8efb-5e1d-4b4c-beba-7d287a699044-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.953607 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.953617 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.953625 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v496q\" (UniqueName: \"kubernetes.io/projected/b2ac8efb-5e1d-4b4c-beba-7d287a699044-kube-api-access-v496q\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.953634 4958 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2ac8efb-5e1d-4b4c-beba-7d287a699044-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.958245 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b2ac8efb-5e1d-4b4c-beba-7d287a699044" (UID: "b2ac8efb-5e1d-4b4c-beba-7d287a699044"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.977115 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.986579 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.989311 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.994199 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cindere33b-account-delete-6brfz"] Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.996028 4958 scope.go:117] "RemoveContainer" containerID="1c6f043726b5ec0a1c639da4ce86e168990e5f1fa1930c221f8afcba9914825b" Oct 08 06:56:42 crc kubenswrapper[4958]: I1008 06:56:42.997888 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cindere33b-account-delete-6brfz"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.009277 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8wzdh"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.011697 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-config-data" (OuterVolumeSpecName: "config-data") pod "b2ac8efb-5e1d-4b4c-beba-7d287a699044" (UID: "b2ac8efb-5e1d-4b4c-beba-7d287a699044"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.013583 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8wzdh"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.022938 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican0295-account-delete-dzz74"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.026088 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b2ac8efb-5e1d-4b4c-beba-7d287a699044" (UID: "b2ac8efb-5e1d-4b4c-beba-7d287a699044"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.026174 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2ac8efb-5e1d-4b4c-beba-7d287a699044" (UID: "b2ac8efb-5e1d-4b4c-beba-7d287a699044"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.029424 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0295-account-create-kssqx"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.032010 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0295-account-create-kssqx"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.055003 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.055031 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.055040 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.055074 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2ac8efb-5e1d-4b4c-beba-7d287a699044-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.062344 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="90fd75a5-0719-4f7b-9103-d76319815535" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": read tcp 10.217.0.2:46052->10.217.0.202:8775: read: connection reset by peer" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.062377 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" 
podUID="90fd75a5-0719-4f7b-9103-d76319815535" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": read tcp 10.217.0.2:46062->10.217.0.202:8775: read: connection reset by peer" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.171900 4958 scope.go:117] "RemoveContainer" containerID="1c6f043726b5ec0a1c639da4ce86e168990e5f1fa1930c221f8afcba9914825b" Oct 08 06:56:43 crc kubenswrapper[4958]: E1008 06:56:43.178181 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c6f043726b5ec0a1c639da4ce86e168990e5f1fa1930c221f8afcba9914825b\": container with ID starting with 1c6f043726b5ec0a1c639da4ce86e168990e5f1fa1930c221f8afcba9914825b not found: ID does not exist" containerID="1c6f043726b5ec0a1c639da4ce86e168990e5f1fa1930c221f8afcba9914825b" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.178225 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6f043726b5ec0a1c639da4ce86e168990e5f1fa1930c221f8afcba9914825b"} err="failed to get container status \"1c6f043726b5ec0a1c639da4ce86e168990e5f1fa1930c221f8afcba9914825b\": rpc error: code = NotFound desc = could not find container \"1c6f043726b5ec0a1c639da4ce86e168990e5f1fa1930c221f8afcba9914825b\": container with ID starting with 1c6f043726b5ec0a1c639da4ce86e168990e5f1fa1930c221f8afcba9914825b not found: ID does not exist" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.178248 4958 scope.go:117] "RemoveContainer" containerID="749526d14f06e103c443cfaac61ecb5d09b9fae43de27d241f3f8c1e61eb626a" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.240594 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-687454697b-jdsn4" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.252224 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican0295-account-delete-dzz74" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.272465 4958 scope.go:117] "RemoveContainer" containerID="749526d14f06e103c443cfaac61ecb5d09b9fae43de27d241f3f8c1e61eb626a" Oct 08 06:56:43 crc kubenswrapper[4958]: E1008 06:56:43.272910 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"749526d14f06e103c443cfaac61ecb5d09b9fae43de27d241f3f8c1e61eb626a\": container with ID starting with 749526d14f06e103c443cfaac61ecb5d09b9fae43de27d241f3f8c1e61eb626a not found: ID does not exist" containerID="749526d14f06e103c443cfaac61ecb5d09b9fae43de27d241f3f8c1e61eb626a" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.272990 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="442b1534-27bc-4d6d-be46-1ea5689c290f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.273017 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749526d14f06e103c443cfaac61ecb5d09b9fae43de27d241f3f8c1e61eb626a"} err="failed to get container status \"749526d14f06e103c443cfaac61ecb5d09b9fae43de27d241f3f8c1e61eb626a\": rpc error: code = NotFound desc = could not find container \"749526d14f06e103c443cfaac61ecb5d09b9fae43de27d241f3f8c1e61eb626a\": container with ID starting with 749526d14f06e103c443cfaac61ecb5d09b9fae43de27d241f3f8c1e61eb626a not found: ID does not exist" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.273044 4958 scope.go:117] "RemoveContainer" containerID="ff34419c4d7cec82e16f942fdabeade91c2dcb87c8267b8fa131d040fd42cd0f" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.311013 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-g4wrk"] Oct 08 06:56:43 crc 
kubenswrapper[4958]: I1008 06:56:43.322120 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-g4wrk"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.343982 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.344229 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="ceilometer-central-agent" containerID="cri-o://2cdd00dcc49f27e886ed7f8e467ce10332efd4b1b22f6c39eb2a00f35e73697a" gracePeriod=30 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.344454 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="proxy-httpd" containerID="cri-o://ba7cba23461cfa60d06e4353ca3b9cfa8431aae9505a7f4bb4c5c971360e0d5e" gracePeriod=30 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.344554 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="sg-core" containerID="cri-o://0af08bb335a7e945e0ddce748679a5557f4a3a541f9fc1355b64400fd3f89060" gracePeriod=30 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.344604 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="ceilometer-notification-agent" containerID="cri-o://68ce1c4c20caacc4b4bf8c9c201771f0dd5518aadab7a90b068859b08e2891e2" gracePeriod=30 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.382289 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-scripts\") pod \"70e48003-108c-4de3-be7e-81946556e25e\" (UID: 
\"70e48003-108c-4de3-be7e-81946556e25e\") " Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.382374 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-combined-ca-bundle\") pod \"70e48003-108c-4de3-be7e-81946556e25e\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.382462 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-internal-tls-certs\") pod \"70e48003-108c-4de3-be7e-81946556e25e\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.382549 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-config-data\") pod \"70e48003-108c-4de3-be7e-81946556e25e\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.382600 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-public-tls-certs\") pod \"70e48003-108c-4de3-be7e-81946556e25e\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.382654 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkmnx\" (UniqueName: \"kubernetes.io/projected/70e48003-108c-4de3-be7e-81946556e25e-kube-api-access-jkmnx\") pod \"70e48003-108c-4de3-be7e-81946556e25e\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.382732 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-h8pjd\" (UniqueName: \"kubernetes.io/projected/b562d3e4-572e-48d5-9257-4927ef68e988-kube-api-access-h8pjd\") pod \"b562d3e4-572e-48d5-9257-4927ef68e988\" (UID: \"b562d3e4-572e-48d5-9257-4927ef68e988\") " Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.382833 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e48003-108c-4de3-be7e-81946556e25e-logs\") pod \"70e48003-108c-4de3-be7e-81946556e25e\" (UID: \"70e48003-108c-4de3-be7e-81946556e25e\") " Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.383940 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e48003-108c-4de3-be7e-81946556e25e-logs" (OuterVolumeSpecName: "logs") pod "70e48003-108c-4de3-be7e-81946556e25e" (UID: "70e48003-108c-4de3-be7e-81946556e25e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.395578 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e48003-108c-4de3-be7e-81946556e25e-kube-api-access-jkmnx" (OuterVolumeSpecName: "kube-api-access-jkmnx") pod "70e48003-108c-4de3-be7e-81946556e25e" (UID: "70e48003-108c-4de3-be7e-81946556e25e"). InnerVolumeSpecName "kube-api-access-jkmnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.396089 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-scripts" (OuterVolumeSpecName: "scripts") pod "70e48003-108c-4de3-be7e-81946556e25e" (UID: "70e48003-108c-4de3-be7e-81946556e25e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.402962 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c226-account-create-92j8q"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.405637 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b562d3e4-572e-48d5-9257-4927ef68e988-kube-api-access-h8pjd" (OuterVolumeSpecName: "kube-api-access-h8pjd") pod "b562d3e4-572e-48d5-9257-4927ef68e988" (UID: "b562d3e4-572e-48d5-9257-4927ef68e988"). InnerVolumeSpecName "kube-api-access-h8pjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.415529 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapic226-account-delete-g5pmv"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.454024 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c226-account-create-92j8q"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.454085 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.454751 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4303db42-6842-4d26-bf89-79755d0db57d" containerName="kube-state-metrics" containerID="cri-o://cc400959116b34b9a987bb5b45ad7f715a7f6d889a0bb90cd853ee49a2f5a81c" gracePeriod=30 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.495121 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.495155 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkmnx\" (UniqueName: 
\"kubernetes.io/projected/70e48003-108c-4de3-be7e-81946556e25e-kube-api-access-jkmnx\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.495165 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8pjd\" (UniqueName: \"kubernetes.io/projected/b562d3e4-572e-48d5-9257-4927ef68e988-kube-api-access-h8pjd\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.495174 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e48003-108c-4de3-be7e-81946556e25e-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.506310 4958 scope.go:117] "RemoveContainer" containerID="455c494d9988f4d2b518099bd3cb05171a0ceaffd36f37f6d69ded313d75a5e2" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.563083 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.563297 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="312426b0-8fb6-48ad-ba99-79b87cfcac38" containerName="memcached" containerID="cri-o://77318adadf422409354e9c4908a8759e6e1b4aa095b2b301bb9919e15d6f3a59" gracePeriod=30 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.574618 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-szq25"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.624898 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70e48003-108c-4de3-be7e-81946556e25e" (UID: "70e48003-108c-4de3-be7e-81946556e25e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.657614 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-config-data" (OuterVolumeSpecName: "config-data") pod "70e48003-108c-4de3-be7e-81946556e25e" (UID: "70e48003-108c-4de3-be7e-81946556e25e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.657872 4958 generic.go:334] "Generic (PLEG): container finished" podID="94872b33-329c-42ca-9d90-09c6950dfd83" containerID="9c1c2c4508d314f052c9515beb0473192e5c3aa71f3ed2c51ea5f440955efe26" exitCode=0 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.659847 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican0295-account-delete-dzz74" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.661358 4958 generic.go:334] "Generic (PLEG): container finished" podID="4303db42-6842-4d26-bf89-79755d0db57d" containerID="cc400959116b34b9a987bb5b45ad7f715a7f6d889a0bb90cd853ee49a2f5a81c" exitCode=2 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.690451 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0698e470-ea79-4a02-8301-bf39a58d8901" path="/var/lib/kubelet/pods/0698e470-ea79-4a02-8301-bf39a58d8901/volumes" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.691475 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1091f62d-2fa6-4b93-87ce-8c0fbcc23987" path="/var/lib/kubelet/pods/1091f62d-2fa6-4b93-87ce-8c0fbcc23987/volumes" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.692606 4958 generic.go:334] "Generic (PLEG): container finished" podID="07431828-0ed3-42a8-9c9c-fdcdb98c854b" containerID="5a4d814284dc7005c70da55ad6372b1b9b0faa25b4caef46944a4c4b825edf42" exitCode=0 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 
06:56:43.695537 4958 generic.go:334] "Generic (PLEG): container finished" podID="7c32a9e6-d21d-422f-914d-c3e9de16a0d5" containerID="b6ff834994cbc43b81674b98260489f9066c39711a5a82c7bd7438c8b37947ac" exitCode=1 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.723039 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f688514-2336-4067-bb66-8bc690a2da30" path="/var/lib/kubelet/pods/1f688514-2336-4067-bb66-8bc690a2da30/volumes" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.724044 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50505fb3-8aa5-43de-a8f1-617501e46822" path="/var/lib/kubelet/pods/50505fb3-8aa5-43de-a8f1-617501e46822/volumes" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.724549 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="655026ad-69aa-4867-8fc8-165d6e801ad0" path="/var/lib/kubelet/pods/655026ad-69aa-4867-8fc8-165d6e801ad0/volumes" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.725113 4958 generic.go:334] "Generic (PLEG): container finished" podID="70e48003-108c-4de3-be7e-81946556e25e" containerID="5b4a59a5f7991c7aec4f2d6b2e9211ada14c55e3e910aa8c4dafe5d00668f2bc" exitCode=0 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.725220 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85509aa5-d56b-4bd9-bcd0-9570927a885d" path="/var/lib/kubelet/pods/85509aa5-d56b-4bd9-bcd0-9570927a885d/volumes" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.725309 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-687454697b-jdsn4" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.726375 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aac68cce-b443-4f57-89f7-6c7fee4fcd32" path="/var/lib/kubelet/pods/aac68cce-b443-4f57-89f7-6c7fee4fcd32/volumes" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.726931 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c106a441-1a40-4bee-9317-b6957f8a6c94" path="/var/lib/kubelet/pods/c106a441-1a40-4bee-9317-b6957f8a6c94/volumes" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.727461 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c22a5d6b-9ca7-4f30-b997-e28ae554a8be" path="/var/lib/kubelet/pods/c22a5d6b-9ca7-4f30-b997-e28ae554a8be/volumes" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.728703 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f01f5ddc-a7e9-40e8-b70f-107fa5130c44" path="/var/lib/kubelet/pods/f01f5ddc-a7e9-40e8-b70f-107fa5130c44/volumes" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.729300 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fab49352-e790-47df-a1c8-f1b74e2a0134" path="/var/lib/kubelet/pods/fab49352-e790-47df-a1c8-f1b74e2a0134/volumes" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.733717 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.733758 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.769974 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "70e48003-108c-4de3-be7e-81946556e25e" (UID: "70e48003-108c-4de3-be7e-81946556e25e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.766627 4958 generic.go:334] "Generic (PLEG): container finished" podID="90612def-876b-4ae6-88e6-7f3de02515e6" containerID="0af08bb335a7e945e0ddce748679a5557f4a3a541f9fc1355b64400fd3f89060" exitCode=2 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.784584 4958 generic.go:334] "Generic (PLEG): container finished" podID="b6c32c16-960f-4a65-abd5-f435d16932f0" containerID="896ffc50d5b239aeba18612320b883f4cf61f5d7ccddea0fb20903a8aa88e83a" exitCode=143 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.789369 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "70e48003-108c-4de3-be7e-81946556e25e" (UID: "70e48003-108c-4de3-be7e-81946556e25e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.802835 4958 generic.go:334] "Generic (PLEG): container finished" podID="7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" containerID="2d749d543c294b1a2dd8f01a3b77b14cba82efc73fc6efa7214dec5fb9278949" exitCode=0 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.805934 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.808495 4958 generic.go:334] "Generic (PLEG): container finished" podID="90fd75a5-0719-4f7b-9103-d76319815535" containerID="411c6d0db58b91660709b05a5f2cc376c27a480550a9418f37b91c7bf4306c89" exitCode=0 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.812013 4958 generic.go:334] "Generic (PLEG): container finished" podID="234cbd06-f8de-4d4f-a510-8dc7e5d9db93" containerID="39c3f98bf3ab5af00e017cc7b7b4cbeb1fef7cb815739f4da85c995009b67708" exitCode=0 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.815186 4958 generic.go:334] "Generic (PLEG): container finished" podID="6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" containerID="7226ee00569bafdcf12786ebdd5af9f03b77506f57b10ab6b33eedc12be1ca77" exitCode=0 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.819093 4958 generic.go:334] "Generic (PLEG): container finished" podID="65112aa3-67bb-47a4-bc56-241ce61eff7b" containerID="f3502b89ff3b2147ee78b29363381741d58de95e5a89ce1f3ff1cff5bce4545a" exitCode=0 Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.835633 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.835655 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e48003-108c-4de3-be7e-81946556e25e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.838963 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-szq25"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839001 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cdf478498-ptdth" 
event={"ID":"94872b33-329c-42ca-9d90-09c6950dfd83","Type":"ContainerDied","Data":"9c1c2c4508d314f052c9515beb0473192e5c3aa71f3ed2c51ea5f440955efe26"} Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839025 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-f72ls"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839055 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-f72ls"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839065 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839080 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican0295-account-delete-dzz74" event={"ID":"b562d3e4-572e-48d5-9257-4927ef68e988","Type":"ContainerDied","Data":"c7b235602138d292f67c3a263ba49d70aa855922ef5ff4426599539893cd5017"} Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839093 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7b235602138d292f67c3a263ba49d70aa855922ef5ff4426599539893cd5017" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839102 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-56b6f8956c-65c6t"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839130 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-5vd9f"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839142 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-5vd9f"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839154 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4303db42-6842-4d26-bf89-79755d0db57d","Type":"ContainerDied","Data":"cc400959116b34b9a987bb5b45ad7f715a7f6d889a0bb90cd853ee49a2f5a81c"} Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 
06:56:43.839168 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0e08-account-create-c2f62"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839178 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07431828-0ed3-42a8-9c9c-fdcdb98c854b","Type":"ContainerDied","Data":"5a4d814284dc7005c70da55ad6372b1b9b0faa25b4caef46944a4c4b825edf42"} Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839191 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6696f545c5-2j7vj" event={"ID":"7c32a9e6-d21d-422f-914d-c3e9de16a0d5","Type":"ContainerDied","Data":"b6ff834994cbc43b81674b98260489f9066c39711a5a82c7bd7438c8b37947ac"} Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839219 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0e08-account-create-c2f62"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839235 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-687454697b-jdsn4" event={"ID":"70e48003-108c-4de3-be7e-81946556e25e","Type":"ContainerDied","Data":"5b4a59a5f7991c7aec4f2d6b2e9211ada14c55e3e910aa8c4dafe5d00668f2bc"} Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839246 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-687454697b-jdsn4" event={"ID":"70e48003-108c-4de3-be7e-81946556e25e","Type":"ContainerDied","Data":"10568575f0b58fa5f34d60aeb42834af3e40b9b4a8a3c21387d8e4e4a712b1c9"} Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839258 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90612def-876b-4ae6-88e6-7f3de02515e6","Type":"ContainerDied","Data":"0af08bb335a7e945e0ddce748679a5557f4a3a541f9fc1355b64400fd3f89060"} Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839309 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" event={"ID":"b6c32c16-960f-4a65-abd5-f435d16932f0","Type":"ContainerDied","Data":"896ffc50d5b239aeba18612320b883f4cf61f5d7ccddea0fb20903a8aa88e83a"} Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839322 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cdccb56ff-v4lgm" event={"ID":"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef","Type":"ContainerDied","Data":"2d749d543c294b1a2dd8f01a3b77b14cba82efc73fc6efa7214dec5fb9278949"} Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839334 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2ac8efb-5e1d-4b4c-beba-7d287a699044","Type":"ContainerDied","Data":"4bcc3d48d5eaf51115d6c76633d0e9f1325d758b2beef62c6bf04e4aa468c0cc"} Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839345 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90fd75a5-0719-4f7b-9103-d76319815535","Type":"ContainerDied","Data":"411c6d0db58b91660709b05a5f2cc376c27a480550a9418f37b91c7bf4306c89"} Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839357 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"234cbd06-f8de-4d4f-a510-8dc7e5d9db93","Type":"ContainerDied","Data":"39c3f98bf3ab5af00e017cc7b7b4cbeb1fef7cb815739f4da85c995009b67708"} Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839385 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" event={"ID":"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac","Type":"ContainerDied","Data":"7226ee00569bafdcf12786ebdd5af9f03b77506f57b10ab6b33eedc12be1ca77"} Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839398 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"65112aa3-67bb-47a4-bc56-241ce61eff7b","Type":"ContainerDied","Data":"f3502b89ff3b2147ee78b29363381741d58de95e5a89ce1f3ff1cff5bce4545a"} Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.839620 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-56b6f8956c-65c6t" podUID="46f633ea-236e-46e7-a780-a9912bbd2c91" containerName="keystone-api" containerID="cri-o://86278530dcbe6b12af4202029dcd2705840a1d5866deacfbe958e071ac723024" gracePeriod=30 Oct 08 06:56:43 crc kubenswrapper[4958]: E1008 06:56:43.868265 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.876430 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 06:56:43 crc kubenswrapper[4958]: E1008 06:56:43.881172 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.885793 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.890498 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.891118 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 06:56:43 crc kubenswrapper[4958]: E1008 06:56:43.908135 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 06:56:43 crc kubenswrapper[4958]: E1008 06:56:43.908207 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="f196852b-bfdf-43dd-9579-3ecd8601e7bf" containerName="nova-cell1-conductor-conductor" Oct 08 06:56:43 crc kubenswrapper[4958]: I1008 06:56:43.937683 4958 scope.go:117] "RemoveContainer" containerID="fe6d544902f7ec4ed1b5984714c6ba552c031c089d1c278193d1e555aed7c909" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.016251 4958 scope.go:117] "RemoveContainer" containerID="fe6d544902f7ec4ed1b5984714c6ba552c031c089d1c278193d1e555aed7c909" Oct 08 06:56:44 crc kubenswrapper[4958]: E1008 06:56:44.017924 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe6d544902f7ec4ed1b5984714c6ba552c031c089d1c278193d1e555aed7c909\": container with ID starting with fe6d544902f7ec4ed1b5984714c6ba552c031c089d1c278193d1e555aed7c909 not found: ID does not exist" containerID="fe6d544902f7ec4ed1b5984714c6ba552c031c089d1c278193d1e555aed7c909" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.017977 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe6d544902f7ec4ed1b5984714c6ba552c031c089d1c278193d1e555aed7c909"} err="failed to get container status 
\"fe6d544902f7ec4ed1b5984714c6ba552c031c089d1c278193d1e555aed7c909\": rpc error: code = NotFound desc = could not find container \"fe6d544902f7ec4ed1b5984714c6ba552c031c089d1c278193d1e555aed7c909\": container with ID starting with fe6d544902f7ec4ed1b5984714c6ba552c031c089d1c278193d1e555aed7c909 not found: ID does not exist" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.018005 4958 scope.go:117] "RemoveContainer" containerID="5b4a59a5f7991c7aec4f2d6b2e9211ada14c55e3e910aa8c4dafe5d00668f2bc" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.029383 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.039785 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-combined-ca-bundle\") pod \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.039814 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.039848 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-logs\") pod \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.039889 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-scripts\") pod \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\" (UID: 
\"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.039910 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-httpd-run\") pod \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.039936 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll2cl\" (UniqueName: \"kubernetes.io/projected/07431828-0ed3-42a8-9c9c-fdcdb98c854b-kube-api-access-ll2cl\") pod \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.040024 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07431828-0ed3-42a8-9c9c-fdcdb98c854b-httpd-run\") pod \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.040040 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-internal-tls-certs\") pod \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.040093 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-public-tls-certs\") pod \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.040139 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-combined-ca-bundle\") pod \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.040166 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07431828-0ed3-42a8-9c9c-fdcdb98c854b-logs\") pod \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.040219 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqct4\" (UniqueName: \"kubernetes.io/projected/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-kube-api-access-zqct4\") pod \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.040234 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-config-data\") pod \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.040252 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\" (UID: \"234cbd06-f8de-4d4f-a510-8dc7e5d9db93\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.040268 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-scripts\") pod \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.040293 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-config-data\") pod \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\" (UID: \"07431828-0ed3-42a8-9c9c-fdcdb98c854b\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.041921 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-logs" (OuterVolumeSpecName: "logs") pod "234cbd06-f8de-4d4f-a510-8dc7e5d9db93" (UID: "234cbd06-f8de-4d4f-a510-8dc7e5d9db93"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.041933 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07431828-0ed3-42a8-9c9c-fdcdb98c854b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "07431828-0ed3-42a8-9c9c-fdcdb98c854b" (UID: "07431828-0ed3-42a8-9c9c-fdcdb98c854b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.044702 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "234cbd06-f8de-4d4f-a510-8dc7e5d9db93" (UID: "234cbd06-f8de-4d4f-a510-8dc7e5d9db93"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.047796 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07431828-0ed3-42a8-9c9c-fdcdb98c854b-logs" (OuterVolumeSpecName: "logs") pod "07431828-0ed3-42a8-9c9c-fdcdb98c854b" (UID: "07431828-0ed3-42a8-9c9c-fdcdb98c854b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.065026 4958 scope.go:117] "RemoveContainer" containerID="d326bcdc417c5ccf5500a143fe2cd2f36595d7ee749c8b67463e12eac441d722" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.070449 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.076595 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "234cbd06-f8de-4d4f-a510-8dc7e5d9db93" (UID: "234cbd06-f8de-4d4f-a510-8dc7e5d9db93"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.076662 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-kube-api-access-zqct4" (OuterVolumeSpecName: "kube-api-access-zqct4") pod "234cbd06-f8de-4d4f-a510-8dc7e5d9db93" (UID: "234cbd06-f8de-4d4f-a510-8dc7e5d9db93"). InnerVolumeSpecName "kube-api-access-zqct4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.076738 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-scripts" (OuterVolumeSpecName: "scripts") pod "07431828-0ed3-42a8-9c9c-fdcdb98c854b" (UID: "07431828-0ed3-42a8-9c9c-fdcdb98c854b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.076840 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-scripts" (OuterVolumeSpecName: "scripts") pod "234cbd06-f8de-4d4f-a510-8dc7e5d9db93" (UID: "234cbd06-f8de-4d4f-a510-8dc7e5d9db93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.076938 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07431828-0ed3-42a8-9c9c-fdcdb98c854b-kube-api-access-ll2cl" (OuterVolumeSpecName: "kube-api-access-ll2cl") pod "07431828-0ed3-42a8-9c9c-fdcdb98c854b" (UID: "07431828-0ed3-42a8-9c9c-fdcdb98c854b"). InnerVolumeSpecName "kube-api-access-ll2cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.077008 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "07431828-0ed3-42a8-9c9c-fdcdb98c854b" (UID: "07431828-0ed3-42a8-9c9c-fdcdb98c854b"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.085633 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8f931d71-9f8f-4755-a793-ca326e423199" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.101881 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-687454697b-jdsn4"] Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.106552 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "234cbd06-f8de-4d4f-a510-8dc7e5d9db93" (UID: "234cbd06-f8de-4d4f-a510-8dc7e5d9db93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.112086 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07431828-0ed3-42a8-9c9c-fdcdb98c854b" (UID: "07431828-0ed3-42a8-9c9c-fdcdb98c854b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.112488 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="5afba053-ce3d-4e27-a16f-35dff8f0407c" containerName="galera" containerID="cri-o://6a003b7724fe923f88526d82b0401c0a90a92d8153c23ef79a24ae3f6bebf328" gracePeriod=30 Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.114565 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-687454697b-jdsn4"] Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.124104 4958 scope.go:117] "RemoveContainer" containerID="5b4a59a5f7991c7aec4f2d6b2e9211ada14c55e3e910aa8c4dafe5d00668f2bc" Oct 08 06:56:44 crc kubenswrapper[4958]: E1008 06:56:44.125351 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b4a59a5f7991c7aec4f2d6b2e9211ada14c55e3e910aa8c4dafe5d00668f2bc\": container with ID starting with 5b4a59a5f7991c7aec4f2d6b2e9211ada14c55e3e910aa8c4dafe5d00668f2bc not found: ID does not exist" containerID="5b4a59a5f7991c7aec4f2d6b2e9211ada14c55e3e910aa8c4dafe5d00668f2bc" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.125378 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b4a59a5f7991c7aec4f2d6b2e9211ada14c55e3e910aa8c4dafe5d00668f2bc"} err="failed to get container status \"5b4a59a5f7991c7aec4f2d6b2e9211ada14c55e3e910aa8c4dafe5d00668f2bc\": rpc error: code = NotFound desc = could not find container \"5b4a59a5f7991c7aec4f2d6b2e9211ada14c55e3e910aa8c4dafe5d00668f2bc\": container with ID starting with 5b4a59a5f7991c7aec4f2d6b2e9211ada14c55e3e910aa8c4dafe5d00668f2bc not found: ID does not exist" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.125397 4958 scope.go:117] "RemoveContainer" containerID="d326bcdc417c5ccf5500a143fe2cd2f36595d7ee749c8b67463e12eac441d722" Oct 08 06:56:44 crc 
kubenswrapper[4958]: E1008 06:56:44.125769 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d326bcdc417c5ccf5500a143fe2cd2f36595d7ee749c8b67463e12eac441d722\": container with ID starting with d326bcdc417c5ccf5500a143fe2cd2f36595d7ee749c8b67463e12eac441d722 not found: ID does not exist" containerID="d326bcdc417c5ccf5500a143fe2cd2f36595d7ee749c8b67463e12eac441d722" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.125785 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d326bcdc417c5ccf5500a143fe2cd2f36595d7ee749c8b67463e12eac441d722"} err="failed to get container status \"d326bcdc417c5ccf5500a143fe2cd2f36595d7ee749c8b67463e12eac441d722\": rpc error: code = NotFound desc = could not find container \"d326bcdc417c5ccf5500a143fe2cd2f36595d7ee749c8b67463e12eac441d722\": container with ID starting with d326bcdc417c5ccf5500a143fe2cd2f36595d7ee749c8b67463e12eac441d722 not found: ID does not exist" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.125797 4958 scope.go:117] "RemoveContainer" containerID="ab70053ebb2c4509de5852c1600f0da9043c9b356dde235137ef97c1e51aebf6" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.128899 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cdf478498-ptdth" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.131442 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-config-data" (OuterVolumeSpecName: "config-data") pod "234cbd06-f8de-4d4f-a510-8dc7e5d9db93" (UID: "234cbd06-f8de-4d4f-a510-8dc7e5d9db93"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.141486 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-internal-tls-certs\") pod \"65112aa3-67bb-47a4-bc56-241ce61eff7b\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.141554 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-combined-ca-bundle\") pod \"65112aa3-67bb-47a4-bc56-241ce61eff7b\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.141579 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-config-data\") pod \"65112aa3-67bb-47a4-bc56-241ce61eff7b\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.141666 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-public-tls-certs\") pod \"65112aa3-67bb-47a4-bc56-241ce61eff7b\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.141687 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bt5d\" (UniqueName: \"kubernetes.io/projected/65112aa3-67bb-47a4-bc56-241ce61eff7b-kube-api-access-6bt5d\") pod \"65112aa3-67bb-47a4-bc56-241ce61eff7b\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.141756 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/65112aa3-67bb-47a4-bc56-241ce61eff7b-logs\") pod \"65112aa3-67bb-47a4-bc56-241ce61eff7b\" (UID: \"65112aa3-67bb-47a4-bc56-241ce61eff7b\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.148736 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.148762 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.148770 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.148780 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll2cl\" (UniqueName: \"kubernetes.io/projected/07431828-0ed3-42a8-9c9c-fdcdb98c854b-kube-api-access-ll2cl\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.148789 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07431828-0ed3-42a8-9c9c-fdcdb98c854b-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.148797 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.148805 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07431828-0ed3-42a8-9c9c-fdcdb98c854b-logs\") on node \"crc\" DevicePath \"\"" 
Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.148813 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqct4\" (UniqueName: \"kubernetes.io/projected/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-kube-api-access-zqct4\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.148822 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.148861 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.148870 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.148879 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.148893 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.166559 4958 scope.go:117] "RemoveContainer" containerID="1ad480ebf1447ef4dce8f921ab07bf404169d88c3b6f87c0b44604976474fd46" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.170764 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65112aa3-67bb-47a4-bc56-241ce61eff7b-logs" (OuterVolumeSpecName: "logs") pod 
"65112aa3-67bb-47a4-bc56-241ce61eff7b" (UID: "65112aa3-67bb-47a4-bc56-241ce61eff7b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.196587 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.197154 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "07431828-0ed3-42a8-9c9c-fdcdb98c854b" (UID: "07431828-0ed3-42a8-9c9c-fdcdb98c854b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.204180 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65112aa3-67bb-47a4-bc56-241ce61eff7b-kube-api-access-6bt5d" (OuterVolumeSpecName: "kube-api-access-6bt5d") pod "65112aa3-67bb-47a4-bc56-241ce61eff7b" (UID: "65112aa3-67bb-47a4-bc56-241ce61eff7b"). InnerVolumeSpecName "kube-api-access-6bt5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.215398 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.219103 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65112aa3-67bb-47a4-bc56-241ce61eff7b" (UID: "65112aa3-67bb-47a4-bc56-241ce61eff7b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.219158 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-config-data" (OuterVolumeSpecName: "config-data") pod "65112aa3-67bb-47a4-bc56-241ce61eff7b" (UID: "65112aa3-67bb-47a4-bc56-241ce61eff7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.225397 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "234cbd06-f8de-4d4f-a510-8dc7e5d9db93" (UID: "234cbd06-f8de-4d4f-a510-8dc7e5d9db93"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.227523 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-config-data" (OuterVolumeSpecName: "config-data") pod "07431828-0ed3-42a8-9c9c-fdcdb98c854b" (UID: "07431828-0ed3-42a8-9c9c-fdcdb98c854b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.250121 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94872b33-329c-42ca-9d90-09c6950dfd83-logs\") pod \"94872b33-329c-42ca-9d90-09c6950dfd83\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.250159 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-nova-metadata-tls-certs\") pod \"90fd75a5-0719-4f7b-9103-d76319815535\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.250200 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-config-data\") pod \"94872b33-329c-42ca-9d90-09c6950dfd83\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.252760 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90fd75a5-0719-4f7b-9103-d76319815535-logs\") pod \"90fd75a5-0719-4f7b-9103-d76319815535\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.253741 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfxwx\" (UniqueName: \"kubernetes.io/projected/90fd75a5-0719-4f7b-9103-d76319815535-kube-api-access-mfxwx\") pod \"90fd75a5-0719-4f7b-9103-d76319815535\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.253766 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-internal-tls-certs\") pod \"94872b33-329c-42ca-9d90-09c6950dfd83\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.253818 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-config-data-custom\") pod \"94872b33-329c-42ca-9d90-09c6950dfd83\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.253899 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-public-tls-certs\") pod \"94872b33-329c-42ca-9d90-09c6950dfd83\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.253993 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-combined-ca-bundle\") pod \"90fd75a5-0719-4f7b-9103-d76319815535\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.254025 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxhcn\" (UniqueName: \"kubernetes.io/projected/94872b33-329c-42ca-9d90-09c6950dfd83-kube-api-access-zxhcn\") pod \"94872b33-329c-42ca-9d90-09c6950dfd83\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.254079 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-config-data\") pod \"90fd75a5-0719-4f7b-9103-d76319815535\" (UID: \"90fd75a5-0719-4f7b-9103-d76319815535\") " Oct 08 06:56:44 crc 
kubenswrapper[4958]: I1008 06:56:44.254101 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-combined-ca-bundle\") pod \"94872b33-329c-42ca-9d90-09c6950dfd83\" (UID: \"94872b33-329c-42ca-9d90-09c6950dfd83\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.254736 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.254750 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.254759 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.254769 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.254776 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/234cbd06-f8de-4d4f-a510-8dc7e5d9db93-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.254786 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bt5d\" (UniqueName: \"kubernetes.io/projected/65112aa3-67bb-47a4-bc56-241ce61eff7b-kube-api-access-6bt5d\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.254797 4958 
reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07431828-0ed3-42a8-9c9c-fdcdb98c854b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.254806 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65112aa3-67bb-47a4-bc56-241ce61eff7b-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.254816 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.260213 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94872b33-329c-42ca-9d90-09c6950dfd83-logs" (OuterVolumeSpecName: "logs") pod "94872b33-329c-42ca-9d90-09c6950dfd83" (UID: "94872b33-329c-42ca-9d90-09c6950dfd83"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.260515 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90fd75a5-0719-4f7b-9103-d76319815535-logs" (OuterVolumeSpecName: "logs") pod "90fd75a5-0719-4f7b-9103-d76319815535" (UID: "90fd75a5-0719-4f7b-9103-d76319815535"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.266636 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.277476 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.288717 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.289606 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "65112aa3-67bb-47a4-bc56-241ce61eff7b" (UID: "65112aa3-67bb-47a4-bc56-241ce61eff7b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.291395 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90fd75a5-0719-4f7b-9103-d76319815535-kube-api-access-mfxwx" (OuterVolumeSpecName: "kube-api-access-mfxwx") pod "90fd75a5-0719-4f7b-9103-d76319815535" (UID: "90fd75a5-0719-4f7b-9103-d76319815535"). InnerVolumeSpecName "kube-api-access-mfxwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.297087 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "94872b33-329c-42ca-9d90-09c6950dfd83" (UID: "94872b33-329c-42ca-9d90-09c6950dfd83"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.297801 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94872b33-329c-42ca-9d90-09c6950dfd83-kube-api-access-zxhcn" (OuterVolumeSpecName: "kube-api-access-zxhcn") pod "94872b33-329c-42ca-9d90-09c6950dfd83" (UID: "94872b33-329c-42ca-9d90-09c6950dfd83"). 
InnerVolumeSpecName "kube-api-access-zxhcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.314162 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.314449 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90fd75a5-0719-4f7b-9103-d76319815535" (UID: "90fd75a5-0719-4f7b-9103-d76319815535"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.328367 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94872b33-329c-42ca-9d90-09c6950dfd83" (UID: "94872b33-329c-42ca-9d90-09c6950dfd83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.333207 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-config-data" (OuterVolumeSpecName: "config-data") pod "90fd75a5-0719-4f7b-9103-d76319815535" (UID: "90fd75a5-0719-4f7b-9103-d76319815535"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.334772 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "65112aa3-67bb-47a4-bc56-241ce61eff7b" (UID: "65112aa3-67bb-47a4-bc56-241ce61eff7b"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.357422 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.358407 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.358425 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94872b33-329c-42ca-9d90-09c6950dfd83-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.358440 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90fd75a5-0719-4f7b-9103-d76319815535-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.358451 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.358462 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfxwx\" (UniqueName: \"kubernetes.io/projected/90fd75a5-0719-4f7b-9103-d76319815535-kube-api-access-mfxwx\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.358473 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 
06:56:44.358482 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65112aa3-67bb-47a4-bc56-241ce61eff7b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.358493 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.358503 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxhcn\" (UniqueName: \"kubernetes.io/projected/94872b33-329c-42ca-9d90-09c6950dfd83-kube-api-access-zxhcn\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.377247 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "94872b33-329c-42ca-9d90-09c6950dfd83" (UID: "94872b33-329c-42ca-9d90-09c6950dfd83"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.378760 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "94872b33-329c-42ca-9d90-09c6950dfd83" (UID: "94872b33-329c-42ca-9d90-09c6950dfd83"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.424564 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-config-data" (OuterVolumeSpecName: "config-data") pod "94872b33-329c-42ca-9d90-09c6950dfd83" (UID: "94872b33-329c-42ca-9d90-09c6950dfd83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.445117 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "90fd75a5-0719-4f7b-9103-d76319815535" (UID: "90fd75a5-0719-4f7b-9103-d76319815535"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.473138 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-logs\") pod \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.473219 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-kube-state-metrics-tls-certs\") pod \"4303db42-6842-4d26-bf89-79755d0db57d\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.473278 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-config-data-custom\") pod \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\" (UID: 
\"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.473307 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-config-data-custom\") pod \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.473350 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxpsp\" (UniqueName: \"kubernetes.io/projected/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-kube-api-access-nxpsp\") pod \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.473369 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-config-data\") pod \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.473400 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6xnv\" (UniqueName: \"kubernetes.io/projected/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-kube-api-access-w6xnv\") pod \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.473478 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj2h6\" (UniqueName: \"kubernetes.io/projected/4303db42-6842-4d26-bf89-79755d0db57d-kube-api-access-zj2h6\") pod \"4303db42-6842-4d26-bf89-79755d0db57d\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.473502 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-logs\") pod \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.473524 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl8gs\" (UniqueName: \"kubernetes.io/projected/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-kube-api-access-cl8gs\") pod \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.473587 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-config-data\") pod \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.473837 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-logs" (OuterVolumeSpecName: "logs") pod "6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" (UID: "6df9f64e-4e5a-4dca-87d5-530e1c19a9ac"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: E1008 06:56:44.473907 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.474189 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-config-data\") pod \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.474238 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-config-data-custom\") pod \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.474267 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-kube-state-metrics-tls-config\") pod \"4303db42-6842-4d26-bf89-79755d0db57d\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.474290 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-combined-ca-bundle\") pod \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " Oct 08 06:56:44 crc 
kubenswrapper[4958]: I1008 06:56:44.474359 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-combined-ca-bundle\") pod \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\" (UID: \"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.474414 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-combined-ca-bundle\") pod \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\" (UID: \"7c32a9e6-d21d-422f-914d-c3e9de16a0d5\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.474491 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-logs\") pod \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\" (UID: \"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.474523 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-combined-ca-bundle\") pod \"4303db42-6842-4d26-bf89-79755d0db57d\" (UID: \"4303db42-6842-4d26-bf89-79755d0db57d\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.475141 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.475163 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.475175 
4958 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/90fd75a5-0719-4f7b-9103-d76319815535-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.475188 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94872b33-329c-42ca-9d90-09c6950dfd83-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.475199 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: E1008 06:56:44.475347 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.481701 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" (UID: "7b4974ca-181d-4e2e-b4c4-0c425f86f0ef"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.481812 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" (UID: "6df9f64e-4e5a-4dca-87d5-530e1c19a9ac"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: E1008 06:56:44.482140 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:56:44 crc kubenswrapper[4958]: E1008 06:56:44.482434 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:56:44 crc kubenswrapper[4958]: E1008 06:56:44.482469 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-p4rkp" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovsdb-server" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.483263 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-logs" (OuterVolumeSpecName: "logs") pod "7c32a9e6-d21d-422f-914d-c3e9de16a0d5" (UID: "7c32a9e6-d21d-422f-914d-c3e9de16a0d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: E1008 06:56:44.491013 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.493362 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-kube-api-access-nxpsp" (OuterVolumeSpecName: "kube-api-access-nxpsp") pod "6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" (UID: "6df9f64e-4e5a-4dca-87d5-530e1c19a9ac"). InnerVolumeSpecName "kube-api-access-nxpsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.494090 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-logs" (OuterVolumeSpecName: "logs") pod "7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" (UID: "7b4974ca-181d-4e2e-b4c4-0c425f86f0ef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.496477 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-kube-api-access-cl8gs" (OuterVolumeSpecName: "kube-api-access-cl8gs") pod "7c32a9e6-d21d-422f-914d-c3e9de16a0d5" (UID: "7c32a9e6-d21d-422f-914d-c3e9de16a0d5"). InnerVolumeSpecName "kube-api-access-cl8gs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.496804 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4303db42-6842-4d26-bf89-79755d0db57d-kube-api-access-zj2h6" (OuterVolumeSpecName: "kube-api-access-zj2h6") pod "4303db42-6842-4d26-bf89-79755d0db57d" (UID: "4303db42-6842-4d26-bf89-79755d0db57d"). InnerVolumeSpecName "kube-api-access-zj2h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: E1008 06:56:44.501879 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:56:44 crc kubenswrapper[4958]: E1008 06:56:44.501957 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-p4rkp" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovs-vswitchd" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.508145 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7c32a9e6-d21d-422f-914d-c3e9de16a0d5" (UID: "7c32a9e6-d21d-422f-914d-c3e9de16a0d5"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.508730 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-kube-api-access-w6xnv" (OuterVolumeSpecName: "kube-api-access-w6xnv") pod "7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" (UID: "7b4974ca-181d-4e2e-b4c4-0c425f86f0ef"). InnerVolumeSpecName "kube-api-access-w6xnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.546302 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" (UID: "6df9f64e-4e5a-4dca-87d5-530e1c19a9ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.552006 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" (UID: "7b4974ca-181d-4e2e-b4c4-0c425f86f0ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.552367 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "4303db42-6842-4d26-bf89-79755d0db57d" (UID: "4303db42-6842-4d26-bf89-79755d0db57d"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.560195 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-config-data" (OuterVolumeSpecName: "config-data") pod "7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" (UID: "7b4974ca-181d-4e2e-b4c4-0c425f86f0ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.562565 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4303db42-6842-4d26-bf89-79755d0db57d" (UID: "4303db42-6842-4d26-bf89-79755d0db57d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.565754 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c32a9e6-d21d-422f-914d-c3e9de16a0d5" (UID: "7c32a9e6-d21d-422f-914d-c3e9de16a0d5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.576410 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.576438 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.576448 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxpsp\" (UniqueName: \"kubernetes.io/projected/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-kube-api-access-nxpsp\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.576459 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.576468 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6xnv\" (UniqueName: \"kubernetes.io/projected/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-kube-api-access-w6xnv\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.576476 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj2h6\" (UniqueName: \"kubernetes.io/projected/4303db42-6842-4d26-bf89-79755d0db57d-kube-api-access-zj2h6\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.576485 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 
06:56:44.576495 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl8gs\" (UniqueName: \"kubernetes.io/projected/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-kube-api-access-cl8gs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.576503 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.576511 4958 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.576519 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.576528 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.576536 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.576543 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.576552 4958 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.579874 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-config-data" (OuterVolumeSpecName: "config-data") pod "6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" (UID: "6df9f64e-4e5a-4dca-87d5-530e1c19a9ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.581046 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "4303db42-6842-4d26-bf89-79755d0db57d" (UID: "4303db42-6842-4d26-bf89-79755d0db57d"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.592018 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-config-data" (OuterVolumeSpecName: "config-data") pod "7c32a9e6-d21d-422f-914d-c3e9de16a0d5" (UID: "7c32a9e6-d21d-422f-914d-c3e9de16a0d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.685084 4958 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4303db42-6842-4d26-bf89-79755d0db57d-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.685120 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c32a9e6-d21d-422f-914d-c3e9de16a0d5-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.685130 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.714255 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapic226-account-delete-g5pmv" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.719915 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.832979 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"65112aa3-67bb-47a4-bc56-241ce61eff7b","Type":"ContainerDied","Data":"c32f55c6ea04dd844cd87c1704e08913cb980fd50824fc16a2fe3e958f374b16"} Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.833022 4958 scope.go:117] "RemoveContainer" containerID="f3502b89ff3b2147ee78b29363381741d58de95e5a89ce1f3ff1cff5bce4545a" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.833135 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.837405 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapic226-account-delete-g5pmv" event={"ID":"c32b7167-96c2-4cf2-b330-54562c181940","Type":"ContainerDied","Data":"b388821f7277929e58f425f76d06c1e0ec86c6cd27c0b568206c440e70db5d5e"} Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.837467 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b388821f7277929e58f425f76d06c1e0ec86c6cd27c0b568206c440e70db5d5e" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.837548 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapic226-account-delete-g5pmv" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.849334 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.849337 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c765759f8-jn9mj" event={"ID":"6df9f64e-4e5a-4dca-87d5-530e1c19a9ac","Type":"ContainerDied","Data":"6ec6f885aec1d149f7260ebd55345ceb13bb6824b8f1a6673777f4e84a2f5fa1"} Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.860920 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.861075 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4303db42-6842-4d26-bf89-79755d0db57d","Type":"ContainerDied","Data":"878ba928bfeb9ac924c7cb4ec028ff8adf612df4c52b8bad0a1048afd8a3f0fc"} Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.864140 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cdccb56ff-v4lgm" event={"ID":"7b4974ca-181d-4e2e-b4c4-0c425f86f0ef","Type":"ContainerDied","Data":"f24ff3efb02a956b2b0b80775acde8aebc8914bd462b7c1ac7311ddbb3d6b15f"} Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.864282 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6cdccb56ff-v4lgm" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.864928 4958 scope.go:117] "RemoveContainer" containerID="3847c1dff68faa7fdc2dc75dc40fe444ac629a5d89701750bf8d511d58fa9aba" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.867564 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.870198 4958 generic.go:334] "Generic (PLEG): container finished" podID="312426b0-8fb6-48ad-ba99-79b87cfcac38" containerID="77318adadf422409354e9c4908a8759e6e1b4aa095b2b301bb9919e15d6f3a59" exitCode=0 Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.870311 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.870406 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"312426b0-8fb6-48ad-ba99-79b87cfcac38","Type":"ContainerDied","Data":"77318adadf422409354e9c4908a8759e6e1b4aa095b2b301bb9919e15d6f3a59"} Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.870438 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"312426b0-8fb6-48ad-ba99-79b87cfcac38","Type":"ContainerDied","Data":"245d3bf5624e88af094989424304b1fc981976ce6ea56d3e8ca03473ad6bfea9"} Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.880996 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90fd75a5-0719-4f7b-9103-d76319815535","Type":"ContainerDied","Data":"3a4580bbed7bf57f153774bedb6f6f8a2a408b99e2cfc35c99ee7f6314b6b499"} Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.881113 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.889332 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.889509 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcmnc\" (UniqueName: \"kubernetes.io/projected/c32b7167-96c2-4cf2-b330-54562c181940-kube-api-access-pcmnc\") pod \"c32b7167-96c2-4cf2-b330-54562c181940\" (UID: \"c32b7167-96c2-4cf2-b330-54562c181940\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.889570 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54s4p\" (UniqueName: \"kubernetes.io/projected/312426b0-8fb6-48ad-ba99-79b87cfcac38-kube-api-access-54s4p\") pod \"312426b0-8fb6-48ad-ba99-79b87cfcac38\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.889614 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/312426b0-8fb6-48ad-ba99-79b87cfcac38-kolla-config\") pod \"312426b0-8fb6-48ad-ba99-79b87cfcac38\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.889707 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312426b0-8fb6-48ad-ba99-79b87cfcac38-combined-ca-bundle\") pod \"312426b0-8fb6-48ad-ba99-79b87cfcac38\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.889744 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/312426b0-8fb6-48ad-ba99-79b87cfcac38-config-data\") pod \"312426b0-8fb6-48ad-ba99-79b87cfcac38\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " Oct 
08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.889816 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/312426b0-8fb6-48ad-ba99-79b87cfcac38-memcached-tls-certs\") pod \"312426b0-8fb6-48ad-ba99-79b87cfcac38\" (UID: \"312426b0-8fb6-48ad-ba99-79b87cfcac38\") " Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.890774 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312426b0-8fb6-48ad-ba99-79b87cfcac38-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "312426b0-8fb6-48ad-ba99-79b87cfcac38" (UID: "312426b0-8fb6-48ad-ba99-79b87cfcac38"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.894934 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312426b0-8fb6-48ad-ba99-79b87cfcac38-config-data" (OuterVolumeSpecName: "config-data") pod "312426b0-8fb6-48ad-ba99-79b87cfcac38" (UID: "312426b0-8fb6-48ad-ba99-79b87cfcac38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.898225 4958 scope.go:117] "RemoveContainer" containerID="7226ee00569bafdcf12786ebdd5af9f03b77506f57b10ab6b33eedc12be1ca77" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.901394 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c32b7167-96c2-4cf2-b330-54562c181940-kube-api-access-pcmnc" (OuterVolumeSpecName: "kube-api-access-pcmnc") pod "c32b7167-96c2-4cf2-b330-54562c181940" (UID: "c32b7167-96c2-4cf2-b330-54562c181940"). InnerVolumeSpecName "kube-api-access-pcmnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.904057 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-c765759f8-jn9mj"] Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.912588 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312426b0-8fb6-48ad-ba99-79b87cfcac38-kube-api-access-54s4p" (OuterVolumeSpecName: "kube-api-access-54s4p") pod "312426b0-8fb6-48ad-ba99-79b87cfcac38" (UID: "312426b0-8fb6-48ad-ba99-79b87cfcac38"). InnerVolumeSpecName "kube-api-access-54s4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.912802 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-c765759f8-jn9mj"] Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.912840 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07431828-0ed3-42a8-9c9c-fdcdb98c854b","Type":"ContainerDied","Data":"8f79829d96326f7bee84b7b65553b5f2f299f87cce7df41f27f19b7a751a63c3"} Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.912889 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.922639 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6696f545c5-2j7vj" event={"ID":"7c32a9e6-d21d-422f-914d-c3e9de16a0d5","Type":"ContainerDied","Data":"73cd28951909d407d4f77bb645d10d3b811f295b0644678badce6f10f5e9cf0d"} Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.922715 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6696f545c5-2j7vj" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.935718 4958 generic.go:334] "Generic (PLEG): container finished" podID="90612def-876b-4ae6-88e6-7f3de02515e6" containerID="ba7cba23461cfa60d06e4353ca3b9cfa8431aae9505a7f4bb4c5c971360e0d5e" exitCode=0 Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.935745 4958 generic.go:334] "Generic (PLEG): container finished" podID="90612def-876b-4ae6-88e6-7f3de02515e6" containerID="2cdd00dcc49f27e886ed7f8e467ce10332efd4b1b22f6c39eb2a00f35e73697a" exitCode=0 Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.935783 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90612def-876b-4ae6-88e6-7f3de02515e6","Type":"ContainerDied","Data":"ba7cba23461cfa60d06e4353ca3b9cfa8431aae9505a7f4bb4c5c971360e0d5e"} Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.935807 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90612def-876b-4ae6-88e6-7f3de02515e6","Type":"ContainerDied","Data":"2cdd00dcc49f27e886ed7f8e467ce10332efd4b1b22f6c39eb2a00f35e73697a"} Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.949667 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312426b0-8fb6-48ad-ba99-79b87cfcac38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "312426b0-8fb6-48ad-ba99-79b87cfcac38" (UID: "312426b0-8fb6-48ad-ba99-79b87cfcac38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.950091 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312426b0-8fb6-48ad-ba99-79b87cfcac38-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "312426b0-8fb6-48ad-ba99-79b87cfcac38" (UID: "312426b0-8fb6-48ad-ba99-79b87cfcac38"). 
InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.953866 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.953933 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"234cbd06-f8de-4d4f-a510-8dc7e5d9db93","Type":"ContainerDied","Data":"43338728ca294bc10d507b106937cc8f96e3ca8a8078ebaf05c25dab25560170"} Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.958712 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cdf478498-ptdth" event={"ID":"94872b33-329c-42ca-9d90-09c6950dfd83","Type":"ContainerDied","Data":"687f160eae8e8bb0dfb2b6f75afd851bc934639225e56673ec6bd79295110d10"} Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.958889 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5cdf478498-ptdth" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.991540 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcmnc\" (UniqueName: \"kubernetes.io/projected/c32b7167-96c2-4cf2-b330-54562c181940-kube-api-access-pcmnc\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.991569 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54s4p\" (UniqueName: \"kubernetes.io/projected/312426b0-8fb6-48ad-ba99-79b87cfcac38-kube-api-access-54s4p\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.991579 4958 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/312426b0-8fb6-48ad-ba99-79b87cfcac38-kolla-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.991587 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312426b0-8fb6-48ad-ba99-79b87cfcac38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.991596 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/312426b0-8fb6-48ad-ba99-79b87cfcac38-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:44 crc kubenswrapper[4958]: I1008 06:56:44.991604 4958 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/312426b0-8fb6-48ad-ba99-79b87cfcac38-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.019207 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6cdccb56ff-v4lgm"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.022639 4958 scope.go:117] "RemoveContainer" 
containerID="1350e4ea881e68f6fe98606268360098bdeb78fb0b42909468424428303eff5c" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.025495 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-6cdccb56ff-v4lgm"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.052302 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.056215 4958 scope.go:117] "RemoveContainer" containerID="cc400959116b34b9a987bb5b45ad7f715a7f6d889a0bb90cd853ee49a2f5a81c" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.082304 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.099806 4958 scope.go:117] "RemoveContainer" containerID="2d749d543c294b1a2dd8f01a3b77b14cba82efc73fc6efa7214dec5fb9278949" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.107010 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.119233 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.135167 4958 scope.go:117] "RemoveContainer" containerID="87bccf6b8df0d8973019540e9aeef4bc4a3f9ababf913b986323b73b1f2f4a3b" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.137780 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.149433 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.158090 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.161461 4958 scope.go:117] "RemoveContainer" 
containerID="77318adadf422409354e9c4908a8759e6e1b4aa095b2b301bb9919e15d6f3a59" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.169215 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.183250 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6696f545c5-2j7vj"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.189147 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-6696f545c5-2j7vj"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.193416 4958 scope.go:117] "RemoveContainer" containerID="77318adadf422409354e9c4908a8759e6e1b4aa095b2b301bb9919e15d6f3a59" Oct 08 06:56:45 crc kubenswrapper[4958]: E1008 06:56:45.193714 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77318adadf422409354e9c4908a8759e6e1b4aa095b2b301bb9919e15d6f3a59\": container with ID starting with 77318adadf422409354e9c4908a8759e6e1b4aa095b2b301bb9919e15d6f3a59 not found: ID does not exist" containerID="77318adadf422409354e9c4908a8759e6e1b4aa095b2b301bb9919e15d6f3a59" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.193737 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77318adadf422409354e9c4908a8759e6e1b4aa095b2b301bb9919e15d6f3a59"} err="failed to get container status \"77318adadf422409354e9c4908a8759e6e1b4aa095b2b301bb9919e15d6f3a59\": rpc error: code = NotFound desc = could not find container \"77318adadf422409354e9c4908a8759e6e1b4aa095b2b301bb9919e15d6f3a59\": container with ID starting with 77318adadf422409354e9c4908a8759e6e1b4aa095b2b301bb9919e15d6f3a59 not found: ID does not exist" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.193763 4958 scope.go:117] "RemoveContainer" containerID="411c6d0db58b91660709b05a5f2cc376c27a480550a9418f37b91c7bf4306c89" Oct 
08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.195905 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5cdf478498-ptdth"] Oct 08 06:56:45 crc kubenswrapper[4958]: E1008 06:56:45.196426 4958 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Oct 08 06:56:45 crc kubenswrapper[4958]: E1008 06:56:45.196583 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data podName:8f931d71-9f8f-4755-a793-ca326e423199 nodeName:}" failed. No retries permitted until 2025-10-08 06:56:53.196461788 +0000 UTC m=+1356.326154389 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data") pod "rabbitmq-server-0" (UID: "8f931d71-9f8f-4755-a793-ca326e423199") : configmap "rabbitmq-config-data" not found Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.202263 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5cdf478498-ptdth"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.208174 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapic226-account-delete-g5pmv"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.212361 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapic226-account-delete-g5pmv"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.216648 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.220603 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.236909 4958 scope.go:117] "RemoveContainer" containerID="0416399ef594017585c375dbf306aaebef9247c5c069a91faaad5a921511a654" Oct 08 06:56:45 crc kubenswrapper[4958]: 
I1008 06:56:45.270383 4958 scope.go:117] "RemoveContainer" containerID="5a4d814284dc7005c70da55ad6372b1b9b0faa25b4caef46944a4c4b825edf42" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.296813 4958 scope.go:117] "RemoveContainer" containerID="8880871fe8295b852d41b427ce6abdf7eaf0a061647089a2c0dbf322aafca26b" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.326983 4958 scope.go:117] "RemoveContainer" containerID="b6ff834994cbc43b81674b98260489f9066c39711a5a82c7bd7438c8b37947ac" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.353895 4958 scope.go:117] "RemoveContainer" containerID="9559dab0d3aabca4257ee3c9f1ea0fa80a40d4c8646a55e6458c39e238f7651c" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.379213 4958 scope.go:117] "RemoveContainer" containerID="39c3f98bf3ab5af00e017cc7b7b4cbeb1fef7cb815739f4da85c995009b67708" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.403652 4958 scope.go:117] "RemoveContainer" containerID="d76598e60d3a3646bdbc321f562f55f8c8d51c2d903f6c39ad73774883cc7ec3" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.420847 4958 scope.go:117] "RemoveContainer" containerID="9c1c2c4508d314f052c9515beb0473192e5c3aa71f3ed2c51ea5f440955efe26" Oct 08 06:56:45 crc kubenswrapper[4958]: E1008 06:56:45.429601 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 06:56:45 crc kubenswrapper[4958]: E1008 06:56:45.431093 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 
08 06:56:45 crc kubenswrapper[4958]: E1008 06:56:45.433082 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 06:56:45 crc kubenswrapper[4958]: E1008 06:56:45.433117 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f" containerName="nova-cell0-conductor-conductor" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.454687 4958 scope.go:117] "RemoveContainer" containerID="8706ba2579f385903f00ba5a57a567212e79efde0f18b7f2408885add9cf0427" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.590702 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07431828-0ed3-42a8-9c9c-fdcdb98c854b" path="/var/lib/kubelet/pods/07431828-0ed3-42a8-9c9c-fdcdb98c854b/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.594408 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="234cbd06-f8de-4d4f-a510-8dc7e5d9db93" path="/var/lib/kubelet/pods/234cbd06-f8de-4d4f-a510-8dc7e5d9db93/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.595117 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312426b0-8fb6-48ad-ba99-79b87cfcac38" path="/var/lib/kubelet/pods/312426b0-8fb6-48ad-ba99-79b87cfcac38/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.596060 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3135b952-56ad-4f48-a839-632adc6b8856" path="/var/lib/kubelet/pods/3135b952-56ad-4f48-a839-632adc6b8856/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: 
I1008 06:56:45.601273 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4303db42-6842-4d26-bf89-79755d0db57d" path="/var/lib/kubelet/pods/4303db42-6842-4d26-bf89-79755d0db57d/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.601868 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4afa2864-e8e7-4789-b6d1-e1608724bcff" path="/var/lib/kubelet/pods/4afa2864-e8e7-4789-b6d1-e1608724bcff/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.602936 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65112aa3-67bb-47a4-bc56-241ce61eff7b" path="/var/lib/kubelet/pods/65112aa3-67bb-47a4-bc56-241ce61eff7b/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.603617 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" path="/var/lib/kubelet/pods/6df9f64e-4e5a-4dca-87d5-530e1c19a9ac/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.604206 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e48003-108c-4de3-be7e-81946556e25e" path="/var/lib/kubelet/pods/70e48003-108c-4de3-be7e-81946556e25e/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.605375 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" path="/var/lib/kubelet/pods/7b4974ca-181d-4e2e-b4c4-0c425f86f0ef/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.606113 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c32a9e6-d21d-422f-914d-c3e9de16a0d5" path="/var/lib/kubelet/pods/7c32a9e6-d21d-422f-914d-c3e9de16a0d5/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.607245 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90fd75a5-0719-4f7b-9103-d76319815535" path="/var/lib/kubelet/pods/90fd75a5-0719-4f7b-9103-d76319815535/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: 
I1008 06:56:45.607857 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94872b33-329c-42ca-9d90-09c6950dfd83" path="/var/lib/kubelet/pods/94872b33-329c-42ca-9d90-09c6950dfd83/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.608464 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7107907-afbb-4fe5-86d0-a6b7bd31f0eb" path="/var/lib/kubelet/pods/a7107907-afbb-4fe5-86d0-a6b7bd31f0eb/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.609881 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae41a8dc-cd5b-4b78-a03f-557d76949983" path="/var/lib/kubelet/pods/ae41a8dc-cd5b-4b78-a03f-557d76949983/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.610593 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ac8efb-5e1d-4b4c-beba-7d287a699044" path="/var/lib/kubelet/pods/b2ac8efb-5e1d-4b4c-beba-7d287a699044/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.611910 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c32b7167-96c2-4cf2-b330-54562c181940" path="/var/lib/kubelet/pods/c32b7167-96c2-4cf2-b330-54562c181940/volumes" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.668498 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fe1279cb-5369-4347-9fc9-d598103536a9/ovn-northd/0.log" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.668806 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.707758 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe1279cb-5369-4347-9fc9-d598103536a9-scripts\") pod \"fe1279cb-5369-4347-9fc9-d598103536a9\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.708058 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-ovn-northd-tls-certs\") pod \"fe1279cb-5369-4347-9fc9-d598103536a9\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.708425 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1279cb-5369-4347-9fc9-d598103536a9-config\") pod \"fe1279cb-5369-4347-9fc9-d598103536a9\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.708493 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-combined-ca-bundle\") pod \"fe1279cb-5369-4347-9fc9-d598103536a9\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.708562 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh92x\" (UniqueName: \"kubernetes.io/projected/fe1279cb-5369-4347-9fc9-d598103536a9-kube-api-access-zh92x\") pod \"fe1279cb-5369-4347-9fc9-d598103536a9\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.708648 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-metrics-certs-tls-certs\") pod \"fe1279cb-5369-4347-9fc9-d598103536a9\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.708742 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fe1279cb-5369-4347-9fc9-d598103536a9-ovn-rundir\") pod \"fe1279cb-5369-4347-9fc9-d598103536a9\" (UID: \"fe1279cb-5369-4347-9fc9-d598103536a9\") " Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.709264 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe1279cb-5369-4347-9fc9-d598103536a9-scripts" (OuterVolumeSpecName: "scripts") pod "fe1279cb-5369-4347-9fc9-d598103536a9" (UID: "fe1279cb-5369-4347-9fc9-d598103536a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.709492 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe1279cb-5369-4347-9fc9-d598103536a9-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "fe1279cb-5369-4347-9fc9-d598103536a9" (UID: "fe1279cb-5369-4347-9fc9-d598103536a9"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.710427 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe1279cb-5369-4347-9fc9-d598103536a9-config" (OuterVolumeSpecName: "config") pod "fe1279cb-5369-4347-9fc9-d598103536a9" (UID: "fe1279cb-5369-4347-9fc9-d598103536a9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.723537 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe1279cb-5369-4347-9fc9-d598103536a9-kube-api-access-zh92x" (OuterVolumeSpecName: "kube-api-access-zh92x") pod "fe1279cb-5369-4347-9fc9-d598103536a9" (UID: "fe1279cb-5369-4347-9fc9-d598103536a9"). InnerVolumeSpecName "kube-api-access-zh92x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.749390 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe1279cb-5369-4347-9fc9-d598103536a9" (UID: "fe1279cb-5369-4347-9fc9-d598103536a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.782476 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "fe1279cb-5369-4347-9fc9-d598103536a9" (UID: "fe1279cb-5369-4347-9fc9-d598103536a9"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.797094 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "fe1279cb-5369-4347-9fc9-d598103536a9" (UID: "fe1279cb-5369-4347-9fc9-d598103536a9"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.810389 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.810419 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1279cb-5369-4347-9fc9-d598103536a9-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.810432 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.810444 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh92x\" (UniqueName: \"kubernetes.io/projected/fe1279cb-5369-4347-9fc9-d598103536a9-kube-api-access-zh92x\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.810457 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1279cb-5369-4347-9fc9-d598103536a9-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.810468 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fe1279cb-5369-4347-9fc9-d598103536a9-ovn-rundir\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.810479 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe1279cb-5369-4347-9fc9-d598103536a9-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.970028 
4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fe1279cb-5369-4347-9fc9-d598103536a9/ovn-northd/0.log" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.970074 4958 generic.go:334] "Generic (PLEG): container finished" podID="fe1279cb-5369-4347-9fc9-d598103536a9" containerID="1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b" exitCode=139 Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.970123 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fe1279cb-5369-4347-9fc9-d598103536a9","Type":"ContainerDied","Data":"1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b"} Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.970151 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fe1279cb-5369-4347-9fc9-d598103536a9","Type":"ContainerDied","Data":"40764b3ce1a4474cb80450624f247364d57e0d77b952d3fae60f44503319b525"} Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.970169 4958 scope.go:117] "RemoveContainer" containerID="758f90af9edd2932102a3b7f9d7f4706f38591020b8ef2fec8d63a600cbdb538" Oct 08 06:56:45 crc kubenswrapper[4958]: I1008 06:56:45.970261 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.041165 4958 scope.go:117] "RemoveContainer" containerID="1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.056527 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.062889 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.106811 4958 scope.go:117] "RemoveContainer" containerID="758f90af9edd2932102a3b7f9d7f4706f38591020b8ef2fec8d63a600cbdb538" Oct 08 06:56:46 crc kubenswrapper[4958]: E1008 06:56:46.107531 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"758f90af9edd2932102a3b7f9d7f4706f38591020b8ef2fec8d63a600cbdb538\": container with ID starting with 758f90af9edd2932102a3b7f9d7f4706f38591020b8ef2fec8d63a600cbdb538 not found: ID does not exist" containerID="758f90af9edd2932102a3b7f9d7f4706f38591020b8ef2fec8d63a600cbdb538" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.107571 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758f90af9edd2932102a3b7f9d7f4706f38591020b8ef2fec8d63a600cbdb538"} err="failed to get container status \"758f90af9edd2932102a3b7f9d7f4706f38591020b8ef2fec8d63a600cbdb538\": rpc error: code = NotFound desc = could not find container \"758f90af9edd2932102a3b7f9d7f4706f38591020b8ef2fec8d63a600cbdb538\": container with ID starting with 758f90af9edd2932102a3b7f9d7f4706f38591020b8ef2fec8d63a600cbdb538 not found: ID does not exist" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.107598 4958 scope.go:117] "RemoveContainer" containerID="1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b" Oct 08 06:56:46 crc kubenswrapper[4958]: E1008 
06:56:46.108027 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b\": container with ID starting with 1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b not found: ID does not exist" containerID="1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.108099 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b"} err="failed to get container status \"1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b\": rpc error: code = NotFound desc = could not find container \"1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b\": container with ID starting with 1fc103c4f9467bee2f31e560f3fbf23b74016f8ce066004f12d85b1eefb8f82b not found: ID does not exist" Oct 08 06:56:46 crc kubenswrapper[4958]: E1008 06:56:46.317581 4958 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Oct 08 06:56:46 crc kubenswrapper[4958]: E1008 06:56:46.317837 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data podName:442b1534-27bc-4d6d-be46-1ea5689c290f nodeName:}" failed. No retries permitted until 2025-10-08 06:56:54.317824221 +0000 UTC m=+1357.447516822 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data") pod "rabbitmq-cell1-server-0" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f") : configmap "rabbitmq-cell1-config-data" not found Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.373174 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.520831 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-operator-scripts\") pod \"5afba053-ce3d-4e27-a16f-35dff8f0407c\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.520923 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5afba053-ce3d-4e27-a16f-35dff8f0407c-config-data-generated\") pod \"5afba053-ce3d-4e27-a16f-35dff8f0407c\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.520967 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-galera-tls-certs\") pod \"5afba053-ce3d-4e27-a16f-35dff8f0407c\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.521007 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7sfv\" (UniqueName: \"kubernetes.io/projected/5afba053-ce3d-4e27-a16f-35dff8f0407c-kube-api-access-j7sfv\") pod \"5afba053-ce3d-4e27-a16f-35dff8f0407c\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.521040 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-kolla-config\") pod \"5afba053-ce3d-4e27-a16f-35dff8f0407c\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.521064 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"5afba053-ce3d-4e27-a16f-35dff8f0407c\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.521085 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-combined-ca-bundle\") pod \"5afba053-ce3d-4e27-a16f-35dff8f0407c\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.521100 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-config-data-default\") pod \"5afba053-ce3d-4e27-a16f-35dff8f0407c\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.521151 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-secrets\") pod \"5afba053-ce3d-4e27-a16f-35dff8f0407c\" (UID: \"5afba053-ce3d-4e27-a16f-35dff8f0407c\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.521378 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5afba053-ce3d-4e27-a16f-35dff8f0407c-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "5afba053-ce3d-4e27-a16f-35dff8f0407c" (UID: "5afba053-ce3d-4e27-a16f-35dff8f0407c"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.521448 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5afba053-ce3d-4e27-a16f-35dff8f0407c" (UID: "5afba053-ce3d-4e27-a16f-35dff8f0407c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.522237 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "5afba053-ce3d-4e27-a16f-35dff8f0407c" (UID: "5afba053-ce3d-4e27-a16f-35dff8f0407c"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.523882 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "5afba053-ce3d-4e27-a16f-35dff8f0407c" (UID: "5afba053-ce3d-4e27-a16f-35dff8f0407c"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.525165 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-secrets" (OuterVolumeSpecName: "secrets") pod "5afba053-ce3d-4e27-a16f-35dff8f0407c" (UID: "5afba053-ce3d-4e27-a16f-35dff8f0407c"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.525238 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5afba053-ce3d-4e27-a16f-35dff8f0407c-kube-api-access-j7sfv" (OuterVolumeSpecName: "kube-api-access-j7sfv") pod "5afba053-ce3d-4e27-a16f-35dff8f0407c" (UID: "5afba053-ce3d-4e27-a16f-35dff8f0407c"). InnerVolumeSpecName "kube-api-access-j7sfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.533120 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "5afba053-ce3d-4e27-a16f-35dff8f0407c" (UID: "5afba053-ce3d-4e27-a16f-35dff8f0407c"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.547453 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5afba053-ce3d-4e27-a16f-35dff8f0407c" (UID: "5afba053-ce3d-4e27-a16f-35dff8f0407c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.560756 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "5afba053-ce3d-4e27-a16f-35dff8f0407c" (UID: "5afba053-ce3d-4e27-a16f-35dff8f0407c"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.623561 4958 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-kolla-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.623611 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.623624 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.623637 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-config-data-default\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.623650 4958 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-secrets\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.623679 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5afba053-ce3d-4e27-a16f-35dff8f0407c-operator-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.623699 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5afba053-ce3d-4e27-a16f-35dff8f0407c-config-data-generated\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.623711 4958 
reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5afba053-ce3d-4e27-a16f-35dff8f0407c-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.623721 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7sfv\" (UniqueName: \"kubernetes.io/projected/5afba053-ce3d-4e27-a16f-35dff8f0407c-kube-api-access-j7sfv\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.639445 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.647731 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.724717 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data\") pod \"8f931d71-9f8f-4755-a793-ca326e423199\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.725088 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-tls\") pod \"8f931d71-9f8f-4755-a793-ca326e423199\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.725187 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlr5g\" (UniqueName: \"kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-kube-api-access-xlr5g\") pod \"8f931d71-9f8f-4755-a793-ca326e423199\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " Oct 08 06:56:46 crc kubenswrapper[4958]: 
I1008 06:56:46.725300 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-erlang-cookie\") pod \"8f931d71-9f8f-4755-a793-ca326e423199\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.725391 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-plugins-conf\") pod \"8f931d71-9f8f-4755-a793-ca326e423199\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.725491 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"8f931d71-9f8f-4755-a793-ca326e423199\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.725586 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8f931d71-9f8f-4755-a793-ca326e423199-erlang-cookie-secret\") pod \"8f931d71-9f8f-4755-a793-ca326e423199\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.725673 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-confd\") pod \"8f931d71-9f8f-4755-a793-ca326e423199\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.725760 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-server-conf\") pod 
\"8f931d71-9f8f-4755-a793-ca326e423199\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.725868 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8f931d71-9f8f-4755-a793-ca326e423199-pod-info\") pod \"8f931d71-9f8f-4755-a793-ca326e423199\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.725983 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-plugins\") pod \"8f931d71-9f8f-4755-a793-ca326e423199\" (UID: \"8f931d71-9f8f-4755-a793-ca326e423199\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.726336 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.726885 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8f931d71-9f8f-4755-a793-ca326e423199" (UID: "8f931d71-9f8f-4755-a793-ca326e423199"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.730278 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8f931d71-9f8f-4755-a793-ca326e423199" (UID: "8f931d71-9f8f-4755-a793-ca326e423199"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.730743 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8f931d71-9f8f-4755-a793-ca326e423199" (UID: "8f931d71-9f8f-4755-a793-ca326e423199"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.731582 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8f931d71-9f8f-4755-a793-ca326e423199" (UID: "8f931d71-9f8f-4755-a793-ca326e423199"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.732364 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-kube-api-access-xlr5g" (OuterVolumeSpecName: "kube-api-access-xlr5g") pod "8f931d71-9f8f-4755-a793-ca326e423199" (UID: "8f931d71-9f8f-4755-a793-ca326e423199"). InnerVolumeSpecName "kube-api-access-xlr5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.733871 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f931d71-9f8f-4755-a793-ca326e423199-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8f931d71-9f8f-4755-a793-ca326e423199" (UID: "8f931d71-9f8f-4755-a793-ca326e423199"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.734208 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8f931d71-9f8f-4755-a793-ca326e423199-pod-info" (OuterVolumeSpecName: "pod-info") pod "8f931d71-9f8f-4755-a793-ca326e423199" (UID: "8f931d71-9f8f-4755-a793-ca326e423199"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.748143 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "8f931d71-9f8f-4755-a793-ca326e423199" (UID: "8f931d71-9f8f-4755-a793-ca326e423199"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.748515 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data" (OuterVolumeSpecName: "config-data") pod "8f931d71-9f8f-4755-a793-ca326e423199" (UID: "8f931d71-9f8f-4755-a793-ca326e423199"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.781746 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-server-conf" (OuterVolumeSpecName: "server-conf") pod "8f931d71-9f8f-4755-a793-ca326e423199" (UID: "8f931d71-9f8f-4755-a793-ca326e423199"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.821633 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8f931d71-9f8f-4755-a793-ca326e423199" (UID: "8f931d71-9f8f-4755-a793-ca326e423199"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.827452 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.827476 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.827490 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlr5g\" (UniqueName: \"kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-kube-api-access-xlr5g\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.827502 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.827512 4958 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.827548 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.827585 4958 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8f931d71-9f8f-4755-a793-ca326e423199-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.827596 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.827607 4958 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8f931d71-9f8f-4755-a793-ca326e423199-server-conf\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.827618 4958 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8f931d71-9f8f-4755-a793-ca326e423199-pod-info\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.827628 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8f931d71-9f8f-4755-a793-ca326e423199-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.839501 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.844886 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.928801 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-erlang-cookie\") pod \"442b1534-27bc-4d6d-be46-1ea5689c290f\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.928849 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-server-conf\") pod \"442b1534-27bc-4d6d-be46-1ea5689c290f\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.928894 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8jlm\" (UniqueName: \"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-kube-api-access-m8jlm\") pod \"442b1534-27bc-4d6d-be46-1ea5689c290f\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.928922 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-plugins\") pod \"442b1534-27bc-4d6d-be46-1ea5689c290f\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.928967 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data\") pod 
\"442b1534-27bc-4d6d-be46-1ea5689c290f\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.929001 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/442b1534-27bc-4d6d-be46-1ea5689c290f-erlang-cookie-secret\") pod \"442b1534-27bc-4d6d-be46-1ea5689c290f\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.929022 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"442b1534-27bc-4d6d-be46-1ea5689c290f\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.929047 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-plugins-conf\") pod \"442b1534-27bc-4d6d-be46-1ea5689c290f\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.929069 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-tls\") pod \"442b1534-27bc-4d6d-be46-1ea5689c290f\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.929096 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/442b1534-27bc-4d6d-be46-1ea5689c290f-pod-info\") pod \"442b1534-27bc-4d6d-be46-1ea5689c290f\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.929163 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-confd\") pod \"442b1534-27bc-4d6d-be46-1ea5689c290f\" (UID: \"442b1534-27bc-4d6d-be46-1ea5689c290f\") " Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.929401 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.930353 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "442b1534-27bc-4d6d-be46-1ea5689c290f" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.930565 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "442b1534-27bc-4d6d-be46-1ea5689c290f" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.931060 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "442b1534-27bc-4d6d-be46-1ea5689c290f" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.933557 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/442b1534-27bc-4d6d-be46-1ea5689c290f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "442b1534-27bc-4d6d-be46-1ea5689c290f" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.933791 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "442b1534-27bc-4d6d-be46-1ea5689c290f" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.934221 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-kube-api-access-m8jlm" (OuterVolumeSpecName: "kube-api-access-m8jlm") pod "442b1534-27bc-4d6d-be46-1ea5689c290f" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f"). InnerVolumeSpecName "kube-api-access-m8jlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.934267 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "442b1534-27bc-4d6d-be46-1ea5689c290f" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.936296 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/442b1534-27bc-4d6d-be46-1ea5689c290f-pod-info" (OuterVolumeSpecName: "pod-info") pod "442b1534-27bc-4d6d-be46-1ea5689c290f" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.961974 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data" (OuterVolumeSpecName: "config-data") pod "442b1534-27bc-4d6d-be46-1ea5689c290f" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.992443 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-server-conf" (OuterVolumeSpecName: "server-conf") pod "442b1534-27bc-4d6d-be46-1ea5689c290f" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.998110 4958 generic.go:334] "Generic (PLEG): container finished" podID="5afba053-ce3d-4e27-a16f-35dff8f0407c" containerID="6a003b7724fe923f88526d82b0401c0a90a92d8153c23ef79a24ae3f6bebf328" exitCode=0 Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.998209 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.998196 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5afba053-ce3d-4e27-a16f-35dff8f0407c","Type":"ContainerDied","Data":"6a003b7724fe923f88526d82b0401c0a90a92d8153c23ef79a24ae3f6bebf328"} Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.998873 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5afba053-ce3d-4e27-a16f-35dff8f0407c","Type":"ContainerDied","Data":"6710c933264129bf15adb818961d3e206b937d73de08c6f9f1b42e47593b73bc"} Oct 08 06:56:46 crc kubenswrapper[4958]: I1008 06:56:46.998894 4958 scope.go:117] "RemoveContainer" containerID="6a003b7724fe923f88526d82b0401c0a90a92d8153c23ef79a24ae3f6bebf328" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.005437 4958 generic.go:334] "Generic (PLEG): container finished" podID="8f931d71-9f8f-4755-a793-ca326e423199" containerID="1d573b950369ab786a385be2c41152d03eccc8f00bd4cb94292246558fa2f4cf" exitCode=0 Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.005627 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f931d71-9f8f-4755-a793-ca326e423199","Type":"ContainerDied","Data":"1d573b950369ab786a385be2c41152d03eccc8f00bd4cb94292246558fa2f4cf"} Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.005806 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f931d71-9f8f-4755-a793-ca326e423199","Type":"ContainerDied","Data":"639c33de2cdb91ba5be69e928dd48a70c371510310babcd2bf0a7881554fc0c0"} Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.005683 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.012913 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.013352 4958 generic.go:334] "Generic (PLEG): container finished" podID="442b1534-27bc-4d6d-be46-1ea5689c290f" containerID="a151cd70ef2b53f0e4ee05050cec1d3d170fdbc3435f7271a03d4f7f74173328" exitCode=0 Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.013745 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"442b1534-27bc-4d6d-be46-1ea5689c290f","Type":"ContainerDied","Data":"a151cd70ef2b53f0e4ee05050cec1d3d170fdbc3435f7271a03d4f7f74173328"} Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.014083 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"442b1534-27bc-4d6d-be46-1ea5689c290f","Type":"ContainerDied","Data":"288a82c7843344062ba7b3663ffaa6d5a1cd272a549c93d1bcd79ec4cf839b5c"} Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.030285 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.030315 4958 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-server-conf\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.030325 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8jlm\" (UniqueName: \"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-kube-api-access-m8jlm\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.030334 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-config-data\") on node 
\"crc\" DevicePath \"\"" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.030344 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.030366 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.030375 4958 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/442b1534-27bc-4d6d-be46-1ea5689c290f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.030384 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.030403 4958 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/442b1534-27bc-4d6d-be46-1ea5689c290f-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.030411 4958 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/442b1534-27bc-4d6d-be46-1ea5689c290f-pod-info\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.042468 4958 scope.go:117] "RemoveContainer" containerID="1c41440da3e30c82cb23899d2723316096293deeac5e32e02745e049d2233fed" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.080715 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "442b1534-27bc-4d6d-be46-1ea5689c290f" (UID: "442b1534-27bc-4d6d-be46-1ea5689c290f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.087252 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.092854 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.094773 4958 scope.go:117] "RemoveContainer" containerID="6a003b7724fe923f88526d82b0401c0a90a92d8153c23ef79a24ae3f6bebf328" Oct 08 06:56:47 crc kubenswrapper[4958]: E1008 06:56:47.095570 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a003b7724fe923f88526d82b0401c0a90a92d8153c23ef79a24ae3f6bebf328\": container with ID starting with 6a003b7724fe923f88526d82b0401c0a90a92d8153c23ef79a24ae3f6bebf328 not found: ID does not exist" containerID="6a003b7724fe923f88526d82b0401c0a90a92d8153c23ef79a24ae3f6bebf328" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.095607 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a003b7724fe923f88526d82b0401c0a90a92d8153c23ef79a24ae3f6bebf328"} err="failed to get container status \"6a003b7724fe923f88526d82b0401c0a90a92d8153c23ef79a24ae3f6bebf328\": rpc error: code = NotFound desc = could not find container \"6a003b7724fe923f88526d82b0401c0a90a92d8153c23ef79a24ae3f6bebf328\": container with ID starting with 6a003b7724fe923f88526d82b0401c0a90a92d8153c23ef79a24ae3f6bebf328 not found: ID does not exist" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.095637 4958 
scope.go:117] "RemoveContainer" containerID="1c41440da3e30c82cb23899d2723316096293deeac5e32e02745e049d2233fed" Oct 08 06:56:47 crc kubenswrapper[4958]: E1008 06:56:47.111013 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c41440da3e30c82cb23899d2723316096293deeac5e32e02745e049d2233fed\": container with ID starting with 1c41440da3e30c82cb23899d2723316096293deeac5e32e02745e049d2233fed not found: ID does not exist" containerID="1c41440da3e30c82cb23899d2723316096293deeac5e32e02745e049d2233fed" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.111070 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c41440da3e30c82cb23899d2723316096293deeac5e32e02745e049d2233fed"} err="failed to get container status \"1c41440da3e30c82cb23899d2723316096293deeac5e32e02745e049d2233fed\": rpc error: code = NotFound desc = could not find container \"1c41440da3e30c82cb23899d2723316096293deeac5e32e02745e049d2233fed\": container with ID starting with 1c41440da3e30c82cb23899d2723316096293deeac5e32e02745e049d2233fed not found: ID does not exist" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.111098 4958 scope.go:117] "RemoveContainer" containerID="1d573b950369ab786a385be2c41152d03eccc8f00bd4cb94292246558fa2f4cf" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.113823 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.121203 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.126499 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.133109 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.133145 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/442b1534-27bc-4d6d-be46-1ea5689c290f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.146770 4958 scope.go:117] "RemoveContainer" containerID="0e1fc1f9cf7a4831ad3481c4874067196938f233b081693bc7630a8e65c2a45c" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.176155 4958 scope.go:117] "RemoveContainer" containerID="1d573b950369ab786a385be2c41152d03eccc8f00bd4cb94292246558fa2f4cf" Oct 08 06:56:47 crc kubenswrapper[4958]: E1008 06:56:47.180452 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d573b950369ab786a385be2c41152d03eccc8f00bd4cb94292246558fa2f4cf\": container with ID starting with 1d573b950369ab786a385be2c41152d03eccc8f00bd4cb94292246558fa2f4cf not found: ID does not exist" containerID="1d573b950369ab786a385be2c41152d03eccc8f00bd4cb94292246558fa2f4cf" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.180501 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d573b950369ab786a385be2c41152d03eccc8f00bd4cb94292246558fa2f4cf"} err="failed to get container status \"1d573b950369ab786a385be2c41152d03eccc8f00bd4cb94292246558fa2f4cf\": rpc error: code = NotFound desc = could not find container \"1d573b950369ab786a385be2c41152d03eccc8f00bd4cb94292246558fa2f4cf\": container with ID starting with 1d573b950369ab786a385be2c41152d03eccc8f00bd4cb94292246558fa2f4cf not found: ID does not exist" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.180530 4958 scope.go:117] "RemoveContainer" containerID="0e1fc1f9cf7a4831ad3481c4874067196938f233b081693bc7630a8e65c2a45c" Oct 08 06:56:47 crc kubenswrapper[4958]: E1008 
06:56:47.181965 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e1fc1f9cf7a4831ad3481c4874067196938f233b081693bc7630a8e65c2a45c\": container with ID starting with 0e1fc1f9cf7a4831ad3481c4874067196938f233b081693bc7630a8e65c2a45c not found: ID does not exist" containerID="0e1fc1f9cf7a4831ad3481c4874067196938f233b081693bc7630a8e65c2a45c" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.182039 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e1fc1f9cf7a4831ad3481c4874067196938f233b081693bc7630a8e65c2a45c"} err="failed to get container status \"0e1fc1f9cf7a4831ad3481c4874067196938f233b081693bc7630a8e65c2a45c\": rpc error: code = NotFound desc = could not find container \"0e1fc1f9cf7a4831ad3481c4874067196938f233b081693bc7630a8e65c2a45c\": container with ID starting with 0e1fc1f9cf7a4831ad3481c4874067196938f233b081693bc7630a8e65c2a45c not found: ID does not exist" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.182084 4958 scope.go:117] "RemoveContainer" containerID="a151cd70ef2b53f0e4ee05050cec1d3d170fdbc3435f7271a03d4f7f74173328" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.204294 4958 scope.go:117] "RemoveContainer" containerID="5d16e6aee96410a47a392e46dc80e5832f3e7a1cdb911e888ca8bd8bfb542971" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.225537 4958 scope.go:117] "RemoveContainer" containerID="a151cd70ef2b53f0e4ee05050cec1d3d170fdbc3435f7271a03d4f7f74173328" Oct 08 06:56:47 crc kubenswrapper[4958]: E1008 06:56:47.226424 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a151cd70ef2b53f0e4ee05050cec1d3d170fdbc3435f7271a03d4f7f74173328\": container with ID starting with a151cd70ef2b53f0e4ee05050cec1d3d170fdbc3435f7271a03d4f7f74173328 not found: ID does not exist" 
containerID="a151cd70ef2b53f0e4ee05050cec1d3d170fdbc3435f7271a03d4f7f74173328" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.226527 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a151cd70ef2b53f0e4ee05050cec1d3d170fdbc3435f7271a03d4f7f74173328"} err="failed to get container status \"a151cd70ef2b53f0e4ee05050cec1d3d170fdbc3435f7271a03d4f7f74173328\": rpc error: code = NotFound desc = could not find container \"a151cd70ef2b53f0e4ee05050cec1d3d170fdbc3435f7271a03d4f7f74173328\": container with ID starting with a151cd70ef2b53f0e4ee05050cec1d3d170fdbc3435f7271a03d4f7f74173328 not found: ID does not exist" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.226622 4958 scope.go:117] "RemoveContainer" containerID="5d16e6aee96410a47a392e46dc80e5832f3e7a1cdb911e888ca8bd8bfb542971" Oct 08 06:56:47 crc kubenswrapper[4958]: E1008 06:56:47.227123 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d16e6aee96410a47a392e46dc80e5832f3e7a1cdb911e888ca8bd8bfb542971\": container with ID starting with 5d16e6aee96410a47a392e46dc80e5832f3e7a1cdb911e888ca8bd8bfb542971 not found: ID does not exist" containerID="5d16e6aee96410a47a392e46dc80e5832f3e7a1cdb911e888ca8bd8bfb542971" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.227166 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d16e6aee96410a47a392e46dc80e5832f3e7a1cdb911e888ca8bd8bfb542971"} err="failed to get container status \"5d16e6aee96410a47a392e46dc80e5832f3e7a1cdb911e888ca8bd8bfb542971\": rpc error: code = NotFound desc = could not find container \"5d16e6aee96410a47a392e46dc80e5832f3e7a1cdb911e888ca8bd8bfb542971\": container with ID starting with 5d16e6aee96410a47a392e46dc80e5832f3e7a1cdb911e888ca8bd8bfb542971 not found: ID does not exist" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.353679 4958 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.365445 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.587511 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="442b1534-27bc-4d6d-be46-1ea5689c290f" path="/var/lib/kubelet/pods/442b1534-27bc-4d6d-be46-1ea5689c290f/volumes" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.588643 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5afba053-ce3d-4e27-a16f-35dff8f0407c" path="/var/lib/kubelet/pods/5afba053-ce3d-4e27-a16f-35dff8f0407c/volumes" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.590452 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f931d71-9f8f-4755-a793-ca326e423199" path="/var/lib/kubelet/pods/8f931d71-9f8f-4755-a793-ca326e423199/volumes" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.591678 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe1279cb-5369-4347-9fc9-d598103536a9" path="/var/lib/kubelet/pods/fe1279cb-5369-4347-9fc9-d598103536a9/volumes" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.853780 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:56:47 crc kubenswrapper[4958]: I1008 06:56:47.862127 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.016414 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.022150 4958 generic.go:334] "Generic (PLEG): container finished" podID="46f633ea-236e-46e7-a780-a9912bbd2c91" containerID="86278530dcbe6b12af4202029dcd2705840a1d5866deacfbe958e071ac723024" exitCode=0 Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.022199 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56b6f8956c-65c6t" event={"ID":"46f633ea-236e-46e7-a780-a9912bbd2c91","Type":"ContainerDied","Data":"86278530dcbe6b12af4202029dcd2705840a1d5866deacfbe958e071ac723024"} Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.022221 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56b6f8956c-65c6t" event={"ID":"46f633ea-236e-46e7-a780-a9912bbd2c91","Type":"ContainerDied","Data":"e4268453f23c6216ebb3f49f10b87ff12038deb906ff2bdc77d5ea5e919c1934"} Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.022238 4958 scope.go:117] "RemoveContainer" containerID="86278530dcbe6b12af4202029dcd2705840a1d5866deacfbe958e071ac723024" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.022317 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-56b6f8956c-65c6t" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.026529 4958 generic.go:334] "Generic (PLEG): container finished" podID="f196852b-bfdf-43dd-9579-3ecd8601e7bf" containerID="6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e" exitCode=0 Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.026569 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f196852b-bfdf-43dd-9579-3ecd8601e7bf","Type":"ContainerDied","Data":"6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e"} Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.026584 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f196852b-bfdf-43dd-9579-3ecd8601e7bf","Type":"ContainerDied","Data":"62cefbec2a6d0b04edcde390861946520e12b2c889703b44789b730f2e93054d"} Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.026627 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.028489 4958 generic.go:334] "Generic (PLEG): container finished" podID="e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f" containerID="bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0" exitCode=0 Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.028510 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f","Type":"ContainerDied","Data":"bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0"} Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.028526 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f","Type":"ContainerDied","Data":"82a5d1d59b692c43b87203d0cc9e50be05a7fc2c2d4585fbf3eada0e481252c7"} Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.028555 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.044509 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-fernet-keys\") pod \"46f633ea-236e-46e7-a780-a9912bbd2c91\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.044567 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-combined-ca-bundle\") pod \"46f633ea-236e-46e7-a780-a9912bbd2c91\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.044600 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74gmw\" (UniqueName: \"kubernetes.io/projected/46f633ea-236e-46e7-a780-a9912bbd2c91-kube-api-access-74gmw\") pod \"46f633ea-236e-46e7-a780-a9912bbd2c91\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.044656 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpgzl\" (UniqueName: \"kubernetes.io/projected/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-kube-api-access-wpgzl\") pod \"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f\" (UID: \"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.044680 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-scripts\") pod \"46f633ea-236e-46e7-a780-a9912bbd2c91\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.044717 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-config-data\") pod \"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f\" (UID: \"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.044744 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-public-tls-certs\") pod \"46f633ea-236e-46e7-a780-a9912bbd2c91\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.044791 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-combined-ca-bundle\") pod \"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f\" (UID: \"e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.044839 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-credential-keys\") pod \"46f633ea-236e-46e7-a780-a9912bbd2c91\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.044858 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-internal-tls-certs\") pod \"46f633ea-236e-46e7-a780-a9912bbd2c91\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.044875 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-config-data\") pod \"46f633ea-236e-46e7-a780-a9912bbd2c91\" (UID: \"46f633ea-236e-46e7-a780-a9912bbd2c91\") " Oct 08 06:56:48 crc 
kubenswrapper[4958]: I1008 06:56:48.048297 4958 scope.go:117] "RemoveContainer" containerID="86278530dcbe6b12af4202029dcd2705840a1d5866deacfbe958e071ac723024" Oct 08 06:56:48 crc kubenswrapper[4958]: E1008 06:56:48.049214 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86278530dcbe6b12af4202029dcd2705840a1d5866deacfbe958e071ac723024\": container with ID starting with 86278530dcbe6b12af4202029dcd2705840a1d5866deacfbe958e071ac723024 not found: ID does not exist" containerID="86278530dcbe6b12af4202029dcd2705840a1d5866deacfbe958e071ac723024" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.049272 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86278530dcbe6b12af4202029dcd2705840a1d5866deacfbe958e071ac723024"} err="failed to get container status \"86278530dcbe6b12af4202029dcd2705840a1d5866deacfbe958e071ac723024\": rpc error: code = NotFound desc = could not find container \"86278530dcbe6b12af4202029dcd2705840a1d5866deacfbe958e071ac723024\": container with ID starting with 86278530dcbe6b12af4202029dcd2705840a1d5866deacfbe958e071ac723024 not found: ID does not exist" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.049304 4958 scope.go:117] "RemoveContainer" containerID="6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.049569 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "46f633ea-236e-46e7-a780-a9912bbd2c91" (UID: "46f633ea-236e-46e7-a780-a9912bbd2c91"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.052927 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-scripts" (OuterVolumeSpecName: "scripts") pod "46f633ea-236e-46e7-a780-a9912bbd2c91" (UID: "46f633ea-236e-46e7-a780-a9912bbd2c91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.058108 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "46f633ea-236e-46e7-a780-a9912bbd2c91" (UID: "46f633ea-236e-46e7-a780-a9912bbd2c91"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.061372 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-kube-api-access-wpgzl" (OuterVolumeSpecName: "kube-api-access-wpgzl") pod "e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f" (UID: "e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f"). InnerVolumeSpecName "kube-api-access-wpgzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.078176 4958 scope.go:117] "RemoveContainer" containerID="6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.078201 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f633ea-236e-46e7-a780-a9912bbd2c91-kube-api-access-74gmw" (OuterVolumeSpecName: "kube-api-access-74gmw") pod "46f633ea-236e-46e7-a780-a9912bbd2c91" (UID: "46f633ea-236e-46e7-a780-a9912bbd2c91"). InnerVolumeSpecName "kube-api-access-74gmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: E1008 06:56:48.078645 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e\": container with ID starting with 6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e not found: ID does not exist" containerID="6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.078695 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e"} err="failed to get container status \"6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e\": rpc error: code = NotFound desc = could not find container \"6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e\": container with ID starting with 6dacb4d44952e92be642b23f1eb65c0a59c24f8b06f327448896b48160845a1e not found: ID does not exist" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.078717 4958 scope.go:117] "RemoveContainer" containerID="bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.090657 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-config-data" (OuterVolumeSpecName: "config-data") pod "e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f" (UID: "e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.094128 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f" (UID: "e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.094158 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46f633ea-236e-46e7-a780-a9912bbd2c91" (UID: "46f633ea-236e-46e7-a780-a9912bbd2c91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.096500 4958 scope.go:117] "RemoveContainer" containerID="bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0" Oct 08 06:56:48 crc kubenswrapper[4958]: E1008 06:56:48.096776 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0\": container with ID starting with bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0 not found: ID does not exist" containerID="bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.096816 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0"} err="failed to get container status \"bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0\": rpc error: code = NotFound desc = could not find container 
\"bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0\": container with ID starting with bf94431116077534212aa630f54c35894cd8f2c855e5d07d49de82834f9593c0 not found: ID does not exist" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.097107 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-config-data" (OuterVolumeSpecName: "config-data") pod "46f633ea-236e-46e7-a780-a9912bbd2c91" (UID: "46f633ea-236e-46e7-a780-a9912bbd2c91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.121022 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "46f633ea-236e-46e7-a780-a9912bbd2c91" (UID: "46f633ea-236e-46e7-a780-a9912bbd2c91"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.130159 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "46f633ea-236e-46e7-a780-a9912bbd2c91" (UID: "46f633ea-236e-46e7-a780-a9912bbd2c91"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.146394 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f196852b-bfdf-43dd-9579-3ecd8601e7bf-config-data\") pod \"f196852b-bfdf-43dd-9579-3ecd8601e7bf\" (UID: \"f196852b-bfdf-43dd-9579-3ecd8601e7bf\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.146460 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f196852b-bfdf-43dd-9579-3ecd8601e7bf-combined-ca-bundle\") pod \"f196852b-bfdf-43dd-9579-3ecd8601e7bf\" (UID: \"f196852b-bfdf-43dd-9579-3ecd8601e7bf\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.146672 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw2dd\" (UniqueName: \"kubernetes.io/projected/f196852b-bfdf-43dd-9579-3ecd8601e7bf-kube-api-access-rw2dd\") pod \"f196852b-bfdf-43dd-9579-3ecd8601e7bf\" (UID: \"f196852b-bfdf-43dd-9579-3ecd8601e7bf\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.146980 4958 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.146997 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.147007 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.147016 4958 
reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.147025 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.147033 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74gmw\" (UniqueName: \"kubernetes.io/projected/46f633ea-236e-46e7-a780-a9912bbd2c91-kube-api-access-74gmw\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.147043 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpgzl\" (UniqueName: \"kubernetes.io/projected/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-kube-api-access-wpgzl\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.147051 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.147060 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.147067 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46f633ea-236e-46e7-a780-a9912bbd2c91-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.147078 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.153037 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f196852b-bfdf-43dd-9579-3ecd8601e7bf-kube-api-access-rw2dd" (OuterVolumeSpecName: "kube-api-access-rw2dd") pod "f196852b-bfdf-43dd-9579-3ecd8601e7bf" (UID: "f196852b-bfdf-43dd-9579-3ecd8601e7bf"). InnerVolumeSpecName "kube-api-access-rw2dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.167904 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f196852b-bfdf-43dd-9579-3ecd8601e7bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f196852b-bfdf-43dd-9579-3ecd8601e7bf" (UID: "f196852b-bfdf-43dd-9579-3ecd8601e7bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.168493 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f196852b-bfdf-43dd-9579-3ecd8601e7bf-config-data" (OuterVolumeSpecName: "config-data") pod "f196852b-bfdf-43dd-9579-3ecd8601e7bf" (UID: "f196852b-bfdf-43dd-9579-3ecd8601e7bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.248058 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw2dd\" (UniqueName: \"kubernetes.io/projected/f196852b-bfdf-43dd-9579-3ecd8601e7bf-kube-api-access-rw2dd\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.248088 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f196852b-bfdf-43dd-9579-3ecd8601e7bf-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.248099 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f196852b-bfdf-43dd-9579-3ecd8601e7bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.383563 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-56b6f8956c-65c6t"] Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.409137 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-56b6f8956c-65c6t"] Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.427324 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.442156 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.447899 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.451603 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.652258 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.676010 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-config-data\") pod \"90612def-876b-4ae6-88e6-7f3de02515e6\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.676104 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-combined-ca-bundle\") pod \"90612def-876b-4ae6-88e6-7f3de02515e6\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.676141 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90612def-876b-4ae6-88e6-7f3de02515e6-log-httpd\") pod \"90612def-876b-4ae6-88e6-7f3de02515e6\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.676214 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxnhc\" (UniqueName: \"kubernetes.io/projected/90612def-876b-4ae6-88e6-7f3de02515e6-kube-api-access-xxnhc\") pod \"90612def-876b-4ae6-88e6-7f3de02515e6\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.676252 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-sg-core-conf-yaml\") pod \"90612def-876b-4ae6-88e6-7f3de02515e6\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.676278 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-scripts\") pod \"90612def-876b-4ae6-88e6-7f3de02515e6\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.676299 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-ceilometer-tls-certs\") pod \"90612def-876b-4ae6-88e6-7f3de02515e6\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.676322 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90612def-876b-4ae6-88e6-7f3de02515e6-run-httpd\") pod \"90612def-876b-4ae6-88e6-7f3de02515e6\" (UID: \"90612def-876b-4ae6-88e6-7f3de02515e6\") " Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.677426 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90612def-876b-4ae6-88e6-7f3de02515e6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "90612def-876b-4ae6-88e6-7f3de02515e6" (UID: "90612def-876b-4ae6-88e6-7f3de02515e6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.677438 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90612def-876b-4ae6-88e6-7f3de02515e6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "90612def-876b-4ae6-88e6-7f3de02515e6" (UID: "90612def-876b-4ae6-88e6-7f3de02515e6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.685736 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90612def-876b-4ae6-88e6-7f3de02515e6-kube-api-access-xxnhc" (OuterVolumeSpecName: "kube-api-access-xxnhc") pod "90612def-876b-4ae6-88e6-7f3de02515e6" (UID: "90612def-876b-4ae6-88e6-7f3de02515e6"). InnerVolumeSpecName "kube-api-access-xxnhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.690444 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-scripts" (OuterVolumeSpecName: "scripts") pod "90612def-876b-4ae6-88e6-7f3de02515e6" (UID: "90612def-876b-4ae6-88e6-7f3de02515e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.718066 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "90612def-876b-4ae6-88e6-7f3de02515e6" (UID: "90612def-876b-4ae6-88e6-7f3de02515e6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.752327 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "90612def-876b-4ae6-88e6-7f3de02515e6" (UID: "90612def-876b-4ae6-88e6-7f3de02515e6"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.770770 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90612def-876b-4ae6-88e6-7f3de02515e6" (UID: "90612def-876b-4ae6-88e6-7f3de02515e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.778496 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.778516 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90612def-876b-4ae6-88e6-7f3de02515e6-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.778525 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxnhc\" (UniqueName: \"kubernetes.io/projected/90612def-876b-4ae6-88e6-7f3de02515e6-kube-api-access-xxnhc\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.778536 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.778544 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.778554 4958 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.778562 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90612def-876b-4ae6-88e6-7f3de02515e6-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.780886 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-config-data" (OuterVolumeSpecName: "config-data") pod "90612def-876b-4ae6-88e6-7f3de02515e6" (UID: "90612def-876b-4ae6-88e6-7f3de02515e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:56:48 crc kubenswrapper[4958]: I1008 06:56:48.879765 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90612def-876b-4ae6-88e6-7f3de02515e6-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.046294 4958 generic.go:334] "Generic (PLEG): container finished" podID="90612def-876b-4ae6-88e6-7f3de02515e6" containerID="68ce1c4c20caacc4b4bf8c9c201771f0dd5518aadab7a90b068859b08e2891e2" exitCode=0 Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.046640 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90612def-876b-4ae6-88e6-7f3de02515e6","Type":"ContainerDied","Data":"68ce1c4c20caacc4b4bf8c9c201771f0dd5518aadab7a90b068859b08e2891e2"} Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.046788 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90612def-876b-4ae6-88e6-7f3de02515e6","Type":"ContainerDied","Data":"321f7570be8af4a0f3450c3585f6f936d73e0f3b6871ac8372aff4a00c88cbc0"} Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.046990 4958 scope.go:117] 
"RemoveContainer" containerID="ba7cba23461cfa60d06e4353ca3b9cfa8431aae9505a7f4bb4c5c971360e0d5e" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.047238 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.082869 4958 scope.go:117] "RemoveContainer" containerID="0af08bb335a7e945e0ddce748679a5557f4a3a541f9fc1355b64400fd3f89060" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.106577 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.115431 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.123385 4958 scope.go:117] "RemoveContainer" containerID="68ce1c4c20caacc4b4bf8c9c201771f0dd5518aadab7a90b068859b08e2891e2" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.150121 4958 scope.go:117] "RemoveContainer" containerID="2cdd00dcc49f27e886ed7f8e467ce10332efd4b1b22f6c39eb2a00f35e73697a" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.169874 4958 scope.go:117] "RemoveContainer" containerID="ba7cba23461cfa60d06e4353ca3b9cfa8431aae9505a7f4bb4c5c971360e0d5e" Oct 08 06:56:49 crc kubenswrapper[4958]: E1008 06:56:49.170395 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba7cba23461cfa60d06e4353ca3b9cfa8431aae9505a7f4bb4c5c971360e0d5e\": container with ID starting with ba7cba23461cfa60d06e4353ca3b9cfa8431aae9505a7f4bb4c5c971360e0d5e not found: ID does not exist" containerID="ba7cba23461cfa60d06e4353ca3b9cfa8431aae9505a7f4bb4c5c971360e0d5e" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.170428 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7cba23461cfa60d06e4353ca3b9cfa8431aae9505a7f4bb4c5c971360e0d5e"} err="failed to get 
container status \"ba7cba23461cfa60d06e4353ca3b9cfa8431aae9505a7f4bb4c5c971360e0d5e\": rpc error: code = NotFound desc = could not find container \"ba7cba23461cfa60d06e4353ca3b9cfa8431aae9505a7f4bb4c5c971360e0d5e\": container with ID starting with ba7cba23461cfa60d06e4353ca3b9cfa8431aae9505a7f4bb4c5c971360e0d5e not found: ID does not exist" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.170450 4958 scope.go:117] "RemoveContainer" containerID="0af08bb335a7e945e0ddce748679a5557f4a3a541f9fc1355b64400fd3f89060" Oct 08 06:56:49 crc kubenswrapper[4958]: E1008 06:56:49.170804 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0af08bb335a7e945e0ddce748679a5557f4a3a541f9fc1355b64400fd3f89060\": container with ID starting with 0af08bb335a7e945e0ddce748679a5557f4a3a541f9fc1355b64400fd3f89060 not found: ID does not exist" containerID="0af08bb335a7e945e0ddce748679a5557f4a3a541f9fc1355b64400fd3f89060" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.170828 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0af08bb335a7e945e0ddce748679a5557f4a3a541f9fc1355b64400fd3f89060"} err="failed to get container status \"0af08bb335a7e945e0ddce748679a5557f4a3a541f9fc1355b64400fd3f89060\": rpc error: code = NotFound desc = could not find container \"0af08bb335a7e945e0ddce748679a5557f4a3a541f9fc1355b64400fd3f89060\": container with ID starting with 0af08bb335a7e945e0ddce748679a5557f4a3a541f9fc1355b64400fd3f89060 not found: ID does not exist" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.170841 4958 scope.go:117] "RemoveContainer" containerID="68ce1c4c20caacc4b4bf8c9c201771f0dd5518aadab7a90b068859b08e2891e2" Oct 08 06:56:49 crc kubenswrapper[4958]: E1008 06:56:49.171214 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"68ce1c4c20caacc4b4bf8c9c201771f0dd5518aadab7a90b068859b08e2891e2\": container with ID starting with 68ce1c4c20caacc4b4bf8c9c201771f0dd5518aadab7a90b068859b08e2891e2 not found: ID does not exist" containerID="68ce1c4c20caacc4b4bf8c9c201771f0dd5518aadab7a90b068859b08e2891e2" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.171233 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68ce1c4c20caacc4b4bf8c9c201771f0dd5518aadab7a90b068859b08e2891e2"} err="failed to get container status \"68ce1c4c20caacc4b4bf8c9c201771f0dd5518aadab7a90b068859b08e2891e2\": rpc error: code = NotFound desc = could not find container \"68ce1c4c20caacc4b4bf8c9c201771f0dd5518aadab7a90b068859b08e2891e2\": container with ID starting with 68ce1c4c20caacc4b4bf8c9c201771f0dd5518aadab7a90b068859b08e2891e2 not found: ID does not exist" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.171246 4958 scope.go:117] "RemoveContainer" containerID="2cdd00dcc49f27e886ed7f8e467ce10332efd4b1b22f6c39eb2a00f35e73697a" Oct 08 06:56:49 crc kubenswrapper[4958]: E1008 06:56:49.171590 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cdd00dcc49f27e886ed7f8e467ce10332efd4b1b22f6c39eb2a00f35e73697a\": container with ID starting with 2cdd00dcc49f27e886ed7f8e467ce10332efd4b1b22f6c39eb2a00f35e73697a not found: ID does not exist" containerID="2cdd00dcc49f27e886ed7f8e467ce10332efd4b1b22f6c39eb2a00f35e73697a" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.171644 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cdd00dcc49f27e886ed7f8e467ce10332efd4b1b22f6c39eb2a00f35e73697a"} err="failed to get container status \"2cdd00dcc49f27e886ed7f8e467ce10332efd4b1b22f6c39eb2a00f35e73697a\": rpc error: code = NotFound desc = could not find container \"2cdd00dcc49f27e886ed7f8e467ce10332efd4b1b22f6c39eb2a00f35e73697a\": container with ID 
starting with 2cdd00dcc49f27e886ed7f8e467ce10332efd4b1b22f6c39eb2a00f35e73697a not found: ID does not exist" Oct 08 06:56:49 crc kubenswrapper[4958]: E1008 06:56:49.466799 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:56:49 crc kubenswrapper[4958]: E1008 06:56:49.468258 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:56:49 crc kubenswrapper[4958]: E1008 06:56:49.468366 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:56:49 crc kubenswrapper[4958]: E1008 06:56:49.469354 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:56:49 crc kubenswrapper[4958]: E1008 
06:56:49.469467 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-p4rkp" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovsdb-server" Oct 08 06:56:49 crc kubenswrapper[4958]: E1008 06:56:49.469848 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:56:49 crc kubenswrapper[4958]: E1008 06:56:49.471600 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:56:49 crc kubenswrapper[4958]: E1008 06:56:49.471701 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-p4rkp" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovs-vswitchd" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.589046 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f633ea-236e-46e7-a780-a9912bbd2c91" path="/var/lib/kubelet/pods/46f633ea-236e-46e7-a780-a9912bbd2c91/volumes" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.590101 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="90612def-876b-4ae6-88e6-7f3de02515e6" path="/var/lib/kubelet/pods/90612def-876b-4ae6-88e6-7f3de02515e6/volumes" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.591046 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f" path="/var/lib/kubelet/pods/e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f/volumes" Oct 08 06:56:49 crc kubenswrapper[4958]: I1008 06:56:49.592278 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f196852b-bfdf-43dd-9579-3ecd8601e7bf" path="/var/lib/kubelet/pods/f196852b-bfdf-43dd-9579-3ecd8601e7bf/volumes" Oct 08 06:56:52 crc kubenswrapper[4958]: I1008 06:56:52.743234 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dbff5b58b-dsj98" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.207:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 06:56:52 crc kubenswrapper[4958]: I1008 06:56:52.743343 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dbff5b58b-dsj98" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.207:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 06:56:54 crc kubenswrapper[4958]: E1008 06:56:54.469299 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:56:54 crc kubenswrapper[4958]: E1008 06:56:54.470757 4958 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:56:54 crc kubenswrapper[4958]: E1008 06:56:54.471352 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:56:54 crc kubenswrapper[4958]: E1008 06:56:54.471400 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-p4rkp" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovsdb-server" Oct 08 06:56:54 crc kubenswrapper[4958]: E1008 06:56:54.472197 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:56:54 crc kubenswrapper[4958]: E1008 06:56:54.474295 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:56:54 crc kubenswrapper[4958]: E1008 06:56:54.476491 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:56:54 crc kubenswrapper[4958]: E1008 06:56:54.476533 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-p4rkp" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovs-vswitchd" Oct 08 06:56:57 crc kubenswrapper[4958]: I1008 06:56:57.752331 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dbff5b58b-dsj98" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.207:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 06:56:57 crc kubenswrapper[4958]: I1008 06:56:57.752860 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dbff5b58b-dsj98" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.207:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 06:56:59 crc kubenswrapper[4958]: E1008 06:56:59.467742 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is 
running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:56:59 crc kubenswrapper[4958]: E1008 06:56:59.468938 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:56:59 crc kubenswrapper[4958]: E1008 06:56:59.469625 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:56:59 crc kubenswrapper[4958]: E1008 06:56:59.470409 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:56:59 crc kubenswrapper[4958]: E1008 06:56:59.470471 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-p4rkp" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovsdb-server" Oct 08 
06:56:59 crc kubenswrapper[4958]: E1008 06:56:59.473527 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:56:59 crc kubenswrapper[4958]: E1008 06:56:59.475560 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:56:59 crc kubenswrapper[4958]: E1008 06:56:59.475621 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-p4rkp" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovs-vswitchd" Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.753922 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.785590 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-config\") pod \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.785713 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-ovndb-tls-certs\") pod \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.785801 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-public-tls-certs\") pod \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.785845 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-internal-tls-certs\") pod \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.785980 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-httpd-config\") pod \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.786063 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4xxf\" (UniqueName: 
\"kubernetes.io/projected/a720c647-fed0-4c66-83ed-ab4c03fc68ba-kube-api-access-v4xxf\") pod \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.786099 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-combined-ca-bundle\") pod \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\" (UID: \"a720c647-fed0-4c66-83ed-ab4c03fc68ba\") " Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.825736 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a720c647-fed0-4c66-83ed-ab4c03fc68ba" (UID: "a720c647-fed0-4c66-83ed-ab4c03fc68ba"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.826657 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a720c647-fed0-4c66-83ed-ab4c03fc68ba-kube-api-access-v4xxf" (OuterVolumeSpecName: "kube-api-access-v4xxf") pod "a720c647-fed0-4c66-83ed-ab4c03fc68ba" (UID: "a720c647-fed0-4c66-83ed-ab4c03fc68ba"). InnerVolumeSpecName "kube-api-access-v4xxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.854607 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-config" (OuterVolumeSpecName: "config") pod "a720c647-fed0-4c66-83ed-ab4c03fc68ba" (UID: "a720c647-fed0-4c66-83ed-ab4c03fc68ba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.860045 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a720c647-fed0-4c66-83ed-ab4c03fc68ba" (UID: "a720c647-fed0-4c66-83ed-ab4c03fc68ba"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.869387 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a720c647-fed0-4c66-83ed-ab4c03fc68ba" (UID: "a720c647-fed0-4c66-83ed-ab4c03fc68ba"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.875232 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a720c647-fed0-4c66-83ed-ab4c03fc68ba" (UID: "a720c647-fed0-4c66-83ed-ab4c03fc68ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.888640 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.888685 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.888708 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.888726 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.888745 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4xxf\" (UniqueName: \"kubernetes.io/projected/a720c647-fed0-4c66-83ed-ab4c03fc68ba-kube-api-access-v4xxf\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.888763 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.897208 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a720c647-fed0-4c66-83ed-ab4c03fc68ba" (UID: 
"a720c647-fed0-4c66-83ed-ab4c03fc68ba"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:01 crc kubenswrapper[4958]: I1008 06:57:01.990613 4958 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a720c647-fed0-4c66-83ed-ab4c03fc68ba-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:02 crc kubenswrapper[4958]: I1008 06:57:02.242557 4958 generic.go:334] "Generic (PLEG): container finished" podID="a720c647-fed0-4c66-83ed-ab4c03fc68ba" containerID="84cc28a7bcc9b6378dbaf0a2893e4b30dd9ff13ea289f971aaa533e40205724c" exitCode=0 Oct 08 06:57:02 crc kubenswrapper[4958]: I1008 06:57:02.242636 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-845b57c9c7-mn8f6" Oct 08 06:57:02 crc kubenswrapper[4958]: I1008 06:57:02.242638 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-845b57c9c7-mn8f6" event={"ID":"a720c647-fed0-4c66-83ed-ab4c03fc68ba","Type":"ContainerDied","Data":"84cc28a7bcc9b6378dbaf0a2893e4b30dd9ff13ea289f971aaa533e40205724c"} Oct 08 06:57:02 crc kubenswrapper[4958]: I1008 06:57:02.242836 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-845b57c9c7-mn8f6" event={"ID":"a720c647-fed0-4c66-83ed-ab4c03fc68ba","Type":"ContainerDied","Data":"1e3cc1e6b8f302d7abda1b12847225d2aaa807e0e2e744d0b4ccb0d2f3675385"} Oct 08 06:57:02 crc kubenswrapper[4958]: I1008 06:57:02.242883 4958 scope.go:117] "RemoveContainer" containerID="e4bcd5f4d6a4cf4432ebda09962729c94754a433d4babeeddaccaced2bd1b419" Oct 08 06:57:02 crc kubenswrapper[4958]: I1008 06:57:02.338568 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-845b57c9c7-mn8f6"] Oct 08 06:57:02 crc kubenswrapper[4958]: I1008 06:57:02.344627 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-845b57c9c7-mn8f6"] Oct 08 06:57:02 crc kubenswrapper[4958]: 
I1008 06:57:02.346852 4958 scope.go:117] "RemoveContainer" containerID="84cc28a7bcc9b6378dbaf0a2893e4b30dd9ff13ea289f971aaa533e40205724c" Oct 08 06:57:02 crc kubenswrapper[4958]: I1008 06:57:02.381212 4958 scope.go:117] "RemoveContainer" containerID="e4bcd5f4d6a4cf4432ebda09962729c94754a433d4babeeddaccaced2bd1b419" Oct 08 06:57:02 crc kubenswrapper[4958]: E1008 06:57:02.382162 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4bcd5f4d6a4cf4432ebda09962729c94754a433d4babeeddaccaced2bd1b419\": container with ID starting with e4bcd5f4d6a4cf4432ebda09962729c94754a433d4babeeddaccaced2bd1b419 not found: ID does not exist" containerID="e4bcd5f4d6a4cf4432ebda09962729c94754a433d4babeeddaccaced2bd1b419" Oct 08 06:57:02 crc kubenswrapper[4958]: I1008 06:57:02.382262 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4bcd5f4d6a4cf4432ebda09962729c94754a433d4babeeddaccaced2bd1b419"} err="failed to get container status \"e4bcd5f4d6a4cf4432ebda09962729c94754a433d4babeeddaccaced2bd1b419\": rpc error: code = NotFound desc = could not find container \"e4bcd5f4d6a4cf4432ebda09962729c94754a433d4babeeddaccaced2bd1b419\": container with ID starting with e4bcd5f4d6a4cf4432ebda09962729c94754a433d4babeeddaccaced2bd1b419 not found: ID does not exist" Oct 08 06:57:02 crc kubenswrapper[4958]: I1008 06:57:02.382304 4958 scope.go:117] "RemoveContainer" containerID="84cc28a7bcc9b6378dbaf0a2893e4b30dd9ff13ea289f971aaa533e40205724c" Oct 08 06:57:02 crc kubenswrapper[4958]: E1008 06:57:02.382783 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84cc28a7bcc9b6378dbaf0a2893e4b30dd9ff13ea289f971aaa533e40205724c\": container with ID starting with 84cc28a7bcc9b6378dbaf0a2893e4b30dd9ff13ea289f971aaa533e40205724c not found: ID does not exist" 
containerID="84cc28a7bcc9b6378dbaf0a2893e4b30dd9ff13ea289f971aaa533e40205724c" Oct 08 06:57:02 crc kubenswrapper[4958]: I1008 06:57:02.382819 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84cc28a7bcc9b6378dbaf0a2893e4b30dd9ff13ea289f971aaa533e40205724c"} err="failed to get container status \"84cc28a7bcc9b6378dbaf0a2893e4b30dd9ff13ea289f971aaa533e40205724c\": rpc error: code = NotFound desc = could not find container \"84cc28a7bcc9b6378dbaf0a2893e4b30dd9ff13ea289f971aaa533e40205724c\": container with ID starting with 84cc28a7bcc9b6378dbaf0a2893e4b30dd9ff13ea289f971aaa533e40205724c not found: ID does not exist" Oct 08 06:57:02 crc kubenswrapper[4958]: I1008 06:57:02.763203 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dbff5b58b-dsj98" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.207:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 06:57:02 crc kubenswrapper[4958]: I1008 06:57:02.763229 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dbff5b58b-dsj98" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.207:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 06:57:03 crc kubenswrapper[4958]: I1008 06:57:03.592245 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a720c647-fed0-4c66-83ed-ab4c03fc68ba" path="/var/lib/kubelet/pods/a720c647-fed0-4c66-83ed-ab4c03fc68ba/volumes" Oct 08 06:57:04 crc kubenswrapper[4958]: E1008 06:57:04.467494 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b 
is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:57:04 crc kubenswrapper[4958]: E1008 06:57:04.468601 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:57:04 crc kubenswrapper[4958]: E1008 06:57:04.468683 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:57:04 crc kubenswrapper[4958]: E1008 06:57:04.471043 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:57:04 crc kubenswrapper[4958]: E1008 06:57:04.472570 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Oct 08 06:57:04 crc kubenswrapper[4958]: E1008 
06:57:04.472804 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-p4rkp" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovsdb-server" Oct 08 06:57:04 crc kubenswrapper[4958]: E1008 06:57:04.474755 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Oct 08 06:57:04 crc kubenswrapper[4958]: E1008 06:57:04.474834 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-p4rkp" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovs-vswitchd" Oct 08 06:57:07 crc kubenswrapper[4958]: I1008 06:57:07.773145 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dbff5b58b-dsj98" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.207:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 06:57:07 crc kubenswrapper[4958]: I1008 06:57:07.773187 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dbff5b58b-dsj98" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.207:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 06:57:09 crc 
kubenswrapper[4958]: I1008 06:57:09.099543 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p4rkp_bffcae1a-1024-41fe-95f7-c20090e1a4fa/ovs-vswitchd/0.log" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.101132 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.255568 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.264334 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27dbn\" (UniqueName: \"kubernetes.io/projected/bffcae1a-1024-41fe-95f7-c20090e1a4fa-kube-api-access-27dbn\") pod \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.264398 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-run\") pod \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.264486 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bffcae1a-1024-41fe-95f7-c20090e1a4fa-scripts\") pod \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.264515 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-lib\") pod \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 
06:57:09.264558 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-etc-ovs\") pod \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.264547 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-run" (OuterVolumeSpecName: "var-run") pod "bffcae1a-1024-41fe-95f7-c20090e1a4fa" (UID: "bffcae1a-1024-41fe-95f7-c20090e1a4fa"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.264580 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-log\") pod \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\" (UID: \"bffcae1a-1024-41fe-95f7-c20090e1a4fa\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.264603 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-lib" (OuterVolumeSpecName: "var-lib") pod "bffcae1a-1024-41fe-95f7-c20090e1a4fa" (UID: "bffcae1a-1024-41fe-95f7-c20090e1a4fa"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.264628 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "bffcae1a-1024-41fe-95f7-c20090e1a4fa" (UID: "bffcae1a-1024-41fe-95f7-c20090e1a4fa"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.264727 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-log" (OuterVolumeSpecName: "var-log") pod "bffcae1a-1024-41fe-95f7-c20090e1a4fa" (UID: "bffcae1a-1024-41fe-95f7-c20090e1a4fa"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.264864 4958 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-run\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.264875 4958 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-lib\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.264883 4958 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-etc-ovs\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.264891 4958 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bffcae1a-1024-41fe-95f7-c20090e1a4fa-var-log\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.266566 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bffcae1a-1024-41fe-95f7-c20090e1a4fa-scripts" (OuterVolumeSpecName: "scripts") pod "bffcae1a-1024-41fe-95f7-c20090e1a4fa" (UID: "bffcae1a-1024-41fe-95f7-c20090e1a4fa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.276199 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bffcae1a-1024-41fe-95f7-c20090e1a4fa-kube-api-access-27dbn" (OuterVolumeSpecName: "kube-api-access-27dbn") pod "bffcae1a-1024-41fe-95f7-c20090e1a4fa" (UID: "bffcae1a-1024-41fe-95f7-c20090e1a4fa"). InnerVolumeSpecName "kube-api-access-27dbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.323881 4958 generic.go:334] "Generic (PLEG): container finished" podID="23d77fbe-4e70-428d-92b3-926fb7f5547e" containerID="c02be257fd4e27e118cc5b47243544c38b48ad8ee8c4597abe7972db9dcf30bc" exitCode=137 Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.323955 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"23d77fbe-4e70-428d-92b3-926fb7f5547e","Type":"ContainerDied","Data":"c02be257fd4e27e118cc5b47243544c38b48ad8ee8c4597abe7972db9dcf30bc"} Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.323981 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"23d77fbe-4e70-428d-92b3-926fb7f5547e","Type":"ContainerDied","Data":"0bb5e4e05ddeaf8566136b433b64a215bc1eae669b7d8ff712733d9d7a635cff"} Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.323995 4958 scope.go:117] "RemoveContainer" containerID="d0cab1c319b0805fcf376df64f2c0efa43e3a49ab93decdf99cdc592d551d40b" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.324099 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.327390 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p4rkp_bffcae1a-1024-41fe-95f7-c20090e1a4fa/ovs-vswitchd/0.log" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.328053 4958 generic.go:334] "Generic (PLEG): container finished" podID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" exitCode=137 Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.328119 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p4rkp" event={"ID":"bffcae1a-1024-41fe-95f7-c20090e1a4fa","Type":"ContainerDied","Data":"b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea"} Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.328147 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p4rkp" event={"ID":"bffcae1a-1024-41fe-95f7-c20090e1a4fa","Type":"ContainerDied","Data":"49cbc21bca73a06a594e876d0d902080856b147afb42af74b4b07bcdfa970ccc"} Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.328151 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-p4rkp" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.339150 4958 generic.go:334] "Generic (PLEG): container finished" podID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerID="1fcb4d69732f8668a00f6659ff61c3c3e89fb140ae20ef55148d18dda7b7d854" exitCode=137 Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.339228 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"1fcb4d69732f8668a00f6659ff61c3c3e89fb140ae20ef55148d18dda7b7d854"} Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.355164 4958 scope.go:117] "RemoveContainer" containerID="c02be257fd4e27e118cc5b47243544c38b48ad8ee8c4597abe7972db9dcf30bc" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.369719 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-combined-ca-bundle\") pod \"23d77fbe-4e70-428d-92b3-926fb7f5547e\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.369766 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq6rm\" (UniqueName: \"kubernetes.io/projected/23d77fbe-4e70-428d-92b3-926fb7f5547e-kube-api-access-mq6rm\") pod \"23d77fbe-4e70-428d-92b3-926fb7f5547e\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.369792 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-config-data\") pod \"23d77fbe-4e70-428d-92b3-926fb7f5547e\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.369852 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-config-data-custom\") pod \"23d77fbe-4e70-428d-92b3-926fb7f5547e\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.370083 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-scripts\") pod \"23d77fbe-4e70-428d-92b3-926fb7f5547e\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.370109 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23d77fbe-4e70-428d-92b3-926fb7f5547e-etc-machine-id\") pod \"23d77fbe-4e70-428d-92b3-926fb7f5547e\" (UID: \"23d77fbe-4e70-428d-92b3-926fb7f5547e\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.370394 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bffcae1a-1024-41fe-95f7-c20090e1a4fa-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.370412 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27dbn\" (UniqueName: \"kubernetes.io/projected/bffcae1a-1024-41fe-95f7-c20090e1a4fa-kube-api-access-27dbn\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.370441 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23d77fbe-4e70-428d-92b3-926fb7f5547e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "23d77fbe-4e70-428d-92b3-926fb7f5547e" (UID: "23d77fbe-4e70-428d-92b3-926fb7f5547e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.374147 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-scripts" (OuterVolumeSpecName: "scripts") pod "23d77fbe-4e70-428d-92b3-926fb7f5547e" (UID: "23d77fbe-4e70-428d-92b3-926fb7f5547e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.375894 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-p4rkp"] Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.377167 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "23d77fbe-4e70-428d-92b3-926fb7f5547e" (UID: "23d77fbe-4e70-428d-92b3-926fb7f5547e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.377458 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23d77fbe-4e70-428d-92b3-926fb7f5547e-kube-api-access-mq6rm" (OuterVolumeSpecName: "kube-api-access-mq6rm") pod "23d77fbe-4e70-428d-92b3-926fb7f5547e" (UID: "23d77fbe-4e70-428d-92b3-926fb7f5547e"). InnerVolumeSpecName "kube-api-access-mq6rm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.378969 4958 scope.go:117] "RemoveContainer" containerID="d0cab1c319b0805fcf376df64f2c0efa43e3a49ab93decdf99cdc592d551d40b" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.379062 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-p4rkp"] Oct 08 06:57:09 crc kubenswrapper[4958]: E1008 06:57:09.379526 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0cab1c319b0805fcf376df64f2c0efa43e3a49ab93decdf99cdc592d551d40b\": container with ID starting with d0cab1c319b0805fcf376df64f2c0efa43e3a49ab93decdf99cdc592d551d40b not found: ID does not exist" containerID="d0cab1c319b0805fcf376df64f2c0efa43e3a49ab93decdf99cdc592d551d40b" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.379555 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0cab1c319b0805fcf376df64f2c0efa43e3a49ab93decdf99cdc592d551d40b"} err="failed to get container status \"d0cab1c319b0805fcf376df64f2c0efa43e3a49ab93decdf99cdc592d551d40b\": rpc error: code = NotFound desc = could not find container \"d0cab1c319b0805fcf376df64f2c0efa43e3a49ab93decdf99cdc592d551d40b\": container with ID starting with d0cab1c319b0805fcf376df64f2c0efa43e3a49ab93decdf99cdc592d551d40b not found: ID does not exist" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.379576 4958 scope.go:117] "RemoveContainer" containerID="c02be257fd4e27e118cc5b47243544c38b48ad8ee8c4597abe7972db9dcf30bc" Oct 08 06:57:09 crc kubenswrapper[4958]: E1008 06:57:09.379817 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c02be257fd4e27e118cc5b47243544c38b48ad8ee8c4597abe7972db9dcf30bc\": container with ID starting with c02be257fd4e27e118cc5b47243544c38b48ad8ee8c4597abe7972db9dcf30bc not found: 
ID does not exist" containerID="c02be257fd4e27e118cc5b47243544c38b48ad8ee8c4597abe7972db9dcf30bc" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.379837 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c02be257fd4e27e118cc5b47243544c38b48ad8ee8c4597abe7972db9dcf30bc"} err="failed to get container status \"c02be257fd4e27e118cc5b47243544c38b48ad8ee8c4597abe7972db9dcf30bc\": rpc error: code = NotFound desc = could not find container \"c02be257fd4e27e118cc5b47243544c38b48ad8ee8c4597abe7972db9dcf30bc\": container with ID starting with c02be257fd4e27e118cc5b47243544c38b48ad8ee8c4597abe7972db9dcf30bc not found: ID does not exist" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.379850 4958 scope.go:117] "RemoveContainer" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.399227 4958 scope.go:117] "RemoveContainer" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.410493 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23d77fbe-4e70-428d-92b3-926fb7f5547e" (UID: "23d77fbe-4e70-428d-92b3-926fb7f5547e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.414893 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.416725 4958 scope.go:117] "RemoveContainer" containerID="e3090dfe3fa78ec4885b89db3e9af4e5534c13e11dd608cc268457d893b1a806" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.441465 4958 scope.go:117] "RemoveContainer" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.455245 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-config-data" (OuterVolumeSpecName: "config-data") pod "23d77fbe-4e70-428d-92b3-926fb7f5547e" (UID: "23d77fbe-4e70-428d-92b3-926fb7f5547e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: E1008 06:57:09.459229 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea\": container with ID starting with b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea not found: ID does not exist" containerID="b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.459279 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea"} err="failed to get container status \"b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea\": rpc error: code = NotFound desc = could not find container \"b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea\": container with ID starting with b52cdd4c3587c2b1596fde82505ab4a63a34bffdde1aec0e3056d507053747ea not found: ID does not exist" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.459309 4958 scope.go:117] "RemoveContainer" 
containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" Oct 08 06:57:09 crc kubenswrapper[4958]: E1008 06:57:09.460060 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b\": container with ID starting with 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b not found: ID does not exist" containerID="227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.460102 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b"} err="failed to get container status \"227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b\": rpc error: code = NotFound desc = could not find container \"227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b\": container with ID starting with 227b93e512590ccd3480fad05eec6c5f7486e885d8ea82ef1069474c5f6cbf0b not found: ID does not exist" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.460151 4958 scope.go:117] "RemoveContainer" containerID="e3090dfe3fa78ec4885b89db3e9af4e5534c13e11dd608cc268457d893b1a806" Oct 08 06:57:09 crc kubenswrapper[4958]: E1008 06:57:09.460467 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3090dfe3fa78ec4885b89db3e9af4e5534c13e11dd608cc268457d893b1a806\": container with ID starting with e3090dfe3fa78ec4885b89db3e9af4e5534c13e11dd608cc268457d893b1a806 not found: ID does not exist" containerID="e3090dfe3fa78ec4885b89db3e9af4e5534c13e11dd608cc268457d893b1a806" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.460496 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e3090dfe3fa78ec4885b89db3e9af4e5534c13e11dd608cc268457d893b1a806"} err="failed to get container status \"e3090dfe3fa78ec4885b89db3e9af4e5534c13e11dd608cc268457d893b1a806\": rpc error: code = NotFound desc = could not find container \"e3090dfe3fa78ec4885b89db3e9af4e5534c13e11dd608cc268457d893b1a806\": container with ID starting with e3090dfe3fa78ec4885b89db3e9af4e5534c13e11dd608cc268457d893b1a806 not found: ID does not exist" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.471988 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.472034 4958 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23d77fbe-4e70-428d-92b3-926fb7f5547e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.472054 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.472073 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq6rm\" (UniqueName: \"kubernetes.io/projected/23d77fbe-4e70-428d-92b3-926fb7f5547e-kube-api-access-mq6rm\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.472090 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.472107 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/23d77fbe-4e70-428d-92b3-926fb7f5547e-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.572625 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-cache\") pod \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.572690 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift\") pod \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.572783 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.572824 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-lock\") pod \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.572868 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbbbv\" (UniqueName: \"kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-kube-api-access-gbbbv\") pod \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\" (UID: \"6c45aa0e-9caf-42e6-bfbb-59c802d81c98\") " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.573521 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-lock" (OuterVolumeSpecName: "lock") pod "6c45aa0e-9caf-42e6-bfbb-59c802d81c98" (UID: "6c45aa0e-9caf-42e6-bfbb-59c802d81c98"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.573555 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-cache" (OuterVolumeSpecName: "cache") pod "6c45aa0e-9caf-42e6-bfbb-59c802d81c98" (UID: "6c45aa0e-9caf-42e6-bfbb-59c802d81c98"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.577994 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6c45aa0e-9caf-42e6-bfbb-59c802d81c98" (UID: "6c45aa0e-9caf-42e6-bfbb-59c802d81c98"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.578414 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "swift") pod "6c45aa0e-9caf-42e6-bfbb-59c802d81c98" (UID: "6c45aa0e-9caf-42e6-bfbb-59c802d81c98"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.579645 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-kube-api-access-gbbbv" (OuterVolumeSpecName: "kube-api-access-gbbbv") pod "6c45aa0e-9caf-42e6-bfbb-59c802d81c98" (UID: "6c45aa0e-9caf-42e6-bfbb-59c802d81c98"). InnerVolumeSpecName "kube-api-access-gbbbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.592782 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" path="/var/lib/kubelet/pods/bffcae1a-1024-41fe-95f7-c20090e1a4fa/volumes" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.674611 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbbbv\" (UniqueName: \"kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-kube-api-access-gbbbv\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.674652 4958 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-cache\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.674667 4958 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.674709 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.674721 4958 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6c45aa0e-9caf-42e6-bfbb-59c802d81c98-lock\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.695363 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.709814 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 06:57:09 
crc kubenswrapper[4958]: I1008 06:57:09.716943 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 06:57:09 crc kubenswrapper[4958]: I1008 06:57:09.776189 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.367722 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"6c45aa0e-9caf-42e6-bfbb-59c802d81c98","Type":"ContainerDied","Data":"df3e5ea7795d70faac233c1f9b362c79016f98a7f5d2f57257c59a331f30c59d"} Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.367812 4958 scope.go:117] "RemoveContainer" containerID="1fcb4d69732f8668a00f6659ff61c3c3e89fb140ae20ef55148d18dda7b7d854" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.367861 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.415590 4958 scope.go:117] "RemoveContainer" containerID="96600bb67add0f430f3d3f7bc50bb148f53a95e2d96aa280884db2079910fcc5" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.417905 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.427110 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.446616 4958 scope.go:117] "RemoveContainer" containerID="822bb196cd354e0752c30e33f034860b8aa7c4cf0eef390c7e6067b9260d966e" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.478094 4958 scope.go:117] "RemoveContainer" containerID="21df3155fa52a2087045a5927cde073b2ca8d2f30fa88ba4c74a281b2f3fb7bc" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.516897 4958 scope.go:117] "RemoveContainer" 
containerID="b608a4dbe063d74c26342e7c42bce09e30df95c744884718ee615000db2bfb36" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.551227 4958 scope.go:117] "RemoveContainer" containerID="86bfed71cefd8eec097df2c1d8e6ba77ccea56ba62521cec70c88418d4a40639" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.585114 4958 scope.go:117] "RemoveContainer" containerID="724ed394bbd6e962a7c26ddc942e11038dcd01c833577f0f40fabd8ae1c82655" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.634728 4958 scope.go:117] "RemoveContainer" containerID="ef947aa4d03942a34c6bc85ef6fb4b0f2207baa004e3f6b80e29968a08b1d8cb" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.672549 4958 scope.go:117] "RemoveContainer" containerID="f6755a074a4c40620709a87ac6599a018ae64733e179e086ef57f6d9ce9dc4d0" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.712931 4958 scope.go:117] "RemoveContainer" containerID="66d5f937dbd1abd5a08b1b2ddf3cc40f3eb8fc2793d2eff9531136500ae61e84" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.742712 4958 scope.go:117] "RemoveContainer" containerID="0a5a60f42124a354930e3b3306d5ebb23d9d56694f6a482b36b814b7a29405e6" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.771181 4958 scope.go:117] "RemoveContainer" containerID="c13727bb3092a9324ff420e4b08230ab8156bbd4c580c0f6905a950356a2b45b" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.794686 4958 scope.go:117] "RemoveContainer" containerID="f0de83255ba3108743dca08b70ec1668b51eb795dc13a54cf5cbf4041775bd0b" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.819986 4958 scope.go:117] "RemoveContainer" containerID="4afa360de4cdd22485317e27cb843f86474d888319ad6c43a75e14506d8d3331" Oct 08 06:57:10 crc kubenswrapper[4958]: I1008 06:57:10.847415 4958 scope.go:117] "RemoveContainer" containerID="240aa008ea9290a60c7264968b55d9ba55c52133e52044e5dfbe81d67bf1b1b8" Oct 08 06:57:11 crc kubenswrapper[4958]: I1008 06:57:11.395250 4958 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-6dbff5b58b-dsj98" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.207:9311/healthcheck\": EOF" Oct 08 06:57:11 crc kubenswrapper[4958]: I1008 06:57:11.395362 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6dbff5b58b-dsj98" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.207:9311/healthcheck\": EOF" Oct 08 06:57:11 crc kubenswrapper[4958]: I1008 06:57:11.592646 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23d77fbe-4e70-428d-92b3-926fb7f5547e" path="/var/lib/kubelet/pods/23d77fbe-4e70-428d-92b3-926fb7f5547e/volumes" Oct 08 06:57:11 crc kubenswrapper[4958]: I1008 06:57:11.594750 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" path="/var/lib/kubelet/pods/6c45aa0e-9caf-42e6-bfbb-59c802d81c98/volumes" Oct 08 06:57:11 crc kubenswrapper[4958]: I1008 06:57:11.896078 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.012777 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-config-data\") pod \"1792ed77-ae79-44a5-80a3-7a67a8031d75\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.012840 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jgz8\" (UniqueName: \"kubernetes.io/projected/1792ed77-ae79-44a5-80a3-7a67a8031d75-kube-api-access-6jgz8\") pod \"1792ed77-ae79-44a5-80a3-7a67a8031d75\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.012890 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1792ed77-ae79-44a5-80a3-7a67a8031d75-logs\") pod \"1792ed77-ae79-44a5-80a3-7a67a8031d75\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.012920 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-internal-tls-certs\") pod \"1792ed77-ae79-44a5-80a3-7a67a8031d75\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.012964 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-combined-ca-bundle\") pod \"1792ed77-ae79-44a5-80a3-7a67a8031d75\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.013037 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-config-data-custom\") pod \"1792ed77-ae79-44a5-80a3-7a67a8031d75\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.013131 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-public-tls-certs\") pod \"1792ed77-ae79-44a5-80a3-7a67a8031d75\" (UID: \"1792ed77-ae79-44a5-80a3-7a67a8031d75\") " Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.013990 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1792ed77-ae79-44a5-80a3-7a67a8031d75-logs" (OuterVolumeSpecName: "logs") pod "1792ed77-ae79-44a5-80a3-7a67a8031d75" (UID: "1792ed77-ae79-44a5-80a3-7a67a8031d75"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.020290 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1792ed77-ae79-44a5-80a3-7a67a8031d75" (UID: "1792ed77-ae79-44a5-80a3-7a67a8031d75"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.023214 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1792ed77-ae79-44a5-80a3-7a67a8031d75-kube-api-access-6jgz8" (OuterVolumeSpecName: "kube-api-access-6jgz8") pod "1792ed77-ae79-44a5-80a3-7a67a8031d75" (UID: "1792ed77-ae79-44a5-80a3-7a67a8031d75"). InnerVolumeSpecName "kube-api-access-6jgz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.039428 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1792ed77-ae79-44a5-80a3-7a67a8031d75" (UID: "1792ed77-ae79-44a5-80a3-7a67a8031d75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.052096 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1792ed77-ae79-44a5-80a3-7a67a8031d75" (UID: "1792ed77-ae79-44a5-80a3-7a67a8031d75"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.066727 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-config-data" (OuterVolumeSpecName: "config-data") pod "1792ed77-ae79-44a5-80a3-7a67a8031d75" (UID: "1792ed77-ae79-44a5-80a3-7a67a8031d75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.069842 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1792ed77-ae79-44a5-80a3-7a67a8031d75" (UID: "1792ed77-ae79-44a5-80a3-7a67a8031d75"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.114527 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.114569 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jgz8\" (UniqueName: \"kubernetes.io/projected/1792ed77-ae79-44a5-80a3-7a67a8031d75-kube-api-access-6jgz8\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.114584 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1792ed77-ae79-44a5-80a3-7a67a8031d75-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.114597 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.114608 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.114619 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.114629 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1792ed77-ae79-44a5-80a3-7a67a8031d75-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.418622 4958 
generic.go:334] "Generic (PLEG): container finished" podID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerID="b7792d9a64e0c787934db1685bfa23a56a4744bb1fbed4013eb67dffa23a6bb3" exitCode=137 Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.418721 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dbff5b58b-dsj98" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.418741 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dbff5b58b-dsj98" event={"ID":"1792ed77-ae79-44a5-80a3-7a67a8031d75","Type":"ContainerDied","Data":"b7792d9a64e0c787934db1685bfa23a56a4744bb1fbed4013eb67dffa23a6bb3"} Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.418845 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dbff5b58b-dsj98" event={"ID":"1792ed77-ae79-44a5-80a3-7a67a8031d75","Type":"ContainerDied","Data":"6f555752fc6174edb88cbf19e289d81ff16e8cbe80c1621bcce420fe4c8ce9bf"} Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.418886 4958 scope.go:117] "RemoveContainer" containerID="b7792d9a64e0c787934db1685bfa23a56a4744bb1fbed4013eb67dffa23a6bb3" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.462514 4958 scope.go:117] "RemoveContainer" containerID="6b1cfb580b4d593c51a5e15d8aca7dc77cdda3cad24f2f2cbdff273566f0f041" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.473772 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6dbff5b58b-dsj98"] Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.482306 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6dbff5b58b-dsj98"] Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.491437 4958 scope.go:117] "RemoveContainer" containerID="b7792d9a64e0c787934db1685bfa23a56a4744bb1fbed4013eb67dffa23a6bb3" Oct 08 06:57:12 crc kubenswrapper[4958]: E1008 06:57:12.492225 4958 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"b7792d9a64e0c787934db1685bfa23a56a4744bb1fbed4013eb67dffa23a6bb3\": container with ID starting with b7792d9a64e0c787934db1685bfa23a56a4744bb1fbed4013eb67dffa23a6bb3 not found: ID does not exist" containerID="b7792d9a64e0c787934db1685bfa23a56a4744bb1fbed4013eb67dffa23a6bb3" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.492297 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7792d9a64e0c787934db1685bfa23a56a4744bb1fbed4013eb67dffa23a6bb3"} err="failed to get container status \"b7792d9a64e0c787934db1685bfa23a56a4744bb1fbed4013eb67dffa23a6bb3\": rpc error: code = NotFound desc = could not find container \"b7792d9a64e0c787934db1685bfa23a56a4744bb1fbed4013eb67dffa23a6bb3\": container with ID starting with b7792d9a64e0c787934db1685bfa23a56a4744bb1fbed4013eb67dffa23a6bb3 not found: ID does not exist" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.492342 4958 scope.go:117] "RemoveContainer" containerID="6b1cfb580b4d593c51a5e15d8aca7dc77cdda3cad24f2f2cbdff273566f0f041" Oct 08 06:57:12 crc kubenswrapper[4958]: E1008 06:57:12.493095 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b1cfb580b4d593c51a5e15d8aca7dc77cdda3cad24f2f2cbdff273566f0f041\": container with ID starting with 6b1cfb580b4d593c51a5e15d8aca7dc77cdda3cad24f2f2cbdff273566f0f041 not found: ID does not exist" containerID="6b1cfb580b4d593c51a5e15d8aca7dc77cdda3cad24f2f2cbdff273566f0f041" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.493146 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b1cfb580b4d593c51a5e15d8aca7dc77cdda3cad24f2f2cbdff273566f0f041"} err="failed to get container status \"6b1cfb580b4d593c51a5e15d8aca7dc77cdda3cad24f2f2cbdff273566f0f041\": rpc error: code = NotFound desc = could not find container 
\"6b1cfb580b4d593c51a5e15d8aca7dc77cdda3cad24f2f2cbdff273566f0f041\": container with ID starting with 6b1cfb580b4d593c51a5e15d8aca7dc77cdda3cad24f2f2cbdff273566f0f041 not found: ID does not exist" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.880600 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.925238 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-combined-ca-bundle\") pod \"b6c32c16-960f-4a65-abd5-f435d16932f0\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.925394 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-config-data-custom\") pod \"b6c32c16-960f-4a65-abd5-f435d16932f0\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.925463 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs8sv\" (UniqueName: \"kubernetes.io/projected/b6c32c16-960f-4a65-abd5-f435d16932f0-kube-api-access-gs8sv\") pod \"b6c32c16-960f-4a65-abd5-f435d16932f0\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.925520 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-config-data\") pod \"b6c32c16-960f-4a65-abd5-f435d16932f0\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.925591 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b6c32c16-960f-4a65-abd5-f435d16932f0-logs\") pod \"b6c32c16-960f-4a65-abd5-f435d16932f0\" (UID: \"b6c32c16-960f-4a65-abd5-f435d16932f0\") " Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.926486 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6c32c16-960f-4a65-abd5-f435d16932f0-logs" (OuterVolumeSpecName: "logs") pod "b6c32c16-960f-4a65-abd5-f435d16932f0" (UID: "b6c32c16-960f-4a65-abd5-f435d16932f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.929897 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c32c16-960f-4a65-abd5-f435d16932f0-kube-api-access-gs8sv" (OuterVolumeSpecName: "kube-api-access-gs8sv") pod "b6c32c16-960f-4a65-abd5-f435d16932f0" (UID: "b6c32c16-960f-4a65-abd5-f435d16932f0"). InnerVolumeSpecName "kube-api-access-gs8sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.935015 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b6c32c16-960f-4a65-abd5-f435d16932f0" (UID: "b6c32c16-960f-4a65-abd5-f435d16932f0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.965500 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6c32c16-960f-4a65-abd5-f435d16932f0" (UID: "b6c32c16-960f-4a65-abd5-f435d16932f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:12 crc kubenswrapper[4958]: I1008 06:57:12.978023 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-config-data" (OuterVolumeSpecName: "config-data") pod "b6c32c16-960f-4a65-abd5-f435d16932f0" (UID: "b6c32c16-960f-4a65-abd5-f435d16932f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.027607 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.027643 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.027657 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs8sv\" (UniqueName: \"kubernetes.io/projected/b6c32c16-960f-4a65-abd5-f435d16932f0-kube-api-access-gs8sv\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.027673 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c32c16-960f-4a65-abd5-f435d16932f0-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.027684 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c32c16-960f-4a65-abd5-f435d16932f0-logs\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.435170 4958 generic.go:334] "Generic (PLEG): container finished" podID="b6c32c16-960f-4a65-abd5-f435d16932f0" 
containerID="27bbafddf30930dff927d39b61c70f7c9489b8e2196bc77df47e567762b21299" exitCode=137 Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.435280 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.435269 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" event={"ID":"b6c32c16-960f-4a65-abd5-f435d16932f0","Type":"ContainerDied","Data":"27bbafddf30930dff927d39b61c70f7c9489b8e2196bc77df47e567762b21299"} Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.435501 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-d4d8d7b54-5xwrj" event={"ID":"b6c32c16-960f-4a65-abd5-f435d16932f0","Type":"ContainerDied","Data":"c2751b211bda65475b8e5fe09ad88f045d9a2fe01bcc25f62e65deda39e2d2e1"} Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.435750 4958 scope.go:117] "RemoveContainer" containerID="27bbafddf30930dff927d39b61c70f7c9489b8e2196bc77df47e567762b21299" Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.486861 4958 scope.go:117] "RemoveContainer" containerID="896ffc50d5b239aeba18612320b883f4cf61f5d7ccddea0fb20903a8aa88e83a" Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.492127 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-d4d8d7b54-5xwrj"] Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.496823 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-d4d8d7b54-5xwrj"] Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.510391 4958 scope.go:117] "RemoveContainer" containerID="27bbafddf30930dff927d39b61c70f7c9489b8e2196bc77df47e567762b21299" Oct 08 06:57:13 crc kubenswrapper[4958]: E1008 06:57:13.510805 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"27bbafddf30930dff927d39b61c70f7c9489b8e2196bc77df47e567762b21299\": container with ID starting with 27bbafddf30930dff927d39b61c70f7c9489b8e2196bc77df47e567762b21299 not found: ID does not exist" containerID="27bbafddf30930dff927d39b61c70f7c9489b8e2196bc77df47e567762b21299" Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.510845 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27bbafddf30930dff927d39b61c70f7c9489b8e2196bc77df47e567762b21299"} err="failed to get container status \"27bbafddf30930dff927d39b61c70f7c9489b8e2196bc77df47e567762b21299\": rpc error: code = NotFound desc = could not find container \"27bbafddf30930dff927d39b61c70f7c9489b8e2196bc77df47e567762b21299\": container with ID starting with 27bbafddf30930dff927d39b61c70f7c9489b8e2196bc77df47e567762b21299 not found: ID does not exist" Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.510870 4958 scope.go:117] "RemoveContainer" containerID="896ffc50d5b239aeba18612320b883f4cf61f5d7ccddea0fb20903a8aa88e83a" Oct 08 06:57:13 crc kubenswrapper[4958]: E1008 06:57:13.511162 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"896ffc50d5b239aeba18612320b883f4cf61f5d7ccddea0fb20903a8aa88e83a\": container with ID starting with 896ffc50d5b239aeba18612320b883f4cf61f5d7ccddea0fb20903a8aa88e83a not found: ID does not exist" containerID="896ffc50d5b239aeba18612320b883f4cf61f5d7ccddea0fb20903a8aa88e83a" Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.511190 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896ffc50d5b239aeba18612320b883f4cf61f5d7ccddea0fb20903a8aa88e83a"} err="failed to get container status \"896ffc50d5b239aeba18612320b883f4cf61f5d7ccddea0fb20903a8aa88e83a\": rpc error: code = NotFound desc = could not find container 
\"896ffc50d5b239aeba18612320b883f4cf61f5d7ccddea0fb20903a8aa88e83a\": container with ID starting with 896ffc50d5b239aeba18612320b883f4cf61f5d7ccddea0fb20903a8aa88e83a not found: ID does not exist" Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.594593 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" path="/var/lib/kubelet/pods/1792ed77-ae79-44a5-80a3-7a67a8031d75/volumes" Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.596136 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c32c16-960f-4a65-abd5-f435d16932f0" path="/var/lib/kubelet/pods/b6c32c16-960f-4a65-abd5-f435d16932f0/volumes" Oct 08 06:57:13 crc kubenswrapper[4958]: I1008 06:57:13.838091 4958 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podb562d3e4-572e-48d5-9257-4927ef68e988"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podb562d3e4-572e-48d5-9257-4927ef68e988] : Timed out while waiting for systemd to remove kubepods-besteffort-podb562d3e4_572e_48d5_9257_4927ef68e988.slice" Oct 08 06:57:13 crc kubenswrapper[4958]: E1008 06:57:13.838164 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podb562d3e4-572e-48d5-9257-4927ef68e988] : unable to destroy cgroup paths for cgroup [kubepods besteffort podb562d3e4-572e-48d5-9257-4927ef68e988] : Timed out while waiting for systemd to remove kubepods-besteffort-podb562d3e4_572e_48d5_9257_4927ef68e988.slice" pod="openstack/barbican0295-account-delete-dzz74" podUID="b562d3e4-572e-48d5-9257-4927ef68e988" Oct 08 06:57:14 crc kubenswrapper[4958]: I1008 06:57:14.454698 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican0295-account-delete-dzz74" Oct 08 06:57:14 crc kubenswrapper[4958]: I1008 06:57:14.487449 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican0295-account-delete-dzz74"] Oct 08 06:57:14 crc kubenswrapper[4958]: I1008 06:57:14.498909 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican0295-account-delete-dzz74"] Oct 08 06:57:15 crc kubenswrapper[4958]: I1008 06:57:15.592209 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b562d3e4-572e-48d5-9257-4927ef68e988" path="/var/lib/kubelet/pods/b562d3e4-572e-48d5-9257-4927ef68e988/volumes" Oct 08 06:57:31 crc kubenswrapper[4958]: I1008 06:57:31.530319 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-845b57c9c7-mn8f6" podUID="a720c647-fed0-4c66-83ed-ab4c03fc68ba" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.153:9696/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.552541 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p5m26"] Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.553826 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="container-replicator" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.553857 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="container-replicator" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.553898 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4303db42-6842-4d26-bf89-79755d0db57d" containerName="kube-state-metrics" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.553914 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4303db42-6842-4d26-bf89-79755d0db57d" containerName="kube-state-metrics" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.553927 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f633ea-236e-46e7-a780-a9912bbd2c91" containerName="keystone-api" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.553943 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f633ea-236e-46e7-a780-a9912bbd2c91" containerName="keystone-api" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554007 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e48003-108c-4de3-be7e-81946556e25e" containerName="placement-api" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554027 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e48003-108c-4de3-be7e-81946556e25e" containerName="placement-api" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554054 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442b1534-27bc-4d6d-be46-1ea5689c290f" containerName="setup-container" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554072 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="442b1534-27bc-4d6d-be46-1ea5689c290f" containerName="setup-container" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554085 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="swift-recon-cron" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554100 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="swift-recon-cron" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554135 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fd75a5-0719-4f7b-9103-d76319815535" containerName="nova-metadata-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554151 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="90fd75a5-0719-4f7b-9103-d76319815535" containerName="nova-metadata-log" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554184 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5971bc9c-45ee-4ccb-aef5-290f51ac13ba" containerName="openstack-network-exporter" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554201 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5971bc9c-45ee-4ccb-aef5-290f51ac13ba" containerName="openstack-network-exporter" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554232 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a720c647-fed0-4c66-83ed-ab4c03fc68ba" containerName="neutron-httpd" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554275 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a720c647-fed0-4c66-83ed-ab4c03fc68ba" containerName="neutron-httpd" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554305 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" containerName="barbican-worker" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554321 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" containerName="barbican-worker" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554346 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ac8efb-5e1d-4b4c-beba-7d287a699044" containerName="cinder-api" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554361 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ac8efb-5e1d-4b4c-beba-7d287a699044" containerName="cinder-api" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554391 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab49352-e790-47df-a1c8-f1b74e2a0134" containerName="mariadb-account-delete" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554408 4958 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="fab49352-e790-47df-a1c8-f1b74e2a0134" containerName="mariadb-account-delete" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554434 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-reaper" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554449 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-reaper" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554482 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-updater" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554497 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-updater" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554522 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fd75a5-0719-4f7b-9103-d76319815535" containerName="nova-metadata-metadata" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554537 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fd75a5-0719-4f7b-9103-d76319815535" containerName="nova-metadata-metadata" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554557 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" containerName="barbican-keystone-listener-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554573 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" containerName="barbican-keystone-listener-log" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554601 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f931d71-9f8f-4755-a793-ca326e423199" containerName="setup-container" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554616 4958 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8f931d71-9f8f-4755-a793-ca326e423199" containerName="setup-container" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554645 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7" containerName="ovn-controller" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554661 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7" containerName="ovn-controller" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554691 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5afba053-ce3d-4e27-a16f-35dff8f0407c" containerName="galera" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554707 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5afba053-ce3d-4e27-a16f-35dff8f0407c" containerName="galera" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554739 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-replicator" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554755 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-replicator" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554778 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="sg-core" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554793 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="sg-core" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554826 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1091f62d-2fa6-4b93-87ce-8c0fbcc23987" containerName="mysql-bootstrap" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554843 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1091f62d-2fa6-4b93-87ce-8c0fbcc23987" containerName="mysql-bootstrap" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554860 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f931d71-9f8f-4755-a793-ca326e423199" containerName="rabbitmq" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554875 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f931d71-9f8f-4755-a793-ca326e423199" containerName="rabbitmq" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554895 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554912 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.554931 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="ceilometer-central-agent" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.554981 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="ceilometer-central-agent" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555013 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22a5d6b-9ca7-4f30-b997-e28ae554a8be" containerName="nova-scheduler-scheduler" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555029 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22a5d6b-9ca7-4f30-b997-e28ae554a8be" containerName="nova-scheduler-scheduler" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555053 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-server" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555068 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-server" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555100 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovsdb-server" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555115 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovsdb-server" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555137 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f285e309-c3e6-42ce-9f95-8302079cfd71" containerName="openstack-network-exporter" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555153 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f285e309-c3e6-42ce-9f95-8302079cfd71" containerName="openstack-network-exporter" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555181 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07431828-0ed3-42a8-9c9c-fdcdb98c854b" containerName="glance-httpd" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555197 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="07431828-0ed3-42a8-9c9c-fdcdb98c854b" containerName="glance-httpd" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555226 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="container-server" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555241 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="container-server" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555260 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-replicator" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555277 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-replicator" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555310 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6269a952-e10d-442f-8d9f-135e16244e83" containerName="openstack-network-exporter" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555326 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6269a952-e10d-442f-8d9f-135e16244e83" containerName="openstack-network-exporter" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555348 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65112aa3-67bb-47a4-bc56-241ce61eff7b" containerName="nova-api-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555364 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="65112aa3-67bb-47a4-bc56-241ce61eff7b" containerName="nova-api-log" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555397 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23d77fbe-4e70-428d-92b3-926fb7f5547e" containerName="probe" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555412 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="23d77fbe-4e70-428d-92b3-926fb7f5547e" containerName="probe" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555431 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f688514-2336-4067-bb66-8bc690a2da30" containerName="proxy-server" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555446 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f688514-2336-4067-bb66-8bc690a2da30" containerName="proxy-server" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555468 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f" containerName="nova-cell0-conductor-conductor" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555485 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f" containerName="nova-cell0-conductor-conductor" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555518 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="proxy-httpd" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555534 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="proxy-httpd" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555561 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312426b0-8fb6-48ad-ba99-79b87cfcac38" containerName="memcached" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555576 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="312426b0-8fb6-48ad-ba99-79b87cfcac38" containerName="memcached" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555594 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94872b33-329c-42ca-9d90-09c6950dfd83" containerName="barbican-api-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555609 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="94872b33-329c-42ca-9d90-09c6950dfd83" containerName="barbican-api-log" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555642 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="234cbd06-f8de-4d4f-a510-8dc7e5d9db93" containerName="glance-httpd" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555658 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="234cbd06-f8de-4d4f-a510-8dc7e5d9db93" containerName="glance-httpd" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555676 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="container-auditor" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555692 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="container-auditor" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555718 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-expirer" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555735 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-expirer" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555766 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovs-vswitchd" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555809 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovs-vswitchd" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555832 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3e3061c-ca6e-43c6-ba1d-2520f28142c6" containerName="init" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555847 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3e3061c-ca6e-43c6-ba1d-2520f28142c6" containerName="init" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555862 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1279cb-5369-4347-9fc9-d598103536a9" containerName="ovn-northd" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555878 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1279cb-5369-4347-9fc9-d598103536a9" containerName="ovn-northd" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.555909 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="container-updater" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.555925 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" 
containerName="container-updater" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556151 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c32b7167-96c2-4cf2-b330-54562c181940" containerName="mariadb-account-delete" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556179 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c32b7167-96c2-4cf2-b330-54562c181940" containerName="mariadb-account-delete" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556201 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442b1534-27bc-4d6d-be46-1ea5689c290f" containerName="rabbitmq" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556217 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="442b1534-27bc-4d6d-be46-1ea5689c290f" containerName="rabbitmq" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556245 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94872b33-329c-42ca-9d90-09c6950dfd83" containerName="barbican-api" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556264 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="94872b33-329c-42ca-9d90-09c6950dfd83" containerName="barbican-api" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556285 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50505fb3-8aa5-43de-a8f1-617501e46822" containerName="mariadb-account-delete" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556301 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="50505fb3-8aa5-43de-a8f1-617501e46822" containerName="mariadb-account-delete" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556320 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-server" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556337 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" 
containerName="object-server" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556362 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ac8efb-5e1d-4b4c-beba-7d287a699044" containerName="cinder-api-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556377 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ac8efb-5e1d-4b4c-beba-7d287a699044" containerName="cinder-api-log" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556410 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b562d3e4-572e-48d5-9257-4927ef68e988" containerName="mariadb-account-delete" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556426 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b562d3e4-572e-48d5-9257-4927ef68e988" containerName="mariadb-account-delete" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556442 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07431828-0ed3-42a8-9c9c-fdcdb98c854b" containerName="glance-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556459 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="07431828-0ed3-42a8-9c9c-fdcdb98c854b" containerName="glance-log" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556478 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1279cb-5369-4347-9fc9-d598103536a9" containerName="openstack-network-exporter" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556494 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1279cb-5369-4347-9fc9-d598103536a9" containerName="openstack-network-exporter" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556514 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a720c647-fed0-4c66-83ed-ab4c03fc68ba" containerName="neutron-api" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556529 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a720c647-fed0-4c66-83ed-ab4c03fc68ba" containerName="neutron-api" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556558 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e48003-108c-4de3-be7e-81946556e25e" containerName="placement-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556574 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e48003-108c-4de3-be7e-81946556e25e" containerName="placement-log" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556594 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovsdb-server-init" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556610 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovsdb-server-init" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556627 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c32c16-960f-4a65-abd5-f435d16932f0" containerName="barbican-keystone-listener-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556644 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c32c16-960f-4a65-abd5-f435d16932f0" containerName="barbican-keystone-listener-log" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556673 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1091f62d-2fa6-4b93-87ce-8c0fbcc23987" containerName="galera" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556690 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1091f62d-2fa6-4b93-87ce-8c0fbcc23987" containerName="galera" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556719 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6269a952-e10d-442f-8d9f-135e16244e83" containerName="ovsdbserver-nb" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556735 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6269a952-e10d-442f-8d9f-135e16244e83" containerName="ovsdbserver-nb" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556760 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3e3061c-ca6e-43c6-ba1d-2520f28142c6" containerName="dnsmasq-dns" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556777 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3e3061c-ca6e-43c6-ba1d-2520f28142c6" containerName="dnsmasq-dns" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556798 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="ceilometer-notification-agent" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556815 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="ceilometer-notification-agent" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556841 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c32a9e6-d21d-422f-914d-c3e9de16a0d5" containerName="barbican-worker" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556857 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c32a9e6-d21d-422f-914d-c3e9de16a0d5" containerName="barbican-worker" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556891 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f688514-2336-4067-bb66-8bc690a2da30" containerName="proxy-httpd" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556908 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f688514-2336-4067-bb66-8bc690a2da30" containerName="proxy-httpd" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.556935 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65112aa3-67bb-47a4-bc56-241ce61eff7b" containerName="nova-api-api" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.556987 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="65112aa3-67bb-47a4-bc56-241ce61eff7b" containerName="nova-api-api" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.557023 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c32a9e6-d21d-422f-914d-c3e9de16a0d5" containerName="barbican-worker-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.557040 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c32a9e6-d21d-422f-914d-c3e9de16a0d5" containerName="barbican-worker-log" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.557067 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="234cbd06-f8de-4d4f-a510-8dc7e5d9db93" containerName="glance-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.557083 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="234cbd06-f8de-4d4f-a510-8dc7e5d9db93" containerName="glance-log" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.557112 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23d77fbe-4e70-428d-92b3-926fb7f5547e" containerName="cinder-scheduler" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.557130 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="23d77fbe-4e70-428d-92b3-926fb7f5547e" containerName="cinder-scheduler" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.557153 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-auditor" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.557169 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-auditor" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.557194 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f196852b-bfdf-43dd-9579-3ecd8601e7bf" containerName="nova-cell1-conductor-conductor" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.557209 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f196852b-bfdf-43dd-9579-3ecd8601e7bf" containerName="nova-cell1-conductor-conductor" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.557239 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-auditor" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.557255 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-auditor" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.557285 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="rsync" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.557301 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="rsync" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.557320 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" containerName="barbican-worker-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.557335 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" containerName="barbican-worker-log" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.557350 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" containerName="barbican-keystone-listener" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.557367 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" containerName="barbican-keystone-listener" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.557396 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c32c16-960f-4a65-abd5-f435d16932f0" containerName="barbican-keystone-listener" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.557413 4958 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="b6c32c16-960f-4a65-abd5-f435d16932f0" containerName="barbican-keystone-listener" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.557440 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f285e309-c3e6-42ce-9f95-8302079cfd71" containerName="ovsdbserver-sb" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.557456 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f285e309-c3e6-42ce-9f95-8302079cfd71" containerName="ovsdbserver-sb" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.557482 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.557496 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api-log" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.557524 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c106a441-1a40-4bee-9317-b6957f8a6c94" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.557538 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c106a441-1a40-4bee-9317-b6957f8a6c94" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.557557 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655026ad-69aa-4867-8fc8-165d6e801ad0" containerName="mariadb-account-delete" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.557573 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="655026ad-69aa-4867-8fc8-165d6e801ad0" containerName="mariadb-account-delete" Oct 08 06:57:43 crc kubenswrapper[4958]: E1008 06:57:43.557596 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5afba053-ce3d-4e27-a16f-35dff8f0407c" containerName="mysql-bootstrap" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 
06:57:43.557611 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5afba053-ce3d-4e27-a16f-35dff8f0407c" containerName="mysql-bootstrap" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558042 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6269a952-e10d-442f-8d9f-135e16244e83" containerName="ovsdbserver-nb" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558080 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f285e309-c3e6-42ce-9f95-8302079cfd71" containerName="ovsdbserver-sb" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558099 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="fab49352-e790-47df-a1c8-f1b74e2a0134" containerName="mariadb-account-delete" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558117 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558147 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="container-server" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558178 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-reaper" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558205 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" containerName="barbican-keystone-listener-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558224 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f196852b-bfdf-43dd-9579-3ecd8601e7bf" containerName="nova-cell1-conductor-conductor" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558297 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" 
containerName="swift-recon-cron" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558324 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6df9f64e-4e5a-4dca-87d5-530e1c19a9ac" containerName="barbican-keystone-listener" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558341 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f285e309-c3e6-42ce-9f95-8302079cfd71" containerName="openstack-network-exporter" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558369 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="07431828-0ed3-42a8-9c9c-fdcdb98c854b" containerName="glance-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558389 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="90fd75a5-0719-4f7b-9103-d76319815535" containerName="nova-metadata-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558415 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4303db42-6842-4d26-bf89-79755d0db57d" containerName="kube-state-metrics" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558440 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe1279cb-5369-4347-9fc9-d598103536a9" containerName="openstack-network-exporter" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558465 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-server" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558494 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="234cbd06-f8de-4d4f-a510-8dc7e5d9db93" containerName="glance-httpd" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558513 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="container-auditor" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558533 4958 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-expirer" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558561 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="container-replicator" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558587 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-updater" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558619 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a720c647-fed0-4c66-83ed-ab4c03fc68ba" containerName="neutron-api" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558643 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e48003-108c-4de3-be7e-81946556e25e" containerName="placement-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558663 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="50505fb3-8aa5-43de-a8f1-617501e46822" containerName="mariadb-account-delete" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558686 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" containerName="barbican-worker-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558706 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="312426b0-8fb6-48ad-ba99-79b87cfcac38" containerName="memcached" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558728 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f688514-2336-4067-bb66-8bc690a2da30" containerName="proxy-server" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558753 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="ceilometer-central-agent" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558781 4958 
memory_manager.go:354] "RemoveStaleState removing state" podUID="94872b33-329c-42ca-9d90-09c6950dfd83" containerName="barbican-api-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558812 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovsdb-server" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558836 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c32c16-960f-4a65-abd5-f435d16932f0" containerName="barbican-keystone-listener-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558866 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="07431828-0ed3-42a8-9c9c-fdcdb98c854b" containerName="glance-httpd" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558887 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f633ea-236e-46e7-a780-a9912bbd2c91" containerName="keystone-api" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558940 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1091f62d-2fa6-4b93-87ce-8c0fbcc23987" containerName="galera" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.558997 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9b1a97b-b0eb-4283-bcf6-7ec3f45b6c5f" containerName="nova-cell0-conductor-conductor" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559018 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="sg-core" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559038 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="90fd75a5-0719-4f7b-9103-d76319815535" containerName="nova-metadata-metadata" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559056 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ac8efb-5e1d-4b4c-beba-7d287a699044" containerName="cinder-api-log" Oct 08 06:57:43 crc 
kubenswrapper[4958]: I1008 06:57:43.559075 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f688514-2336-4067-bb66-8bc690a2da30" containerName="proxy-httpd" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559098 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5971bc9c-45ee-4ccb-aef5-290f51ac13ba" containerName="openstack-network-exporter" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559131 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e48003-108c-4de3-be7e-81946556e25e" containerName="placement-api" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559149 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c106a441-1a40-4bee-9317-b6957f8a6c94" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559177 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="container-updater" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559205 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="655026ad-69aa-4867-8fc8-165d6e801ad0" containerName="mariadb-account-delete" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559231 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="23d77fbe-4e70-428d-92b3-926fb7f5547e" containerName="probe" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559248 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c32b7167-96c2-4cf2-b330-54562c181940" containerName="mariadb-account-delete" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559275 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c32c16-960f-4a65-abd5-f435d16932f0" containerName="barbican-keystone-listener" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559297 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7b4974ca-181d-4e2e-b4c4-0c425f86f0ef" containerName="barbican-worker" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559316 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ac8efb-5e1d-4b4c-beba-7d287a699044" containerName="cinder-api" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559337 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe1279cb-5369-4347-9fc9-d598103536a9" containerName="ovn-northd" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559356 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="65112aa3-67bb-47a4-bc56-241ce61eff7b" containerName="nova-api-api" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559376 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-replicator" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559394 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="94872b33-329c-42ca-9d90-09c6950dfd83" containerName="barbican-api" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559418 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-replicator" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559448 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-auditor" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559465 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5afba053-ce3d-4e27-a16f-35dff8f0407c" containerName="galera" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559490 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="rsync" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559509 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b562d3e4-572e-48d5-9257-4927ef68e988" containerName="mariadb-account-delete" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559529 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="65112aa3-67bb-47a4-bc56-241ce61eff7b" containerName="nova-api-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559549 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f931d71-9f8f-4755-a793-ca326e423199" containerName="rabbitmq" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559572 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="object-server" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559592 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6269a952-e10d-442f-8d9f-135e16244e83" containerName="openstack-network-exporter" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559610 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="442b1534-27bc-4d6d-be46-1ea5689c290f" containerName="rabbitmq" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559636 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bffcae1a-1024-41fe-95f7-c20090e1a4fa" containerName="ovs-vswitchd" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559651 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c45aa0e-9caf-42e6-bfbb-59c802d81c98" containerName="account-auditor" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559679 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="ceilometer-notification-agent" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559702 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22a5d6b-9ca7-4f30-b997-e28ae554a8be" containerName="nova-scheduler-scheduler" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559725 4958 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7c32a9e6-d21d-422f-914d-c3e9de16a0d5" containerName="barbican-worker" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559746 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="90612def-876b-4ae6-88e6-7f3de02515e6" containerName="proxy-httpd" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559765 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="23d77fbe-4e70-428d-92b3-926fb7f5547e" containerName="cinder-scheduler" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559793 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1792ed77-ae79-44a5-80a3-7a67a8031d75" containerName="barbican-api-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559822 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c6fa9b-cd66-4762-a0b9-3a5fd8c026b7" containerName="ovn-controller" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559844 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="234cbd06-f8de-4d4f-a510-8dc7e5d9db93" containerName="glance-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559860 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a720c647-fed0-4c66-83ed-ab4c03fc68ba" containerName="neutron-httpd" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559880 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c32a9e6-d21d-422f-914d-c3e9de16a0d5" containerName="barbican-worker-log" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.559907 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3e3061c-ca6e-43c6-ba1d-2520f28142c6" containerName="dnsmasq-dns" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.562799 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.571078 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p5m26"] Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.747843 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wb7c\" (UniqueName: \"kubernetes.io/projected/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-kube-api-access-9wb7c\") pod \"community-operators-p5m26\" (UID: \"c384ca5a-38ae-467c-8ac0-a5a646c6a93a\") " pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.747910 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-utilities\") pod \"community-operators-p5m26\" (UID: \"c384ca5a-38ae-467c-8ac0-a5a646c6a93a\") " pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.747937 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-catalog-content\") pod \"community-operators-p5m26\" (UID: \"c384ca5a-38ae-467c-8ac0-a5a646c6a93a\") " pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.849367 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wb7c\" (UniqueName: \"kubernetes.io/projected/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-kube-api-access-9wb7c\") pod \"community-operators-p5m26\" (UID: \"c384ca5a-38ae-467c-8ac0-a5a646c6a93a\") " pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.849465 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-utilities\") pod \"community-operators-p5m26\" (UID: \"c384ca5a-38ae-467c-8ac0-a5a646c6a93a\") " pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.849503 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-catalog-content\") pod \"community-operators-p5m26\" (UID: \"c384ca5a-38ae-467c-8ac0-a5a646c6a93a\") " pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.850307 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-utilities\") pod \"community-operators-p5m26\" (UID: \"c384ca5a-38ae-467c-8ac0-a5a646c6a93a\") " pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.850422 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-catalog-content\") pod \"community-operators-p5m26\" (UID: \"c384ca5a-38ae-467c-8ac0-a5a646c6a93a\") " pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.884131 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wb7c\" (UniqueName: \"kubernetes.io/projected/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-kube-api-access-9wb7c\") pod \"community-operators-p5m26\" (UID: \"c384ca5a-38ae-467c-8ac0-a5a646c6a93a\") " pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:43 crc kubenswrapper[4958]: I1008 06:57:43.890106 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:44 crc kubenswrapper[4958]: I1008 06:57:44.388782 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p5m26"] Oct 08 06:57:44 crc kubenswrapper[4958]: I1008 06:57:44.854261 4958 generic.go:334] "Generic (PLEG): container finished" podID="c384ca5a-38ae-467c-8ac0-a5a646c6a93a" containerID="b01a5aa428d9f5e5f002604cd92e6fef0869d63b89d6527b02ca54fd701b83bf" exitCode=0 Oct 08 06:57:44 crc kubenswrapper[4958]: I1008 06:57:44.854329 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5m26" event={"ID":"c384ca5a-38ae-467c-8ac0-a5a646c6a93a","Type":"ContainerDied","Data":"b01a5aa428d9f5e5f002604cd92e6fef0869d63b89d6527b02ca54fd701b83bf"} Oct 08 06:57:44 crc kubenswrapper[4958]: I1008 06:57:44.854379 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5m26" event={"ID":"c384ca5a-38ae-467c-8ac0-a5a646c6a93a","Type":"ContainerStarted","Data":"3845bf119ec11c3c7b0a1ef9aadc8cfac6739bdfb5b26b58afebb81e3a5244eb"} Oct 08 06:57:44 crc kubenswrapper[4958]: I1008 06:57:44.860510 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 06:57:45 crc kubenswrapper[4958]: I1008 06:57:45.879469 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5m26" event={"ID":"c384ca5a-38ae-467c-8ac0-a5a646c6a93a","Type":"ContainerStarted","Data":"45bb4a137dc1420bc89fed7ef4c738e36483637db35936a7e74b993f4ed52c2f"} Oct 08 06:57:46 crc kubenswrapper[4958]: I1008 06:57:46.892517 4958 generic.go:334] "Generic (PLEG): container finished" podID="c384ca5a-38ae-467c-8ac0-a5a646c6a93a" containerID="45bb4a137dc1420bc89fed7ef4c738e36483637db35936a7e74b993f4ed52c2f" exitCode=0 Oct 08 06:57:46 crc kubenswrapper[4958]: I1008 06:57:46.892609 4958 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-p5m26" event={"ID":"c384ca5a-38ae-467c-8ac0-a5a646c6a93a","Type":"ContainerDied","Data":"45bb4a137dc1420bc89fed7ef4c738e36483637db35936a7e74b993f4ed52c2f"} Oct 08 06:57:47 crc kubenswrapper[4958]: I1008 06:57:47.909061 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5m26" event={"ID":"c384ca5a-38ae-467c-8ac0-a5a646c6a93a","Type":"ContainerStarted","Data":"9e9794404e596d2e194208fda5a337130c4cf1d14f76ba802121eaa5b10344ab"} Oct 08 06:57:47 crc kubenswrapper[4958]: I1008 06:57:47.946503 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p5m26" podStartSLOduration=2.455749894 podStartE2EDuration="4.946476056s" podCreationTimestamp="2025-10-08 06:57:43 +0000 UTC" firstStartedPulling="2025-10-08 06:57:44.856310995 +0000 UTC m=+1407.986003636" lastFinishedPulling="2025-10-08 06:57:47.347037157 +0000 UTC m=+1410.476729798" observedRunningTime="2025-10-08 06:57:47.933891896 +0000 UTC m=+1411.063584577" watchObservedRunningTime="2025-10-08 06:57:47.946476056 +0000 UTC m=+1411.076168697" Oct 08 06:57:48 crc kubenswrapper[4958]: I1008 06:57:48.333507 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wz78l"] Oct 08 06:57:48 crc kubenswrapper[4958]: I1008 06:57:48.370895 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:57:48 crc kubenswrapper[4958]: I1008 06:57:48.417267 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wz78l"] Oct 08 06:57:48 crc kubenswrapper[4958]: I1008 06:57:48.530757 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l9rs\" (UniqueName: \"kubernetes.io/projected/dd6ec601-a92f-4c48-a480-2640318cc807-kube-api-access-2l9rs\") pod \"certified-operators-wz78l\" (UID: \"dd6ec601-a92f-4c48-a480-2640318cc807\") " pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:57:48 crc kubenswrapper[4958]: I1008 06:57:48.530875 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6ec601-a92f-4c48-a480-2640318cc807-utilities\") pod \"certified-operators-wz78l\" (UID: \"dd6ec601-a92f-4c48-a480-2640318cc807\") " pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:57:48 crc kubenswrapper[4958]: I1008 06:57:48.530901 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6ec601-a92f-4c48-a480-2640318cc807-catalog-content\") pod \"certified-operators-wz78l\" (UID: \"dd6ec601-a92f-4c48-a480-2640318cc807\") " pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:57:48 crc kubenswrapper[4958]: I1008 06:57:48.631918 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l9rs\" (UniqueName: \"kubernetes.io/projected/dd6ec601-a92f-4c48-a480-2640318cc807-kube-api-access-2l9rs\") pod \"certified-operators-wz78l\" (UID: \"dd6ec601-a92f-4c48-a480-2640318cc807\") " pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:57:48 crc kubenswrapper[4958]: I1008 06:57:48.632060 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6ec601-a92f-4c48-a480-2640318cc807-utilities\") pod \"certified-operators-wz78l\" (UID: \"dd6ec601-a92f-4c48-a480-2640318cc807\") " pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:57:48 crc kubenswrapper[4958]: I1008 06:57:48.632089 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6ec601-a92f-4c48-a480-2640318cc807-catalog-content\") pod \"certified-operators-wz78l\" (UID: \"dd6ec601-a92f-4c48-a480-2640318cc807\") " pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:57:48 crc kubenswrapper[4958]: I1008 06:57:48.632540 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6ec601-a92f-4c48-a480-2640318cc807-utilities\") pod \"certified-operators-wz78l\" (UID: \"dd6ec601-a92f-4c48-a480-2640318cc807\") " pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:57:48 crc kubenswrapper[4958]: I1008 06:57:48.632575 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6ec601-a92f-4c48-a480-2640318cc807-catalog-content\") pod \"certified-operators-wz78l\" (UID: \"dd6ec601-a92f-4c48-a480-2640318cc807\") " pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:57:48 crc kubenswrapper[4958]: I1008 06:57:48.654202 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l9rs\" (UniqueName: \"kubernetes.io/projected/dd6ec601-a92f-4c48-a480-2640318cc807-kube-api-access-2l9rs\") pod \"certified-operators-wz78l\" (UID: \"dd6ec601-a92f-4c48-a480-2640318cc807\") " pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:57:48 crc kubenswrapper[4958]: I1008 06:57:48.726283 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:57:49 crc kubenswrapper[4958]: I1008 06:57:49.235987 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wz78l"] Oct 08 06:57:49 crc kubenswrapper[4958]: I1008 06:57:49.932762 4958 generic.go:334] "Generic (PLEG): container finished" podID="dd6ec601-a92f-4c48-a480-2640318cc807" containerID="09373c27268e83c52e3c83cb7e574e6fcbb05e93a83b54d2bce4e0b7e8bc5e81" exitCode=0 Oct 08 06:57:49 crc kubenswrapper[4958]: I1008 06:57:49.932842 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz78l" event={"ID":"dd6ec601-a92f-4c48-a480-2640318cc807","Type":"ContainerDied","Data":"09373c27268e83c52e3c83cb7e574e6fcbb05e93a83b54d2bce4e0b7e8bc5e81"} Oct 08 06:57:49 crc kubenswrapper[4958]: I1008 06:57:49.933245 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz78l" event={"ID":"dd6ec601-a92f-4c48-a480-2640318cc807","Type":"ContainerStarted","Data":"d7fb41c598fc2265cf8db2211cc1ce29a8980ec87ac0a0ac1d7383697c34b286"} Oct 08 06:57:50 crc kubenswrapper[4958]: I1008 06:57:50.944177 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz78l" event={"ID":"dd6ec601-a92f-4c48-a480-2640318cc807","Type":"ContainerStarted","Data":"1570763321e30c1d6b4e6bc794c2dddecb1cfe58d177adbca6da92e0d31524b3"} Oct 08 06:57:51 crc kubenswrapper[4958]: I1008 06:57:51.960139 4958 generic.go:334] "Generic (PLEG): container finished" podID="dd6ec601-a92f-4c48-a480-2640318cc807" containerID="1570763321e30c1d6b4e6bc794c2dddecb1cfe58d177adbca6da92e0d31524b3" exitCode=0 Oct 08 06:57:51 crc kubenswrapper[4958]: I1008 06:57:51.960236 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz78l" 
event={"ID":"dd6ec601-a92f-4c48-a480-2640318cc807","Type":"ContainerDied","Data":"1570763321e30c1d6b4e6bc794c2dddecb1cfe58d177adbca6da92e0d31524b3"} Oct 08 06:57:52 crc kubenswrapper[4958]: I1008 06:57:52.971836 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz78l" event={"ID":"dd6ec601-a92f-4c48-a480-2640318cc807","Type":"ContainerStarted","Data":"9582c4aaf3ae9123c40f7191bc036f4df722d2b1c11c2d10479381ec3e7bc554"} Oct 08 06:57:52 crc kubenswrapper[4958]: I1008 06:57:52.999035 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wz78l" podStartSLOduration=2.524082423 podStartE2EDuration="4.99901906s" podCreationTimestamp="2025-10-08 06:57:48 +0000 UTC" firstStartedPulling="2025-10-08 06:57:49.935557009 +0000 UTC m=+1413.065249650" lastFinishedPulling="2025-10-08 06:57:52.410493656 +0000 UTC m=+1415.540186287" observedRunningTime="2025-10-08 06:57:52.998051674 +0000 UTC m=+1416.127744275" watchObservedRunningTime="2025-10-08 06:57:52.99901906 +0000 UTC m=+1416.128711661" Oct 08 06:57:53 crc kubenswrapper[4958]: I1008 06:57:53.890341 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:53 crc kubenswrapper[4958]: I1008 06:57:53.890925 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:53 crc kubenswrapper[4958]: I1008 06:57:53.960278 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:54 crc kubenswrapper[4958]: I1008 06:57:54.045568 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:56 crc kubenswrapper[4958]: I1008 06:57:56.121643 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-p5m26"] Oct 08 06:57:56 crc kubenswrapper[4958]: I1008 06:57:56.122002 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p5m26" podUID="c384ca5a-38ae-467c-8ac0-a5a646c6a93a" containerName="registry-server" containerID="cri-o://9e9794404e596d2e194208fda5a337130c4cf1d14f76ba802121eaa5b10344ab" gracePeriod=2 Oct 08 06:57:56 crc kubenswrapper[4958]: I1008 06:57:56.855720 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:56 crc kubenswrapper[4958]: I1008 06:57:56.973376 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wb7c\" (UniqueName: \"kubernetes.io/projected/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-kube-api-access-9wb7c\") pod \"c384ca5a-38ae-467c-8ac0-a5a646c6a93a\" (UID: \"c384ca5a-38ae-467c-8ac0-a5a646c6a93a\") " Oct 08 06:57:56 crc kubenswrapper[4958]: I1008 06:57:56.973504 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-utilities\") pod \"c384ca5a-38ae-467c-8ac0-a5a646c6a93a\" (UID: \"c384ca5a-38ae-467c-8ac0-a5a646c6a93a\") " Oct 08 06:57:56 crc kubenswrapper[4958]: I1008 06:57:56.973529 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-catalog-content\") pod \"c384ca5a-38ae-467c-8ac0-a5a646c6a93a\" (UID: \"c384ca5a-38ae-467c-8ac0-a5a646c6a93a\") " Oct 08 06:57:56 crc kubenswrapper[4958]: I1008 06:57:56.976408 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-utilities" (OuterVolumeSpecName: "utilities") pod "c384ca5a-38ae-467c-8ac0-a5a646c6a93a" (UID: 
"c384ca5a-38ae-467c-8ac0-a5a646c6a93a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:57:56 crc kubenswrapper[4958]: I1008 06:57:56.982401 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-kube-api-access-9wb7c" (OuterVolumeSpecName: "kube-api-access-9wb7c") pod "c384ca5a-38ae-467c-8ac0-a5a646c6a93a" (UID: "c384ca5a-38ae-467c-8ac0-a5a646c6a93a"). InnerVolumeSpecName "kube-api-access-9wb7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.015274 4958 generic.go:334] "Generic (PLEG): container finished" podID="c384ca5a-38ae-467c-8ac0-a5a646c6a93a" containerID="9e9794404e596d2e194208fda5a337130c4cf1d14f76ba802121eaa5b10344ab" exitCode=0 Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.015332 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5m26" event={"ID":"c384ca5a-38ae-467c-8ac0-a5a646c6a93a","Type":"ContainerDied","Data":"9e9794404e596d2e194208fda5a337130c4cf1d14f76ba802121eaa5b10344ab"} Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.015373 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5m26" event={"ID":"c384ca5a-38ae-467c-8ac0-a5a646c6a93a","Type":"ContainerDied","Data":"3845bf119ec11c3c7b0a1ef9aadc8cfac6739bdfb5b26b58afebb81e3a5244eb"} Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.015405 4958 scope.go:117] "RemoveContainer" containerID="9e9794404e596d2e194208fda5a337130c4cf1d14f76ba802121eaa5b10344ab" Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.015594 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p5m26" Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.058826 4958 scope.go:117] "RemoveContainer" containerID="45bb4a137dc1420bc89fed7ef4c738e36483637db35936a7e74b993f4ed52c2f" Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.060351 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c384ca5a-38ae-467c-8ac0-a5a646c6a93a" (UID: "c384ca5a-38ae-467c-8ac0-a5a646c6a93a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.075732 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.075785 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.075807 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wb7c\" (UniqueName: \"kubernetes.io/projected/c384ca5a-38ae-467c-8ac0-a5a646c6a93a-kube-api-access-9wb7c\") on node \"crc\" DevicePath \"\"" Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.103123 4958 scope.go:117] "RemoveContainer" containerID="b01a5aa428d9f5e5f002604cd92e6fef0869d63b89d6527b02ca54fd701b83bf" Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.130560 4958 scope.go:117] "RemoveContainer" containerID="9e9794404e596d2e194208fda5a337130c4cf1d14f76ba802121eaa5b10344ab" Oct 08 06:57:57 crc kubenswrapper[4958]: E1008 06:57:57.131374 4958 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9e9794404e596d2e194208fda5a337130c4cf1d14f76ba802121eaa5b10344ab\": container with ID starting with 9e9794404e596d2e194208fda5a337130c4cf1d14f76ba802121eaa5b10344ab not found: ID does not exist" containerID="9e9794404e596d2e194208fda5a337130c4cf1d14f76ba802121eaa5b10344ab" Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.131410 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9794404e596d2e194208fda5a337130c4cf1d14f76ba802121eaa5b10344ab"} err="failed to get container status \"9e9794404e596d2e194208fda5a337130c4cf1d14f76ba802121eaa5b10344ab\": rpc error: code = NotFound desc = could not find container \"9e9794404e596d2e194208fda5a337130c4cf1d14f76ba802121eaa5b10344ab\": container with ID starting with 9e9794404e596d2e194208fda5a337130c4cf1d14f76ba802121eaa5b10344ab not found: ID does not exist" Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.131433 4958 scope.go:117] "RemoveContainer" containerID="45bb4a137dc1420bc89fed7ef4c738e36483637db35936a7e74b993f4ed52c2f" Oct 08 06:57:57 crc kubenswrapper[4958]: E1008 06:57:57.131915 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45bb4a137dc1420bc89fed7ef4c738e36483637db35936a7e74b993f4ed52c2f\": container with ID starting with 45bb4a137dc1420bc89fed7ef4c738e36483637db35936a7e74b993f4ed52c2f not found: ID does not exist" containerID="45bb4a137dc1420bc89fed7ef4c738e36483637db35936a7e74b993f4ed52c2f" Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.131936 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45bb4a137dc1420bc89fed7ef4c738e36483637db35936a7e74b993f4ed52c2f"} err="failed to get container status \"45bb4a137dc1420bc89fed7ef4c738e36483637db35936a7e74b993f4ed52c2f\": rpc error: code = NotFound desc = could not find container 
\"45bb4a137dc1420bc89fed7ef4c738e36483637db35936a7e74b993f4ed52c2f\": container with ID starting with 45bb4a137dc1420bc89fed7ef4c738e36483637db35936a7e74b993f4ed52c2f not found: ID does not exist" Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.132036 4958 scope.go:117] "RemoveContainer" containerID="b01a5aa428d9f5e5f002604cd92e6fef0869d63b89d6527b02ca54fd701b83bf" Oct 08 06:57:57 crc kubenswrapper[4958]: E1008 06:57:57.132473 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b01a5aa428d9f5e5f002604cd92e6fef0869d63b89d6527b02ca54fd701b83bf\": container with ID starting with b01a5aa428d9f5e5f002604cd92e6fef0869d63b89d6527b02ca54fd701b83bf not found: ID does not exist" containerID="b01a5aa428d9f5e5f002604cd92e6fef0869d63b89d6527b02ca54fd701b83bf" Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.132500 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b01a5aa428d9f5e5f002604cd92e6fef0869d63b89d6527b02ca54fd701b83bf"} err="failed to get container status \"b01a5aa428d9f5e5f002604cd92e6fef0869d63b89d6527b02ca54fd701b83bf\": rpc error: code = NotFound desc = could not find container \"b01a5aa428d9f5e5f002604cd92e6fef0869d63b89d6527b02ca54fd701b83bf\": container with ID starting with b01a5aa428d9f5e5f002604cd92e6fef0869d63b89d6527b02ca54fd701b83bf not found: ID does not exist" Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.383311 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p5m26"] Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.394983 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p5m26"] Oct 08 06:57:57 crc kubenswrapper[4958]: I1008 06:57:57.593712 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c384ca5a-38ae-467c-8ac0-a5a646c6a93a" 
path="/var/lib/kubelet/pods/c384ca5a-38ae-467c-8ac0-a5a646c6a93a/volumes" Oct 08 06:57:58 crc kubenswrapper[4958]: I1008 06:57:58.727058 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:57:58 crc kubenswrapper[4958]: I1008 06:57:58.727520 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:57:58 crc kubenswrapper[4958]: I1008 06:57:58.804616 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:57:59 crc kubenswrapper[4958]: I1008 06:57:59.112929 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:58:00 crc kubenswrapper[4958]: I1008 06:58:00.320741 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wz78l"] Oct 08 06:58:01 crc kubenswrapper[4958]: I1008 06:58:01.057766 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wz78l" podUID="dd6ec601-a92f-4c48-a480-2640318cc807" containerName="registry-server" containerID="cri-o://9582c4aaf3ae9123c40f7191bc036f4df722d2b1c11c2d10479381ec3e7bc554" gracePeriod=2 Oct 08 06:58:01 crc kubenswrapper[4958]: I1008 06:58:01.585678 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:58:01 crc kubenswrapper[4958]: I1008 06:58:01.754802 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6ec601-a92f-4c48-a480-2640318cc807-catalog-content\") pod \"dd6ec601-a92f-4c48-a480-2640318cc807\" (UID: \"dd6ec601-a92f-4c48-a480-2640318cc807\") " Oct 08 06:58:01 crc kubenswrapper[4958]: I1008 06:58:01.754939 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l9rs\" (UniqueName: \"kubernetes.io/projected/dd6ec601-a92f-4c48-a480-2640318cc807-kube-api-access-2l9rs\") pod \"dd6ec601-a92f-4c48-a480-2640318cc807\" (UID: \"dd6ec601-a92f-4c48-a480-2640318cc807\") " Oct 08 06:58:01 crc kubenswrapper[4958]: I1008 06:58:01.755032 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6ec601-a92f-4c48-a480-2640318cc807-utilities\") pod \"dd6ec601-a92f-4c48-a480-2640318cc807\" (UID: \"dd6ec601-a92f-4c48-a480-2640318cc807\") " Oct 08 06:58:01 crc kubenswrapper[4958]: I1008 06:58:01.756319 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd6ec601-a92f-4c48-a480-2640318cc807-utilities" (OuterVolumeSpecName: "utilities") pod "dd6ec601-a92f-4c48-a480-2640318cc807" (UID: "dd6ec601-a92f-4c48-a480-2640318cc807"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:58:01 crc kubenswrapper[4958]: I1008 06:58:01.756851 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6ec601-a92f-4c48-a480-2640318cc807-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 06:58:01 crc kubenswrapper[4958]: I1008 06:58:01.764340 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd6ec601-a92f-4c48-a480-2640318cc807-kube-api-access-2l9rs" (OuterVolumeSpecName: "kube-api-access-2l9rs") pod "dd6ec601-a92f-4c48-a480-2640318cc807" (UID: "dd6ec601-a92f-4c48-a480-2640318cc807"). InnerVolumeSpecName "kube-api-access-2l9rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 06:58:01 crc kubenswrapper[4958]: I1008 06:58:01.858306 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l9rs\" (UniqueName: \"kubernetes.io/projected/dd6ec601-a92f-4c48-a480-2640318cc807-kube-api-access-2l9rs\") on node \"crc\" DevicePath \"\"" Oct 08 06:58:01 crc kubenswrapper[4958]: I1008 06:58:01.895520 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd6ec601-a92f-4c48-a480-2640318cc807-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd6ec601-a92f-4c48-a480-2640318cc807" (UID: "dd6ec601-a92f-4c48-a480-2640318cc807"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 06:58:01 crc kubenswrapper[4958]: I1008 06:58:01.960353 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6ec601-a92f-4c48-a480-2640318cc807-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 06:58:02 crc kubenswrapper[4958]: I1008 06:58:02.073391 4958 generic.go:334] "Generic (PLEG): container finished" podID="dd6ec601-a92f-4c48-a480-2640318cc807" containerID="9582c4aaf3ae9123c40f7191bc036f4df722d2b1c11c2d10479381ec3e7bc554" exitCode=0 Oct 08 06:58:02 crc kubenswrapper[4958]: I1008 06:58:02.073439 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz78l" event={"ID":"dd6ec601-a92f-4c48-a480-2640318cc807","Type":"ContainerDied","Data":"9582c4aaf3ae9123c40f7191bc036f4df722d2b1c11c2d10479381ec3e7bc554"} Oct 08 06:58:02 crc kubenswrapper[4958]: I1008 06:58:02.073471 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz78l" event={"ID":"dd6ec601-a92f-4c48-a480-2640318cc807","Type":"ContainerDied","Data":"d7fb41c598fc2265cf8db2211cc1ce29a8980ec87ac0a0ac1d7383697c34b286"} Oct 08 06:58:02 crc kubenswrapper[4958]: I1008 06:58:02.073493 4958 scope.go:117] "RemoveContainer" containerID="9582c4aaf3ae9123c40f7191bc036f4df722d2b1c11c2d10479381ec3e7bc554" Oct 08 06:58:02 crc kubenswrapper[4958]: I1008 06:58:02.074140 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wz78l" Oct 08 06:58:02 crc kubenswrapper[4958]: I1008 06:58:02.101842 4958 scope.go:117] "RemoveContainer" containerID="1570763321e30c1d6b4e6bc794c2dddecb1cfe58d177adbca6da92e0d31524b3" Oct 08 06:58:02 crc kubenswrapper[4958]: I1008 06:58:02.121629 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wz78l"] Oct 08 06:58:02 crc kubenswrapper[4958]: I1008 06:58:02.125766 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wz78l"] Oct 08 06:58:02 crc kubenswrapper[4958]: I1008 06:58:02.146237 4958 scope.go:117] "RemoveContainer" containerID="09373c27268e83c52e3c83cb7e574e6fcbb05e93a83b54d2bce4e0b7e8bc5e81" Oct 08 06:58:02 crc kubenswrapper[4958]: I1008 06:58:02.177150 4958 scope.go:117] "RemoveContainer" containerID="9582c4aaf3ae9123c40f7191bc036f4df722d2b1c11c2d10479381ec3e7bc554" Oct 08 06:58:02 crc kubenswrapper[4958]: E1008 06:58:02.177432 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9582c4aaf3ae9123c40f7191bc036f4df722d2b1c11c2d10479381ec3e7bc554\": container with ID starting with 9582c4aaf3ae9123c40f7191bc036f4df722d2b1c11c2d10479381ec3e7bc554 not found: ID does not exist" containerID="9582c4aaf3ae9123c40f7191bc036f4df722d2b1c11c2d10479381ec3e7bc554" Oct 08 06:58:02 crc kubenswrapper[4958]: I1008 06:58:02.177465 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9582c4aaf3ae9123c40f7191bc036f4df722d2b1c11c2d10479381ec3e7bc554"} err="failed to get container status \"9582c4aaf3ae9123c40f7191bc036f4df722d2b1c11c2d10479381ec3e7bc554\": rpc error: code = NotFound desc = could not find container \"9582c4aaf3ae9123c40f7191bc036f4df722d2b1c11c2d10479381ec3e7bc554\": container with ID starting with 9582c4aaf3ae9123c40f7191bc036f4df722d2b1c11c2d10479381ec3e7bc554 not 
found: ID does not exist" Oct 08 06:58:02 crc kubenswrapper[4958]: I1008 06:58:02.177488 4958 scope.go:117] "RemoveContainer" containerID="1570763321e30c1d6b4e6bc794c2dddecb1cfe58d177adbca6da92e0d31524b3" Oct 08 06:58:02 crc kubenswrapper[4958]: E1008 06:58:02.177804 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1570763321e30c1d6b4e6bc794c2dddecb1cfe58d177adbca6da92e0d31524b3\": container with ID starting with 1570763321e30c1d6b4e6bc794c2dddecb1cfe58d177adbca6da92e0d31524b3 not found: ID does not exist" containerID="1570763321e30c1d6b4e6bc794c2dddecb1cfe58d177adbca6da92e0d31524b3" Oct 08 06:58:02 crc kubenswrapper[4958]: I1008 06:58:02.177823 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1570763321e30c1d6b4e6bc794c2dddecb1cfe58d177adbca6da92e0d31524b3"} err="failed to get container status \"1570763321e30c1d6b4e6bc794c2dddecb1cfe58d177adbca6da92e0d31524b3\": rpc error: code = NotFound desc = could not find container \"1570763321e30c1d6b4e6bc794c2dddecb1cfe58d177adbca6da92e0d31524b3\": container with ID starting with 1570763321e30c1d6b4e6bc794c2dddecb1cfe58d177adbca6da92e0d31524b3 not found: ID does not exist" Oct 08 06:58:02 crc kubenswrapper[4958]: I1008 06:58:02.177835 4958 scope.go:117] "RemoveContainer" containerID="09373c27268e83c52e3c83cb7e574e6fcbb05e93a83b54d2bce4e0b7e8bc5e81" Oct 08 06:58:02 crc kubenswrapper[4958]: E1008 06:58:02.178047 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09373c27268e83c52e3c83cb7e574e6fcbb05e93a83b54d2bce4e0b7e8bc5e81\": container with ID starting with 09373c27268e83c52e3c83cb7e574e6fcbb05e93a83b54d2bce4e0b7e8bc5e81 not found: ID does not exist" containerID="09373c27268e83c52e3c83cb7e574e6fcbb05e93a83b54d2bce4e0b7e8bc5e81" Oct 08 06:58:02 crc kubenswrapper[4958]: I1008 06:58:02.178069 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09373c27268e83c52e3c83cb7e574e6fcbb05e93a83b54d2bce4e0b7e8bc5e81"} err="failed to get container status \"09373c27268e83c52e3c83cb7e574e6fcbb05e93a83b54d2bce4e0b7e8bc5e81\": rpc error: code = NotFound desc = could not find container \"09373c27268e83c52e3c83cb7e574e6fcbb05e93a83b54d2bce4e0b7e8bc5e81\": container with ID starting with 09373c27268e83c52e3c83cb7e574e6fcbb05e93a83b54d2bce4e0b7e8bc5e81 not found: ID does not exist" Oct 08 06:58:03 crc kubenswrapper[4958]: I1008 06:58:03.592497 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd6ec601-a92f-4c48-a480-2640318cc807" path="/var/lib/kubelet/pods/dd6ec601-a92f-4c48-a480-2640318cc807/volumes" Oct 08 06:58:06 crc kubenswrapper[4958]: I1008 06:58:06.844873 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:58:06 crc kubenswrapper[4958]: I1008 06:58:06.845402 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:58:33 crc kubenswrapper[4958]: I1008 06:58:33.588859 4958 scope.go:117] "RemoveContainer" containerID="748589410a41fce79276e353489590fafe207883c70c06e791605b61ddfc888d" Oct 08 06:58:33 crc kubenswrapper[4958]: I1008 06:58:33.637154 4958 scope.go:117] "RemoveContainer" containerID="dd7ce193b3de687f137afb222a80107abe63702c573cafc0830aa7ef4d6a47c7" Oct 08 06:58:33 crc kubenswrapper[4958]: I1008 06:58:33.673316 4958 scope.go:117] "RemoveContainer" 
containerID="f2372075b46c0743cc70633e7c0ea3c807a4c5c06b80f014e70655df4dbd2021" Oct 08 06:58:33 crc kubenswrapper[4958]: I1008 06:58:33.713058 4958 scope.go:117] "RemoveContainer" containerID="1c5177a5e7f56e770d75063c504837fbc10992e43fba0322e87a21a8777d8212" Oct 08 06:58:33 crc kubenswrapper[4958]: I1008 06:58:33.754412 4958 scope.go:117] "RemoveContainer" containerID="feabc94a85f79793c9a91938bc1f0e46c44fc952c2688f0252f2eb939bc20cd7" Oct 08 06:58:33 crc kubenswrapper[4958]: I1008 06:58:33.813209 4958 scope.go:117] "RemoveContainer" containerID="1b5a73ff987b9a0e1dcfa0504c77a4586a7bcf1a30cf08135d1117c4cb4997e8" Oct 08 06:58:33 crc kubenswrapper[4958]: I1008 06:58:33.841309 4958 scope.go:117] "RemoveContainer" containerID="ece93e76e93080675a52102e60d92b1ae621efbede31e3c0111769ac4d2d73cc" Oct 08 06:58:33 crc kubenswrapper[4958]: I1008 06:58:33.881345 4958 scope.go:117] "RemoveContainer" containerID="0ccd56a1958ce0154664df950fcf2878b9a906bd10c575f921169414c7f923ef" Oct 08 06:58:33 crc kubenswrapper[4958]: I1008 06:58:33.908810 4958 scope.go:117] "RemoveContainer" containerID="4590c8a910fba3039286cd5008abed771287df6ab7eec3b685f1fb0204c2403e" Oct 08 06:58:33 crc kubenswrapper[4958]: I1008 06:58:33.934202 4958 scope.go:117] "RemoveContainer" containerID="3646ca521aff130be148cea07cbf7a3bcb5ac1b446d8d33e3d9792d31ce394a1" Oct 08 06:58:33 crc kubenswrapper[4958]: I1008 06:58:33.973379 4958 scope.go:117] "RemoveContainer" containerID="4f1396c7a4ac66765f9854b77cee40014b836f5ef211b398af4a422dace45ab9" Oct 08 06:58:34 crc kubenswrapper[4958]: I1008 06:58:34.001794 4958 scope.go:117] "RemoveContainer" containerID="09a097e49f1b05aef7bc207ccb6b6a019ea4d56e6334957284e3d1938ade8820" Oct 08 06:58:34 crc kubenswrapper[4958]: I1008 06:58:34.034003 4958 scope.go:117] "RemoveContainer" containerID="153a18a4416f06d0635bfb0680262cdb03c0d67457a5650a0a56b35b8c9c2dd7" Oct 08 06:58:34 crc kubenswrapper[4958]: I1008 06:58:34.082300 4958 scope.go:117] "RemoveContainer" 
containerID="ec623950d1af2bcac81bd3887205674263328f3f7bf9108b25835f232bfdd6aa" Oct 08 06:58:34 crc kubenswrapper[4958]: I1008 06:58:34.108568 4958 scope.go:117] "RemoveContainer" containerID="f61ab36dcee33b75ccc3971394dfde5a04ad577077ba1fea21e678472718c630" Oct 08 06:58:34 crc kubenswrapper[4958]: I1008 06:58:34.137237 4958 scope.go:117] "RemoveContainer" containerID="d783604610a9189f256be5e1720df1527b094750fa34cd27977a42f5118d7d5b" Oct 08 06:58:34 crc kubenswrapper[4958]: I1008 06:58:34.166734 4958 scope.go:117] "RemoveContainer" containerID="812d87e862c8d47a60a50d5e027b634ae93f921abb1f5a6989f2c0dcf3164616" Oct 08 06:58:34 crc kubenswrapper[4958]: I1008 06:58:34.208487 4958 scope.go:117] "RemoveContainer" containerID="d2b992a17d90846839e6e707ee029a59a8ab08f95683d9d6e29937a29ffe21a0" Oct 08 06:58:34 crc kubenswrapper[4958]: I1008 06:58:34.240434 4958 scope.go:117] "RemoveContainer" containerID="a3a078c249e2a4ee019144d666f75b0cec5561a5446b80136b45678bc868d40f" Oct 08 06:58:36 crc kubenswrapper[4958]: I1008 06:58:36.845346 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:58:36 crc kubenswrapper[4958]: I1008 06:58:36.846089 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:59:06 crc kubenswrapper[4958]: I1008 06:59:06.845918 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 06:59:06 crc kubenswrapper[4958]: I1008 06:59:06.847049 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 06:59:06 crc kubenswrapper[4958]: I1008 06:59:06.847180 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 06:59:06 crc kubenswrapper[4958]: I1008 06:59:06.848524 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51ab079a19b3cda7fa6717a3a6e40663eab6249d65864efbeac7f7030917e815"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 06:59:06 crc kubenswrapper[4958]: I1008 06:59:06.848668 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://51ab079a19b3cda7fa6717a3a6e40663eab6249d65864efbeac7f7030917e815" gracePeriod=600 Oct 08 06:59:07 crc kubenswrapper[4958]: I1008 06:59:07.887236 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="51ab079a19b3cda7fa6717a3a6e40663eab6249d65864efbeac7f7030917e815" exitCode=0 Oct 08 06:59:07 crc kubenswrapper[4958]: I1008 06:59:07.887342 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" 
event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"51ab079a19b3cda7fa6717a3a6e40663eab6249d65864efbeac7f7030917e815"} Oct 08 06:59:07 crc kubenswrapper[4958]: I1008 06:59:07.888201 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137"} Oct 08 06:59:07 crc kubenswrapper[4958]: I1008 06:59:07.888240 4958 scope.go:117] "RemoveContainer" containerID="5b130d69a8aa9d1feb209f9a1f16b0b50db741e0776429ac28dc669dd948e901" Oct 08 06:59:34 crc kubenswrapper[4958]: I1008 06:59:34.707226 4958 scope.go:117] "RemoveContainer" containerID="3fa90dad8a7721798b2697fcc6a430121bab915e5b47d28f33472097cb2d2201" Oct 08 06:59:34 crc kubenswrapper[4958]: I1008 06:59:34.743867 4958 scope.go:117] "RemoveContainer" containerID="5ac0c0d9c6a789604245fcea706bdcea4c122528641faf5d1ca953ccc32728a2" Oct 08 06:59:34 crc kubenswrapper[4958]: I1008 06:59:34.802597 4958 scope.go:117] "RemoveContainer" containerID="b6c7d3a811f444cf71be03e845aacb115c373eec9e132255314c4364810bcbd9" Oct 08 06:59:34 crc kubenswrapper[4958]: I1008 06:59:34.846518 4958 scope.go:117] "RemoveContainer" containerID="d527a630ea4d61d6eb0f13c8d4c75714d192decdc6d656ce04fc555e290f1155" Oct 08 06:59:34 crc kubenswrapper[4958]: I1008 06:59:34.898987 4958 scope.go:117] "RemoveContainer" containerID="32865b682afefc99acaf2a94aaeb014f9a76945d17900abfa265c8b8473f5083" Oct 08 06:59:34 crc kubenswrapper[4958]: I1008 06:59:34.927487 4958 scope.go:117] "RemoveContainer" containerID="f34eb08efa67b475f0df3b642b635f5ede09883090a37f940dd4419c81dbeb41" Oct 08 06:59:34 crc kubenswrapper[4958]: I1008 06:59:34.962679 4958 scope.go:117] "RemoveContainer" containerID="ca95eff0d7855e4b9391469f340662ffafa4e08cc2ef525ae33e4fa92980145a" Oct 08 06:59:35 crc kubenswrapper[4958]: I1008 06:59:35.019378 4958 
scope.go:117] "RemoveContainer" containerID="4312838554efe193aa4b07d7c2a6ac19aed00f4fb393b9586861c3822e7c76fc" Oct 08 06:59:35 crc kubenswrapper[4958]: I1008 06:59:35.060967 4958 scope.go:117] "RemoveContainer" containerID="e59c531bcd48e98457037c11f2f6e70713cbace1cbc0c76772bbc6e06ba70b54" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.152589 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw"] Oct 08 07:00:00 crc kubenswrapper[4958]: E1008 07:00:00.153793 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c384ca5a-38ae-467c-8ac0-a5a646c6a93a" containerName="registry-server" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.153816 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c384ca5a-38ae-467c-8ac0-a5a646c6a93a" containerName="registry-server" Oct 08 07:00:00 crc kubenswrapper[4958]: E1008 07:00:00.153845 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c384ca5a-38ae-467c-8ac0-a5a646c6a93a" containerName="extract-content" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.153859 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c384ca5a-38ae-467c-8ac0-a5a646c6a93a" containerName="extract-content" Oct 08 07:00:00 crc kubenswrapper[4958]: E1008 07:00:00.153878 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6ec601-a92f-4c48-a480-2640318cc807" containerName="registry-server" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.153893 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6ec601-a92f-4c48-a480-2640318cc807" containerName="registry-server" Oct 08 07:00:00 crc kubenswrapper[4958]: E1008 07:00:00.153920 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6ec601-a92f-4c48-a480-2640318cc807" containerName="extract-content" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.153932 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dd6ec601-a92f-4c48-a480-2640318cc807" containerName="extract-content" Oct 08 07:00:00 crc kubenswrapper[4958]: E1008 07:00:00.153998 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6ec601-a92f-4c48-a480-2640318cc807" containerName="extract-utilities" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.154017 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6ec601-a92f-4c48-a480-2640318cc807" containerName="extract-utilities" Oct 08 07:00:00 crc kubenswrapper[4958]: E1008 07:00:00.154041 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c384ca5a-38ae-467c-8ac0-a5a646c6a93a" containerName="extract-utilities" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.154054 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c384ca5a-38ae-467c-8ac0-a5a646c6a93a" containerName="extract-utilities" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.154308 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd6ec601-a92f-4c48-a480-2640318cc807" containerName="registry-server" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.154351 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c384ca5a-38ae-467c-8ac0-a5a646c6a93a" containerName="registry-server" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.155184 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.157500 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.157596 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.181589 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw"] Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.238534 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8vkn\" (UniqueName: \"kubernetes.io/projected/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-kube-api-access-d8vkn\") pod \"collect-profiles-29331780-9vdpw\" (UID: \"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.238610 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-config-volume\") pod \"collect-profiles-29331780-9vdpw\" (UID: \"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.238705 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-secret-volume\") pod \"collect-profiles-29331780-9vdpw\" (UID: \"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.340678 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-secret-volume\") pod \"collect-profiles-29331780-9vdpw\" (UID: \"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.340761 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8vkn\" (UniqueName: \"kubernetes.io/projected/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-kube-api-access-d8vkn\") pod \"collect-profiles-29331780-9vdpw\" (UID: \"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.340866 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-config-volume\") pod \"collect-profiles-29331780-9vdpw\" (UID: \"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.342462 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-config-volume\") pod \"collect-profiles-29331780-9vdpw\" (UID: \"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.351028 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-secret-volume\") pod \"collect-profiles-29331780-9vdpw\" (UID: \"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.361396 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8vkn\" (UniqueName: \"kubernetes.io/projected/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-kube-api-access-d8vkn\") pod \"collect-profiles-29331780-9vdpw\" (UID: \"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.505093 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" Oct 08 07:00:00 crc kubenswrapper[4958]: I1008 07:00:00.847913 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw"] Oct 08 07:00:01 crc kubenswrapper[4958]: I1008 07:00:01.486156 4958 generic.go:334] "Generic (PLEG): container finished" podID="c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b" containerID="1c119abcc4768138ea47c3d79c0ea306ac6280fd5806151f0b863b74fbb8c2f7" exitCode=0 Oct 08 07:00:01 crc kubenswrapper[4958]: I1008 07:00:01.486274 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" event={"ID":"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b","Type":"ContainerDied","Data":"1c119abcc4768138ea47c3d79c0ea306ac6280fd5806151f0b863b74fbb8c2f7"} Oct 08 07:00:01 crc kubenswrapper[4958]: I1008 07:00:01.486631 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" 
event={"ID":"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b","Type":"ContainerStarted","Data":"83b44094a78f5c5637980e58c4c41eb4e6c1fa7c9d604e9a1e5d2c38e86585e7"} Oct 08 07:00:02 crc kubenswrapper[4958]: I1008 07:00:02.901509 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" Oct 08 07:00:02 crc kubenswrapper[4958]: I1008 07:00:02.984140 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-config-volume\") pod \"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b\" (UID: \"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b\") " Oct 08 07:00:02 crc kubenswrapper[4958]: I1008 07:00:02.984215 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8vkn\" (UniqueName: \"kubernetes.io/projected/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-kube-api-access-d8vkn\") pod \"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b\" (UID: \"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b\") " Oct 08 07:00:02 crc kubenswrapper[4958]: I1008 07:00:02.984321 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-secret-volume\") pod \"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b\" (UID: \"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b\") " Oct 08 07:00:02 crc kubenswrapper[4958]: I1008 07:00:02.984990 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-config-volume" (OuterVolumeSpecName: "config-volume") pod "c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b" (UID: "c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:00:02 crc kubenswrapper[4958]: I1008 07:00:02.990505 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-kube-api-access-d8vkn" (OuterVolumeSpecName: "kube-api-access-d8vkn") pod "c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b" (UID: "c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b"). InnerVolumeSpecName "kube-api-access-d8vkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:00:02 crc kubenswrapper[4958]: I1008 07:00:02.990614 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b" (UID: "c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 07:00:03 crc kubenswrapper[4958]: I1008 07:00:03.086267 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 07:00:03 crc kubenswrapper[4958]: I1008 07:00:03.086298 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8vkn\" (UniqueName: \"kubernetes.io/projected/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-kube-api-access-d8vkn\") on node \"crc\" DevicePath \"\"" Oct 08 07:00:03 crc kubenswrapper[4958]: I1008 07:00:03.086311 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 07:00:03 crc kubenswrapper[4958]: I1008 07:00:03.509233 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" 
event={"ID":"c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b","Type":"ContainerDied","Data":"83b44094a78f5c5637980e58c4c41eb4e6c1fa7c9d604e9a1e5d2c38e86585e7"} Oct 08 07:00:03 crc kubenswrapper[4958]: I1008 07:00:03.509294 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83b44094a78f5c5637980e58c4c41eb4e6c1fa7c9d604e9a1e5d2c38e86585e7" Oct 08 07:00:03 crc kubenswrapper[4958]: I1008 07:00:03.509310 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw" Oct 08 07:00:35 crc kubenswrapper[4958]: I1008 07:00:35.303053 4958 scope.go:117] "RemoveContainer" containerID="a4d7a2c0b6159dc1881b855e89b4f08a8c4529f260286c802640dac3d986e08a" Oct 08 07:00:35 crc kubenswrapper[4958]: I1008 07:00:35.337804 4958 scope.go:117] "RemoveContainer" containerID="311e69da329a81b4b20d7020272df4e6a19f16a07d1eae5dbd73b8bf3a8f4450" Oct 08 07:00:35 crc kubenswrapper[4958]: I1008 07:00:35.375069 4958 scope.go:117] "RemoveContainer" containerID="85772ea9d58e1f9499c1692b5187586c59aae8867c70dc4043d9d818fc5d27c5" Oct 08 07:00:35 crc kubenswrapper[4958]: I1008 07:00:35.401874 4958 scope.go:117] "RemoveContainer" containerID="9fb3d91b7b2f5366677ef6e3ddcbd73638676fa5edbe579060028418b61d5a79" Oct 08 07:00:35 crc kubenswrapper[4958]: I1008 07:00:35.423656 4958 scope.go:117] "RemoveContainer" containerID="d1f6b1da427fff30761b599d2a730ce90f82550f72826fba714f3fa780f61642" Oct 08 07:00:35 crc kubenswrapper[4958]: I1008 07:00:35.452362 4958 scope.go:117] "RemoveContainer" containerID="3ad1bac1f4b3c727bc4312c1f9a444c3a8f983b58148ca3fc6e195b99513859a" Oct 08 07:00:35 crc kubenswrapper[4958]: I1008 07:00:35.477653 4958 scope.go:117] "RemoveContainer" containerID="f7d3a2e24e36cb843175ea076666289a75ce06294357e1d7f9f67d7974e2b8a2" Oct 08 07:00:35 crc kubenswrapper[4958]: I1008 07:00:35.501006 4958 scope.go:117] "RemoveContainer" 
containerID="b1552d7b8117bc2448cb0631d84ac39a24f319291ca1fbed35d8d9eb21a01721" Oct 08 07:00:35 crc kubenswrapper[4958]: I1008 07:00:35.557733 4958 scope.go:117] "RemoveContainer" containerID="12d6d9a9b735ad1f2113d0bdb61dd29cf346b3ee0134e7b01562c403624ea09d" Oct 08 07:00:35 crc kubenswrapper[4958]: I1008 07:00:35.583005 4958 scope.go:117] "RemoveContainer" containerID="0651019d24af5197e8686a21c7aeaa99d9c4dc914b6474540c6458ac12124456" Oct 08 07:00:35 crc kubenswrapper[4958]: I1008 07:00:35.626815 4958 scope.go:117] "RemoveContainer" containerID="34ccb660beacfb2c4633a83b17ca0ed0d4cda30750454a7887a809a2ba97e2e1" Oct 08 07:00:42 crc kubenswrapper[4958]: I1008 07:00:42.539268 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-798kf"] Oct 08 07:00:42 crc kubenswrapper[4958]: E1008 07:00:42.540665 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b" containerName="collect-profiles" Oct 08 07:00:42 crc kubenswrapper[4958]: I1008 07:00:42.540694 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b" containerName="collect-profiles" Oct 08 07:00:42 crc kubenswrapper[4958]: I1008 07:00:42.541028 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b" containerName="collect-profiles" Oct 08 07:00:42 crc kubenswrapper[4958]: I1008 07:00:42.542807 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:42 crc kubenswrapper[4958]: I1008 07:00:42.565027 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-798kf"] Oct 08 07:00:42 crc kubenswrapper[4958]: I1008 07:00:42.652601 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9flpf\" (UniqueName: \"kubernetes.io/projected/b370be30-7dfd-479b-980d-e9d86f693b64-kube-api-access-9flpf\") pod \"redhat-marketplace-798kf\" (UID: \"b370be30-7dfd-479b-980d-e9d86f693b64\") " pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:42 crc kubenswrapper[4958]: I1008 07:00:42.652997 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b370be30-7dfd-479b-980d-e9d86f693b64-catalog-content\") pod \"redhat-marketplace-798kf\" (UID: \"b370be30-7dfd-479b-980d-e9d86f693b64\") " pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:42 crc kubenswrapper[4958]: I1008 07:00:42.653166 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b370be30-7dfd-479b-980d-e9d86f693b64-utilities\") pod \"redhat-marketplace-798kf\" (UID: \"b370be30-7dfd-479b-980d-e9d86f693b64\") " pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:42 crc kubenswrapper[4958]: I1008 07:00:42.754326 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b370be30-7dfd-479b-980d-e9d86f693b64-catalog-content\") pod \"redhat-marketplace-798kf\" (UID: \"b370be30-7dfd-479b-980d-e9d86f693b64\") " pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:42 crc kubenswrapper[4958]: I1008 07:00:42.754428 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b370be30-7dfd-479b-980d-e9d86f693b64-utilities\") pod \"redhat-marketplace-798kf\" (UID: \"b370be30-7dfd-479b-980d-e9d86f693b64\") " pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:42 crc kubenswrapper[4958]: I1008 07:00:42.754533 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9flpf\" (UniqueName: \"kubernetes.io/projected/b370be30-7dfd-479b-980d-e9d86f693b64-kube-api-access-9flpf\") pod \"redhat-marketplace-798kf\" (UID: \"b370be30-7dfd-479b-980d-e9d86f693b64\") " pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:42 crc kubenswrapper[4958]: I1008 07:00:42.755282 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b370be30-7dfd-479b-980d-e9d86f693b64-utilities\") pod \"redhat-marketplace-798kf\" (UID: \"b370be30-7dfd-479b-980d-e9d86f693b64\") " pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:42 crc kubenswrapper[4958]: I1008 07:00:42.755397 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b370be30-7dfd-479b-980d-e9d86f693b64-catalog-content\") pod \"redhat-marketplace-798kf\" (UID: \"b370be30-7dfd-479b-980d-e9d86f693b64\") " pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:42 crc kubenswrapper[4958]: I1008 07:00:42.785103 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9flpf\" (UniqueName: \"kubernetes.io/projected/b370be30-7dfd-479b-980d-e9d86f693b64-kube-api-access-9flpf\") pod \"redhat-marketplace-798kf\" (UID: \"b370be30-7dfd-479b-980d-e9d86f693b64\") " pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:42 crc kubenswrapper[4958]: I1008 07:00:42.872994 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:43 crc kubenswrapper[4958]: I1008 07:00:43.351825 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-798kf"] Oct 08 07:00:43 crc kubenswrapper[4958]: I1008 07:00:43.941324 4958 generic.go:334] "Generic (PLEG): container finished" podID="b370be30-7dfd-479b-980d-e9d86f693b64" containerID="5775058b38389e88eff16eb749f46f7adc6514e7d4a3f44e46b7124b8e615ddc" exitCode=0 Oct 08 07:00:43 crc kubenswrapper[4958]: I1008 07:00:43.941553 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-798kf" event={"ID":"b370be30-7dfd-479b-980d-e9d86f693b64","Type":"ContainerDied","Data":"5775058b38389e88eff16eb749f46f7adc6514e7d4a3f44e46b7124b8e615ddc"} Oct 08 07:00:43 crc kubenswrapper[4958]: I1008 07:00:43.941830 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-798kf" event={"ID":"b370be30-7dfd-479b-980d-e9d86f693b64","Type":"ContainerStarted","Data":"72f909290c1cf0206d56e6b3209566774d19ac154938d2c9f5cfc33ae29a2783"} Oct 08 07:00:44 crc kubenswrapper[4958]: I1008 07:00:44.952328 4958 generic.go:334] "Generic (PLEG): container finished" podID="b370be30-7dfd-479b-980d-e9d86f693b64" containerID="8f2bf3e41553845255e1ee4d84e1b9d695da209cc24382b811131ace0adeb62c" exitCode=0 Oct 08 07:00:44 crc kubenswrapper[4958]: I1008 07:00:44.952385 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-798kf" event={"ID":"b370be30-7dfd-479b-980d-e9d86f693b64","Type":"ContainerDied","Data":"8f2bf3e41553845255e1ee4d84e1b9d695da209cc24382b811131ace0adeb62c"} Oct 08 07:00:45 crc kubenswrapper[4958]: I1008 07:00:45.963818 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-798kf" 
event={"ID":"b370be30-7dfd-479b-980d-e9d86f693b64","Type":"ContainerStarted","Data":"53331bc63ed2fa6136534e3242760fff87d6c9f52216607d89d1f2085697c7f3"} Oct 08 07:00:45 crc kubenswrapper[4958]: I1008 07:00:45.989797 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-798kf" podStartSLOduration=2.49194406 podStartE2EDuration="3.989780664s" podCreationTimestamp="2025-10-08 07:00:42 +0000 UTC" firstStartedPulling="2025-10-08 07:00:43.943864347 +0000 UTC m=+1587.073556988" lastFinishedPulling="2025-10-08 07:00:45.441700951 +0000 UTC m=+1588.571393592" observedRunningTime="2025-10-08 07:00:45.98592348 +0000 UTC m=+1589.115616091" watchObservedRunningTime="2025-10-08 07:00:45.989780664 +0000 UTC m=+1589.119473275" Oct 08 07:00:52 crc kubenswrapper[4958]: I1008 07:00:52.873618 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:52 crc kubenswrapper[4958]: I1008 07:00:52.874427 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:52 crc kubenswrapper[4958]: I1008 07:00:52.954025 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:53 crc kubenswrapper[4958]: I1008 07:00:53.090907 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:53 crc kubenswrapper[4958]: I1008 07:00:53.202446 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-798kf"] Oct 08 07:00:55 crc kubenswrapper[4958]: I1008 07:00:55.048148 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-798kf" podUID="b370be30-7dfd-479b-980d-e9d86f693b64" containerName="registry-server" 
containerID="cri-o://53331bc63ed2fa6136534e3242760fff87d6c9f52216607d89d1f2085697c7f3" gracePeriod=2 Oct 08 07:00:55 crc kubenswrapper[4958]: I1008 07:00:55.558728 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:55 crc kubenswrapper[4958]: I1008 07:00:55.692205 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b370be30-7dfd-479b-980d-e9d86f693b64-utilities\") pod \"b370be30-7dfd-479b-980d-e9d86f693b64\" (UID: \"b370be30-7dfd-479b-980d-e9d86f693b64\") " Oct 08 07:00:55 crc kubenswrapper[4958]: I1008 07:00:55.692476 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9flpf\" (UniqueName: \"kubernetes.io/projected/b370be30-7dfd-479b-980d-e9d86f693b64-kube-api-access-9flpf\") pod \"b370be30-7dfd-479b-980d-e9d86f693b64\" (UID: \"b370be30-7dfd-479b-980d-e9d86f693b64\") " Oct 08 07:00:55 crc kubenswrapper[4958]: I1008 07:00:55.692534 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b370be30-7dfd-479b-980d-e9d86f693b64-catalog-content\") pod \"b370be30-7dfd-479b-980d-e9d86f693b64\" (UID: \"b370be30-7dfd-479b-980d-e9d86f693b64\") " Oct 08 07:00:55 crc kubenswrapper[4958]: I1008 07:00:55.694214 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b370be30-7dfd-479b-980d-e9d86f693b64-utilities" (OuterVolumeSpecName: "utilities") pod "b370be30-7dfd-479b-980d-e9d86f693b64" (UID: "b370be30-7dfd-479b-980d-e9d86f693b64"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:00:55 crc kubenswrapper[4958]: I1008 07:00:55.699478 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b370be30-7dfd-479b-980d-e9d86f693b64-kube-api-access-9flpf" (OuterVolumeSpecName: "kube-api-access-9flpf") pod "b370be30-7dfd-479b-980d-e9d86f693b64" (UID: "b370be30-7dfd-479b-980d-e9d86f693b64"). InnerVolumeSpecName "kube-api-access-9flpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:00:55 crc kubenswrapper[4958]: I1008 07:00:55.720379 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b370be30-7dfd-479b-980d-e9d86f693b64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b370be30-7dfd-479b-980d-e9d86f693b64" (UID: "b370be30-7dfd-479b-980d-e9d86f693b64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:00:55 crc kubenswrapper[4958]: I1008 07:00:55.794247 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b370be30-7dfd-479b-980d-e9d86f693b64-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:00:55 crc kubenswrapper[4958]: I1008 07:00:55.794280 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b370be30-7dfd-479b-980d-e9d86f693b64-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:00:55 crc kubenswrapper[4958]: I1008 07:00:55.794293 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9flpf\" (UniqueName: \"kubernetes.io/projected/b370be30-7dfd-479b-980d-e9d86f693b64-kube-api-access-9flpf\") on node \"crc\" DevicePath \"\"" Oct 08 07:00:56 crc kubenswrapper[4958]: I1008 07:00:56.063558 4958 generic.go:334] "Generic (PLEG): container finished" podID="b370be30-7dfd-479b-980d-e9d86f693b64" 
containerID="53331bc63ed2fa6136534e3242760fff87d6c9f52216607d89d1f2085697c7f3" exitCode=0 Oct 08 07:00:56 crc kubenswrapper[4958]: I1008 07:00:56.063657 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-798kf" event={"ID":"b370be30-7dfd-479b-980d-e9d86f693b64","Type":"ContainerDied","Data":"53331bc63ed2fa6136534e3242760fff87d6c9f52216607d89d1f2085697c7f3"} Oct 08 07:00:56 crc kubenswrapper[4958]: I1008 07:00:56.063781 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-798kf" event={"ID":"b370be30-7dfd-479b-980d-e9d86f693b64","Type":"ContainerDied","Data":"72f909290c1cf0206d56e6b3209566774d19ac154938d2c9f5cfc33ae29a2783"} Oct 08 07:00:56 crc kubenswrapper[4958]: I1008 07:00:56.063689 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-798kf" Oct 08 07:00:56 crc kubenswrapper[4958]: I1008 07:00:56.063848 4958 scope.go:117] "RemoveContainer" containerID="53331bc63ed2fa6136534e3242760fff87d6c9f52216607d89d1f2085697c7f3" Oct 08 07:00:56 crc kubenswrapper[4958]: I1008 07:00:56.104375 4958 scope.go:117] "RemoveContainer" containerID="8f2bf3e41553845255e1ee4d84e1b9d695da209cc24382b811131ace0adeb62c" Oct 08 07:00:56 crc kubenswrapper[4958]: I1008 07:00:56.129059 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-798kf"] Oct 08 07:00:56 crc kubenswrapper[4958]: I1008 07:00:56.137366 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-798kf"] Oct 08 07:00:56 crc kubenswrapper[4958]: I1008 07:00:56.141377 4958 scope.go:117] "RemoveContainer" containerID="5775058b38389e88eff16eb749f46f7adc6514e7d4a3f44e46b7124b8e615ddc" Oct 08 07:00:56 crc kubenswrapper[4958]: I1008 07:00:56.181821 4958 scope.go:117] "RemoveContainer" containerID="53331bc63ed2fa6136534e3242760fff87d6c9f52216607d89d1f2085697c7f3" Oct 08 
07:00:56 crc kubenswrapper[4958]: E1008 07:00:56.182596 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53331bc63ed2fa6136534e3242760fff87d6c9f52216607d89d1f2085697c7f3\": container with ID starting with 53331bc63ed2fa6136534e3242760fff87d6c9f52216607d89d1f2085697c7f3 not found: ID does not exist" containerID="53331bc63ed2fa6136534e3242760fff87d6c9f52216607d89d1f2085697c7f3" Oct 08 07:00:56 crc kubenswrapper[4958]: I1008 07:00:56.182645 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53331bc63ed2fa6136534e3242760fff87d6c9f52216607d89d1f2085697c7f3"} err="failed to get container status \"53331bc63ed2fa6136534e3242760fff87d6c9f52216607d89d1f2085697c7f3\": rpc error: code = NotFound desc = could not find container \"53331bc63ed2fa6136534e3242760fff87d6c9f52216607d89d1f2085697c7f3\": container with ID starting with 53331bc63ed2fa6136534e3242760fff87d6c9f52216607d89d1f2085697c7f3 not found: ID does not exist" Oct 08 07:00:56 crc kubenswrapper[4958]: I1008 07:00:56.182683 4958 scope.go:117] "RemoveContainer" containerID="8f2bf3e41553845255e1ee4d84e1b9d695da209cc24382b811131ace0adeb62c" Oct 08 07:00:56 crc kubenswrapper[4958]: E1008 07:00:56.183326 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f2bf3e41553845255e1ee4d84e1b9d695da209cc24382b811131ace0adeb62c\": container with ID starting with 8f2bf3e41553845255e1ee4d84e1b9d695da209cc24382b811131ace0adeb62c not found: ID does not exist" containerID="8f2bf3e41553845255e1ee4d84e1b9d695da209cc24382b811131ace0adeb62c" Oct 08 07:00:56 crc kubenswrapper[4958]: I1008 07:00:56.183390 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f2bf3e41553845255e1ee4d84e1b9d695da209cc24382b811131ace0adeb62c"} err="failed to get container status 
\"8f2bf3e41553845255e1ee4d84e1b9d695da209cc24382b811131ace0adeb62c\": rpc error: code = NotFound desc = could not find container \"8f2bf3e41553845255e1ee4d84e1b9d695da209cc24382b811131ace0adeb62c\": container with ID starting with 8f2bf3e41553845255e1ee4d84e1b9d695da209cc24382b811131ace0adeb62c not found: ID does not exist" Oct 08 07:00:56 crc kubenswrapper[4958]: I1008 07:00:56.183438 4958 scope.go:117] "RemoveContainer" containerID="5775058b38389e88eff16eb749f46f7adc6514e7d4a3f44e46b7124b8e615ddc" Oct 08 07:00:56 crc kubenswrapper[4958]: E1008 07:00:56.184063 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5775058b38389e88eff16eb749f46f7adc6514e7d4a3f44e46b7124b8e615ddc\": container with ID starting with 5775058b38389e88eff16eb749f46f7adc6514e7d4a3f44e46b7124b8e615ddc not found: ID does not exist" containerID="5775058b38389e88eff16eb749f46f7adc6514e7d4a3f44e46b7124b8e615ddc" Oct 08 07:00:56 crc kubenswrapper[4958]: I1008 07:00:56.184107 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5775058b38389e88eff16eb749f46f7adc6514e7d4a3f44e46b7124b8e615ddc"} err="failed to get container status \"5775058b38389e88eff16eb749f46f7adc6514e7d4a3f44e46b7124b8e615ddc\": rpc error: code = NotFound desc = could not find container \"5775058b38389e88eff16eb749f46f7adc6514e7d4a3f44e46b7124b8e615ddc\": container with ID starting with 5775058b38389e88eff16eb749f46f7adc6514e7d4a3f44e46b7124b8e615ddc not found: ID does not exist" Oct 08 07:00:57 crc kubenswrapper[4958]: I1008 07:00:57.607259 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b370be30-7dfd-479b-980d-e9d86f693b64" path="/var/lib/kubelet/pods/b370be30-7dfd-479b-980d-e9d86f693b64/volumes" Oct 08 07:01:35 crc kubenswrapper[4958]: I1008 07:01:35.788111 4958 scope.go:117] "RemoveContainer" containerID="6666c45dd7dc4715d5ebc25e32f2ae319ed58010b5920cf78661acf46c71541b" Oct 08 
07:01:35 crc kubenswrapper[4958]: I1008 07:01:35.849823 4958 scope.go:117] "RemoveContainer" containerID="d4bb3873a7309661f2e990983b00bab9efc693a2acac87ec5fd0ffda963953d2" Oct 08 07:01:35 crc kubenswrapper[4958]: I1008 07:01:35.875051 4958 scope.go:117] "RemoveContainer" containerID="8da3c05f804e73d89736d57dce94c00953045d8d97c67b3648b3714dada8f2d9" Oct 08 07:01:35 crc kubenswrapper[4958]: I1008 07:01:35.899257 4958 scope.go:117] "RemoveContainer" containerID="49b47029744e00f851d977897200407e71e2e9e3b92956a658ccc8cd97f69ac5" Oct 08 07:01:35 crc kubenswrapper[4958]: I1008 07:01:35.953189 4958 scope.go:117] "RemoveContainer" containerID="0e19d06b3e926c69b54e43c11dcfc7de9d7a5f8624c873e00af289abda321d5e" Oct 08 07:01:35 crc kubenswrapper[4958]: I1008 07:01:35.982481 4958 scope.go:117] "RemoveContainer" containerID="f90e9a7fc7a74d3104410e62dd42776120239fbd58e43d11e67fb278dc36af7a" Oct 08 07:01:36 crc kubenswrapper[4958]: I1008 07:01:36.013816 4958 scope.go:117] "RemoveContainer" containerID="f2b5c325174ca55a506e0030b61af4ddc1024765cfa6d2acbeb2242623a1ed64" Oct 08 07:01:36 crc kubenswrapper[4958]: I1008 07:01:36.845038 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:01:36 crc kubenswrapper[4958]: I1008 07:01:36.845445 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:02:06 crc kubenswrapper[4958]: I1008 07:02:06.844654 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:02:06 crc kubenswrapper[4958]: I1008 07:02:06.845280 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:02:36 crc kubenswrapper[4958]: I1008 07:02:36.139821 4958 scope.go:117] "RemoveContainer" containerID="a1181e703f68a3ef6c82fdcf0b6021d7e2a37cef956a81ffaba24f68e68ca6fe" Oct 08 07:02:36 crc kubenswrapper[4958]: I1008 07:02:36.844686 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:02:36 crc kubenswrapper[4958]: I1008 07:02:36.845050 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:02:36 crc kubenswrapper[4958]: I1008 07:02:36.845113 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 07:02:36 crc kubenswrapper[4958]: I1008 07:02:36.845916 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137"} 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 07:02:36 crc kubenswrapper[4958]: I1008 07:02:36.846049 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" gracePeriod=600 Oct 08 07:02:36 crc kubenswrapper[4958]: E1008 07:02:36.969991 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:02:37 crc kubenswrapper[4958]: I1008 07:02:37.146068 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" exitCode=0 Oct 08 07:02:37 crc kubenswrapper[4958]: I1008 07:02:37.146135 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137"} Oct 08 07:02:37 crc kubenswrapper[4958]: I1008 07:02:37.146184 4958 scope.go:117] "RemoveContainer" containerID="51ab079a19b3cda7fa6717a3a6e40663eab6249d65864efbeac7f7030917e815" Oct 08 07:02:37 crc kubenswrapper[4958]: I1008 07:02:37.146867 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 
08 07:02:37 crc kubenswrapper[4958]: E1008 07:02:37.147366 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:02:47 crc kubenswrapper[4958]: I1008 07:02:47.583405 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:02:47 crc kubenswrapper[4958]: E1008 07:02:47.584334 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:02:58 crc kubenswrapper[4958]: I1008 07:02:58.577361 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:02:58 crc kubenswrapper[4958]: E1008 07:02:58.580238 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:03:12 crc kubenswrapper[4958]: I1008 07:03:12.576544 4958 scope.go:117] "RemoveContainer" 
containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:03:12 crc kubenswrapper[4958]: E1008 07:03:12.577756 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:03:25 crc kubenswrapper[4958]: I1008 07:03:25.576340 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:03:25 crc kubenswrapper[4958]: E1008 07:03:25.577063 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:03:26 crc kubenswrapper[4958]: I1008 07:03:26.969122 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6lnmb"] Oct 08 07:03:26 crc kubenswrapper[4958]: E1008 07:03:26.969679 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b370be30-7dfd-479b-980d-e9d86f693b64" containerName="extract-utilities" Oct 08 07:03:26 crc kubenswrapper[4958]: I1008 07:03:26.969707 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b370be30-7dfd-479b-980d-e9d86f693b64" containerName="extract-utilities" Oct 08 07:03:26 crc kubenswrapper[4958]: E1008 07:03:26.969737 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b370be30-7dfd-479b-980d-e9d86f693b64" 
containerName="extract-content" Oct 08 07:03:26 crc kubenswrapper[4958]: I1008 07:03:26.969753 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b370be30-7dfd-479b-980d-e9d86f693b64" containerName="extract-content" Oct 08 07:03:26 crc kubenswrapper[4958]: E1008 07:03:26.969792 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b370be30-7dfd-479b-980d-e9d86f693b64" containerName="registry-server" Oct 08 07:03:26 crc kubenswrapper[4958]: I1008 07:03:26.969807 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b370be30-7dfd-479b-980d-e9d86f693b64" containerName="registry-server" Oct 08 07:03:26 crc kubenswrapper[4958]: I1008 07:03:26.970197 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b370be30-7dfd-479b-980d-e9d86f693b64" containerName="registry-server" Oct 08 07:03:26 crc kubenswrapper[4958]: I1008 07:03:26.972537 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:27 crc kubenswrapper[4958]: I1008 07:03:26.986326 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lnmb"] Oct 08 07:03:27 crc kubenswrapper[4958]: I1008 07:03:27.120153 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-catalog-content\") pod \"redhat-operators-6lnmb\" (UID: \"65ce3a98-bd91-4c28-a36d-7e29c0f8257b\") " pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:27 crc kubenswrapper[4958]: I1008 07:03:27.120227 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq55j\" (UniqueName: \"kubernetes.io/projected/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-kube-api-access-sq55j\") pod \"redhat-operators-6lnmb\" (UID: \"65ce3a98-bd91-4c28-a36d-7e29c0f8257b\") " 
pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:27 crc kubenswrapper[4958]: I1008 07:03:27.120284 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-utilities\") pod \"redhat-operators-6lnmb\" (UID: \"65ce3a98-bd91-4c28-a36d-7e29c0f8257b\") " pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:27 crc kubenswrapper[4958]: I1008 07:03:27.221835 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-catalog-content\") pod \"redhat-operators-6lnmb\" (UID: \"65ce3a98-bd91-4c28-a36d-7e29c0f8257b\") " pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:27 crc kubenswrapper[4958]: I1008 07:03:27.221922 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq55j\" (UniqueName: \"kubernetes.io/projected/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-kube-api-access-sq55j\") pod \"redhat-operators-6lnmb\" (UID: \"65ce3a98-bd91-4c28-a36d-7e29c0f8257b\") " pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:27 crc kubenswrapper[4958]: I1008 07:03:27.222055 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-utilities\") pod \"redhat-operators-6lnmb\" (UID: \"65ce3a98-bd91-4c28-a36d-7e29c0f8257b\") " pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:27 crc kubenswrapper[4958]: I1008 07:03:27.222732 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-catalog-content\") pod \"redhat-operators-6lnmb\" (UID: \"65ce3a98-bd91-4c28-a36d-7e29c0f8257b\") " 
pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:27 crc kubenswrapper[4958]: I1008 07:03:27.222757 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-utilities\") pod \"redhat-operators-6lnmb\" (UID: \"65ce3a98-bd91-4c28-a36d-7e29c0f8257b\") " pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:27 crc kubenswrapper[4958]: I1008 07:03:27.251838 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq55j\" (UniqueName: \"kubernetes.io/projected/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-kube-api-access-sq55j\") pod \"redhat-operators-6lnmb\" (UID: \"65ce3a98-bd91-4c28-a36d-7e29c0f8257b\") " pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:27 crc kubenswrapper[4958]: I1008 07:03:27.340792 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:27 crc kubenswrapper[4958]: I1008 07:03:27.605610 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lnmb"] Oct 08 07:03:28 crc kubenswrapper[4958]: I1008 07:03:28.627165 4958 generic.go:334] "Generic (PLEG): container finished" podID="65ce3a98-bd91-4c28-a36d-7e29c0f8257b" containerID="26ae3f1698d65e7f0c6b8012283e0de748835d07cb538d421d4dfcf6d406eb49" exitCode=0 Oct 08 07:03:28 crc kubenswrapper[4958]: I1008 07:03:28.627228 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lnmb" event={"ID":"65ce3a98-bd91-4c28-a36d-7e29c0f8257b","Type":"ContainerDied","Data":"26ae3f1698d65e7f0c6b8012283e0de748835d07cb538d421d4dfcf6d406eb49"} Oct 08 07:03:28 crc kubenswrapper[4958]: I1008 07:03:28.627679 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lnmb" 
event={"ID":"65ce3a98-bd91-4c28-a36d-7e29c0f8257b","Type":"ContainerStarted","Data":"8dc2c60de0ad49156dd24fc635ef0025cb90c42790e9ebb839a5fbe89062db07"} Oct 08 07:03:28 crc kubenswrapper[4958]: I1008 07:03:28.630439 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 07:03:30 crc kubenswrapper[4958]: I1008 07:03:30.649242 4958 generic.go:334] "Generic (PLEG): container finished" podID="65ce3a98-bd91-4c28-a36d-7e29c0f8257b" containerID="f01d4696cbb437aab1e51850585e6852fb57b70200578fb2d072c1bc4c781a34" exitCode=0 Oct 08 07:03:30 crc kubenswrapper[4958]: I1008 07:03:30.649293 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lnmb" event={"ID":"65ce3a98-bd91-4c28-a36d-7e29c0f8257b","Type":"ContainerDied","Data":"f01d4696cbb437aab1e51850585e6852fb57b70200578fb2d072c1bc4c781a34"} Oct 08 07:03:31 crc kubenswrapper[4958]: I1008 07:03:31.662547 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lnmb" event={"ID":"65ce3a98-bd91-4c28-a36d-7e29c0f8257b","Type":"ContainerStarted","Data":"f8f60531c4a8958b324ff5354128b7a999577f9cb5c9c8b165d8483a02b7c535"} Oct 08 07:03:31 crc kubenswrapper[4958]: I1008 07:03:31.695879 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6lnmb" podStartSLOduration=3.235608301 podStartE2EDuration="5.695840557s" podCreationTimestamp="2025-10-08 07:03:26 +0000 UTC" firstStartedPulling="2025-10-08 07:03:28.629184085 +0000 UTC m=+1751.758876726" lastFinishedPulling="2025-10-08 07:03:31.089416341 +0000 UTC m=+1754.219108982" observedRunningTime="2025-10-08 07:03:31.692851667 +0000 UTC m=+1754.822544308" watchObservedRunningTime="2025-10-08 07:03:31.695840557 +0000 UTC m=+1754.825533208" Oct 08 07:03:36 crc kubenswrapper[4958]: I1008 07:03:36.240391 4958 scope.go:117] "RemoveContainer" 
containerID="b5e769123b59e453ebbad05152ae521b4e2607deec0125453ab24a8653ed80fe" Oct 08 07:03:36 crc kubenswrapper[4958]: I1008 07:03:36.271619 4958 scope.go:117] "RemoveContainer" containerID="ec884386fa6ec452d56d5f07cfa33c6ee7fb4c007780b98c88a134f47c7007f6" Oct 08 07:03:37 crc kubenswrapper[4958]: I1008 07:03:37.342923 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:37 crc kubenswrapper[4958]: I1008 07:03:37.343063 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:38 crc kubenswrapper[4958]: I1008 07:03:38.418373 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6lnmb" podUID="65ce3a98-bd91-4c28-a36d-7e29c0f8257b" containerName="registry-server" probeResult="failure" output=< Oct 08 07:03:38 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 07:03:38 crc kubenswrapper[4958]: > Oct 08 07:03:38 crc kubenswrapper[4958]: I1008 07:03:38.577498 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:03:38 crc kubenswrapper[4958]: E1008 07:03:38.578064 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:03:47 crc kubenswrapper[4958]: I1008 07:03:47.409412 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:47 crc kubenswrapper[4958]: I1008 07:03:47.470077 4958 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:47 crc kubenswrapper[4958]: I1008 07:03:47.664616 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lnmb"] Oct 08 07:03:48 crc kubenswrapper[4958]: I1008 07:03:48.837346 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6lnmb" podUID="65ce3a98-bd91-4c28-a36d-7e29c0f8257b" containerName="registry-server" containerID="cri-o://f8f60531c4a8958b324ff5354128b7a999577f9cb5c9c8b165d8483a02b7c535" gracePeriod=2 Oct 08 07:03:49 crc kubenswrapper[4958]: E1008 07:03:49.084857 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65ce3a98_bd91_4c28_a36d_7e29c0f8257b.slice/crio-conmon-f8f60531c4a8958b324ff5354128b7a999577f9cb5c9c8b165d8483a02b7c535.scope\": RecentStats: unable to find data in memory cache]" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.324444 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.410089 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq55j\" (UniqueName: \"kubernetes.io/projected/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-kube-api-access-sq55j\") pod \"65ce3a98-bd91-4c28-a36d-7e29c0f8257b\" (UID: \"65ce3a98-bd91-4c28-a36d-7e29c0f8257b\") " Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.410194 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-catalog-content\") pod \"65ce3a98-bd91-4c28-a36d-7e29c0f8257b\" (UID: \"65ce3a98-bd91-4c28-a36d-7e29c0f8257b\") " Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.410236 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-utilities\") pod \"65ce3a98-bd91-4c28-a36d-7e29c0f8257b\" (UID: \"65ce3a98-bd91-4c28-a36d-7e29c0f8257b\") " Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.411162 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-utilities" (OuterVolumeSpecName: "utilities") pod "65ce3a98-bd91-4c28-a36d-7e29c0f8257b" (UID: "65ce3a98-bd91-4c28-a36d-7e29c0f8257b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.426188 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-kube-api-access-sq55j" (OuterVolumeSpecName: "kube-api-access-sq55j") pod "65ce3a98-bd91-4c28-a36d-7e29c0f8257b" (UID: "65ce3a98-bd91-4c28-a36d-7e29c0f8257b"). InnerVolumeSpecName "kube-api-access-sq55j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.512499 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq55j\" (UniqueName: \"kubernetes.io/projected/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-kube-api-access-sq55j\") on node \"crc\" DevicePath \"\"" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.512555 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.517860 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65ce3a98-bd91-4c28-a36d-7e29c0f8257b" (UID: "65ce3a98-bd91-4c28-a36d-7e29c0f8257b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.615112 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ce3a98-bd91-4c28-a36d-7e29c0f8257b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.848736 4958 generic.go:334] "Generic (PLEG): container finished" podID="65ce3a98-bd91-4c28-a36d-7e29c0f8257b" containerID="f8f60531c4a8958b324ff5354128b7a999577f9cb5c9c8b165d8483a02b7c535" exitCode=0 Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.848788 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lnmb" event={"ID":"65ce3a98-bd91-4c28-a36d-7e29c0f8257b","Type":"ContainerDied","Data":"f8f60531c4a8958b324ff5354128b7a999577f9cb5c9c8b165d8483a02b7c535"} Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.848818 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-6lnmb" event={"ID":"65ce3a98-bd91-4c28-a36d-7e29c0f8257b","Type":"ContainerDied","Data":"8dc2c60de0ad49156dd24fc635ef0025cb90c42790e9ebb839a5fbe89062db07"} Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.848841 4958 scope.go:117] "RemoveContainer" containerID="f8f60531c4a8958b324ff5354128b7a999577f9cb5c9c8b165d8483a02b7c535" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.848984 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lnmb" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.874545 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lnmb"] Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.879522 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6lnmb"] Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.888545 4958 scope.go:117] "RemoveContainer" containerID="f01d4696cbb437aab1e51850585e6852fb57b70200578fb2d072c1bc4c781a34" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.914237 4958 scope.go:117] "RemoveContainer" containerID="26ae3f1698d65e7f0c6b8012283e0de748835d07cb538d421d4dfcf6d406eb49" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.954341 4958 scope.go:117] "RemoveContainer" containerID="f8f60531c4a8958b324ff5354128b7a999577f9cb5c9c8b165d8483a02b7c535" Oct 08 07:03:49 crc kubenswrapper[4958]: E1008 07:03:49.955062 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8f60531c4a8958b324ff5354128b7a999577f9cb5c9c8b165d8483a02b7c535\": container with ID starting with f8f60531c4a8958b324ff5354128b7a999577f9cb5c9c8b165d8483a02b7c535 not found: ID does not exist" containerID="f8f60531c4a8958b324ff5354128b7a999577f9cb5c9c8b165d8483a02b7c535" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.955237 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f60531c4a8958b324ff5354128b7a999577f9cb5c9c8b165d8483a02b7c535"} err="failed to get container status \"f8f60531c4a8958b324ff5354128b7a999577f9cb5c9c8b165d8483a02b7c535\": rpc error: code = NotFound desc = could not find container \"f8f60531c4a8958b324ff5354128b7a999577f9cb5c9c8b165d8483a02b7c535\": container with ID starting with f8f60531c4a8958b324ff5354128b7a999577f9cb5c9c8b165d8483a02b7c535 not found: ID does not exist" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.955377 4958 scope.go:117] "RemoveContainer" containerID="f01d4696cbb437aab1e51850585e6852fb57b70200578fb2d072c1bc4c781a34" Oct 08 07:03:49 crc kubenswrapper[4958]: E1008 07:03:49.956057 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f01d4696cbb437aab1e51850585e6852fb57b70200578fb2d072c1bc4c781a34\": container with ID starting with f01d4696cbb437aab1e51850585e6852fb57b70200578fb2d072c1bc4c781a34 not found: ID does not exist" containerID="f01d4696cbb437aab1e51850585e6852fb57b70200578fb2d072c1bc4c781a34" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.956198 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f01d4696cbb437aab1e51850585e6852fb57b70200578fb2d072c1bc4c781a34"} err="failed to get container status \"f01d4696cbb437aab1e51850585e6852fb57b70200578fb2d072c1bc4c781a34\": rpc error: code = NotFound desc = could not find container \"f01d4696cbb437aab1e51850585e6852fb57b70200578fb2d072c1bc4c781a34\": container with ID starting with f01d4696cbb437aab1e51850585e6852fb57b70200578fb2d072c1bc4c781a34 not found: ID does not exist" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.956330 4958 scope.go:117] "RemoveContainer" containerID="26ae3f1698d65e7f0c6b8012283e0de748835d07cb538d421d4dfcf6d406eb49" Oct 08 07:03:49 crc kubenswrapper[4958]: E1008 
07:03:49.956889 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26ae3f1698d65e7f0c6b8012283e0de748835d07cb538d421d4dfcf6d406eb49\": container with ID starting with 26ae3f1698d65e7f0c6b8012283e0de748835d07cb538d421d4dfcf6d406eb49 not found: ID does not exist" containerID="26ae3f1698d65e7f0c6b8012283e0de748835d07cb538d421d4dfcf6d406eb49" Oct 08 07:03:49 crc kubenswrapper[4958]: I1008 07:03:49.957339 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26ae3f1698d65e7f0c6b8012283e0de748835d07cb538d421d4dfcf6d406eb49"} err="failed to get container status \"26ae3f1698d65e7f0c6b8012283e0de748835d07cb538d421d4dfcf6d406eb49\": rpc error: code = NotFound desc = could not find container \"26ae3f1698d65e7f0c6b8012283e0de748835d07cb538d421d4dfcf6d406eb49\": container with ID starting with 26ae3f1698d65e7f0c6b8012283e0de748835d07cb538d421d4dfcf6d406eb49 not found: ID does not exist" Oct 08 07:03:51 crc kubenswrapper[4958]: I1008 07:03:51.592545 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65ce3a98-bd91-4c28-a36d-7e29c0f8257b" path="/var/lib/kubelet/pods/65ce3a98-bd91-4c28-a36d-7e29c0f8257b/volumes" Oct 08 07:03:52 crc kubenswrapper[4958]: I1008 07:03:52.576853 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:03:52 crc kubenswrapper[4958]: E1008 07:03:52.577729 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:04:05 crc kubenswrapper[4958]: I1008 07:04:05.577605 
4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:04:05 crc kubenswrapper[4958]: E1008 07:04:05.578634 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:04:17 crc kubenswrapper[4958]: I1008 07:04:17.585568 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:04:17 crc kubenswrapper[4958]: E1008 07:04:17.586402 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:04:28 crc kubenswrapper[4958]: I1008 07:04:28.577258 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:04:28 crc kubenswrapper[4958]: E1008 07:04:28.578354 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:04:43 crc kubenswrapper[4958]: I1008 
07:04:43.576892 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:04:43 crc kubenswrapper[4958]: E1008 07:04:43.578094 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:04:56 crc kubenswrapper[4958]: I1008 07:04:56.577460 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:04:56 crc kubenswrapper[4958]: E1008 07:04:56.578505 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:05:09 crc kubenswrapper[4958]: I1008 07:05:09.576621 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:05:09 crc kubenswrapper[4958]: E1008 07:05:09.577735 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:05:23 crc 
kubenswrapper[4958]: I1008 07:05:23.580447 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:05:23 crc kubenswrapper[4958]: E1008 07:05:23.581523 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:05:34 crc kubenswrapper[4958]: I1008 07:05:34.577056 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:05:34 crc kubenswrapper[4958]: E1008 07:05:34.578176 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:05:45 crc kubenswrapper[4958]: I1008 07:05:45.577574 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:05:45 crc kubenswrapper[4958]: E1008 07:05:45.578855 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 
08 07:05:56 crc kubenswrapper[4958]: I1008 07:05:56.578237 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:05:56 crc kubenswrapper[4958]: E1008 07:05:56.579348 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:06:11 crc kubenswrapper[4958]: I1008 07:06:11.577166 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:06:11 crc kubenswrapper[4958]: E1008 07:06:11.578202 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:06:26 crc kubenswrapper[4958]: I1008 07:06:26.577190 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:06:26 crc kubenswrapper[4958]: E1008 07:06:26.578222 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" 
podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:06:41 crc kubenswrapper[4958]: I1008 07:06:41.576763 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:06:41 crc kubenswrapper[4958]: E1008 07:06:41.577614 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:06:52 crc kubenswrapper[4958]: I1008 07:06:52.577752 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:06:52 crc kubenswrapper[4958]: E1008 07:06:52.579005 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:07:07 crc kubenswrapper[4958]: I1008 07:07:07.587033 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:07:07 crc kubenswrapper[4958]: E1008 07:07:07.588012 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:07:18 crc kubenswrapper[4958]: I1008 07:07:18.577260 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:07:18 crc kubenswrapper[4958]: E1008 07:07:18.578518 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:07:32 crc kubenswrapper[4958]: I1008 07:07:32.577371 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:07:32 crc kubenswrapper[4958]: E1008 07:07:32.578427 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:07:44 crc kubenswrapper[4958]: I1008 07:07:44.576902 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:07:45 crc kubenswrapper[4958]: I1008 07:07:45.142276 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"96102950f6f3d1202963c0671887253ad2deb50e70824ed9c9463cb6977fafab"} Oct 08 07:08:58 crc 
kubenswrapper[4958]: I1008 07:08:58.991273 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6p48c"] Oct 08 07:08:58 crc kubenswrapper[4958]: E1008 07:08:58.992715 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ce3a98-bd91-4c28-a36d-7e29c0f8257b" containerName="registry-server" Oct 08 07:08:58 crc kubenswrapper[4958]: I1008 07:08:58.992749 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ce3a98-bd91-4c28-a36d-7e29c0f8257b" containerName="registry-server" Oct 08 07:08:58 crc kubenswrapper[4958]: E1008 07:08:58.992783 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ce3a98-bd91-4c28-a36d-7e29c0f8257b" containerName="extract-utilities" Oct 08 07:08:58 crc kubenswrapper[4958]: I1008 07:08:58.992800 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ce3a98-bd91-4c28-a36d-7e29c0f8257b" containerName="extract-utilities" Oct 08 07:08:58 crc kubenswrapper[4958]: E1008 07:08:58.992840 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ce3a98-bd91-4c28-a36d-7e29c0f8257b" containerName="extract-content" Oct 08 07:08:58 crc kubenswrapper[4958]: I1008 07:08:58.992857 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ce3a98-bd91-4c28-a36d-7e29c0f8257b" containerName="extract-content" Oct 08 07:08:58 crc kubenswrapper[4958]: I1008 07:08:58.993247 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ce3a98-bd91-4c28-a36d-7e29c0f8257b" containerName="registry-server" Oct 08 07:08:58 crc kubenswrapper[4958]: I1008 07:08:58.997241 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.020381 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6p48c"] Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.184629 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzp86\" (UniqueName: \"kubernetes.io/projected/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-kube-api-access-nzp86\") pod \"certified-operators-6p48c\" (UID: \"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca\") " pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.184707 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-catalog-content\") pod \"certified-operators-6p48c\" (UID: \"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca\") " pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.184768 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-utilities\") pod \"certified-operators-6p48c\" (UID: \"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca\") " pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.286023 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-utilities\") pod \"certified-operators-6p48c\" (UID: \"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca\") " pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.286202 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nzp86\" (UniqueName: \"kubernetes.io/projected/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-kube-api-access-nzp86\") pod \"certified-operators-6p48c\" (UID: \"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca\") " pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.286270 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-catalog-content\") pod \"certified-operators-6p48c\" (UID: \"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca\") " pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.286788 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-catalog-content\") pod \"certified-operators-6p48c\" (UID: \"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca\") " pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.286894 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-utilities\") pod \"certified-operators-6p48c\" (UID: \"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca\") " pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.313264 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzp86\" (UniqueName: \"kubernetes.io/projected/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-kube-api-access-nzp86\") pod \"certified-operators-6p48c\" (UID: \"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca\") " pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.326602 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.660458 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6p48c"] Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.886682 4958 generic.go:334] "Generic (PLEG): container finished" podID="dde2f1bf-d12f-4a1d-89a7-adb215ed3eca" containerID="301359c7378b4d1bb851efb98e797bbd569586f523fc935ed62ca2f89ca76d7c" exitCode=0 Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.886789 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6p48c" event={"ID":"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca","Type":"ContainerDied","Data":"301359c7378b4d1bb851efb98e797bbd569586f523fc935ed62ca2f89ca76d7c"} Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.887117 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6p48c" event={"ID":"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca","Type":"ContainerStarted","Data":"7da6e3c7cc7fb8632f4191eeb96a88a845f4f58423c107f7efbb43c76a9ddeb1"} Oct 08 07:08:59 crc kubenswrapper[4958]: I1008 07:08:59.888203 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 07:09:00 crc kubenswrapper[4958]: I1008 07:09:00.900206 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6p48c" event={"ID":"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca","Type":"ContainerStarted","Data":"3dd184055db8c3f70395dd427297740837665977b4bb1e57f0477cae29f4d0f1"} Oct 08 07:09:01 crc kubenswrapper[4958]: I1008 07:09:01.912642 4958 generic.go:334] "Generic (PLEG): container finished" podID="dde2f1bf-d12f-4a1d-89a7-adb215ed3eca" containerID="3dd184055db8c3f70395dd427297740837665977b4bb1e57f0477cae29f4d0f1" exitCode=0 Oct 08 07:09:01 crc kubenswrapper[4958]: I1008 07:09:01.912710 4958 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-6p48c" event={"ID":"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca","Type":"ContainerDied","Data":"3dd184055db8c3f70395dd427297740837665977b4bb1e57f0477cae29f4d0f1"} Oct 08 07:09:02 crc kubenswrapper[4958]: I1008 07:09:02.928530 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6p48c" event={"ID":"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca","Type":"ContainerStarted","Data":"32cd850410402ebf76d1d74f89cbbd74abfc47b45d6eb795382229667870330a"} Oct 08 07:09:02 crc kubenswrapper[4958]: I1008 07:09:02.955865 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6p48c" podStartSLOduration=2.423368715 podStartE2EDuration="4.955843402s" podCreationTimestamp="2025-10-08 07:08:58 +0000 UTC" firstStartedPulling="2025-10-08 07:08:59.887934066 +0000 UTC m=+2083.017626667" lastFinishedPulling="2025-10-08 07:09:02.420408753 +0000 UTC m=+2085.550101354" observedRunningTime="2025-10-08 07:09:02.95353758 +0000 UTC m=+2086.083230211" watchObservedRunningTime="2025-10-08 07:09:02.955843402 +0000 UTC m=+2086.085536013" Oct 08 07:09:08 crc kubenswrapper[4958]: I1008 07:09:08.464737 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sghnn"] Oct 08 07:09:08 crc kubenswrapper[4958]: I1008 07:09:08.467736 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:08 crc kubenswrapper[4958]: I1008 07:09:08.495535 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sghnn"] Oct 08 07:09:08 crc kubenswrapper[4958]: I1008 07:09:08.640912 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b73bea-2feb-4545-8386-e35295255383-catalog-content\") pod \"community-operators-sghnn\" (UID: \"f9b73bea-2feb-4545-8386-e35295255383\") " pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:08 crc kubenswrapper[4958]: I1008 07:09:08.641160 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5rjc\" (UniqueName: \"kubernetes.io/projected/f9b73bea-2feb-4545-8386-e35295255383-kube-api-access-z5rjc\") pod \"community-operators-sghnn\" (UID: \"f9b73bea-2feb-4545-8386-e35295255383\") " pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:08 crc kubenswrapper[4958]: I1008 07:09:08.641261 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b73bea-2feb-4545-8386-e35295255383-utilities\") pod \"community-operators-sghnn\" (UID: \"f9b73bea-2feb-4545-8386-e35295255383\") " pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:08 crc kubenswrapper[4958]: I1008 07:09:08.742390 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b73bea-2feb-4545-8386-e35295255383-catalog-content\") pod \"community-operators-sghnn\" (UID: \"f9b73bea-2feb-4545-8386-e35295255383\") " pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:08 crc kubenswrapper[4958]: I1008 07:09:08.742902 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b73bea-2feb-4545-8386-e35295255383-catalog-content\") pod \"community-operators-sghnn\" (UID: \"f9b73bea-2feb-4545-8386-e35295255383\") " pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:08 crc kubenswrapper[4958]: I1008 07:09:08.744217 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5rjc\" (UniqueName: \"kubernetes.io/projected/f9b73bea-2feb-4545-8386-e35295255383-kube-api-access-z5rjc\") pod \"community-operators-sghnn\" (UID: \"f9b73bea-2feb-4545-8386-e35295255383\") " pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:08 crc kubenswrapper[4958]: I1008 07:09:08.744291 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b73bea-2feb-4545-8386-e35295255383-utilities\") pod \"community-operators-sghnn\" (UID: \"f9b73bea-2feb-4545-8386-e35295255383\") " pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:08 crc kubenswrapper[4958]: I1008 07:09:08.744752 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b73bea-2feb-4545-8386-e35295255383-utilities\") pod \"community-operators-sghnn\" (UID: \"f9b73bea-2feb-4545-8386-e35295255383\") " pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:08 crc kubenswrapper[4958]: I1008 07:09:08.783418 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5rjc\" (UniqueName: \"kubernetes.io/projected/f9b73bea-2feb-4545-8386-e35295255383-kube-api-access-z5rjc\") pod \"community-operators-sghnn\" (UID: \"f9b73bea-2feb-4545-8386-e35295255383\") " pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:08 crc kubenswrapper[4958]: I1008 07:09:08.792132 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:09 crc kubenswrapper[4958]: I1008 07:09:09.327518 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:09:09 crc kubenswrapper[4958]: I1008 07:09:09.329196 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:09:09 crc kubenswrapper[4958]: I1008 07:09:09.393226 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sghnn"] Oct 08 07:09:09 crc kubenswrapper[4958]: W1008 07:09:09.401883 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9b73bea_2feb_4545_8386_e35295255383.slice/crio-5350bac1558bb2e98cc33c5e875227f82d54f611f33e0c8c19d0f8c6fbe81149 WatchSource:0}: Error finding container 5350bac1558bb2e98cc33c5e875227f82d54f611f33e0c8c19d0f8c6fbe81149: Status 404 returned error can't find the container with id 5350bac1558bb2e98cc33c5e875227f82d54f611f33e0c8c19d0f8c6fbe81149 Oct 08 07:09:09 crc kubenswrapper[4958]: I1008 07:09:09.414667 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:09:09 crc kubenswrapper[4958]: I1008 07:09:09.993762 4958 generic.go:334] "Generic (PLEG): container finished" podID="f9b73bea-2feb-4545-8386-e35295255383" containerID="683bb326017585515fd94d8f98847e788439cd7168adf4ec863938f4c717aa7d" exitCode=0 Oct 08 07:09:09 crc kubenswrapper[4958]: I1008 07:09:09.993896 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sghnn" event={"ID":"f9b73bea-2feb-4545-8386-e35295255383","Type":"ContainerDied","Data":"683bb326017585515fd94d8f98847e788439cd7168adf4ec863938f4c717aa7d"} Oct 08 07:09:09 crc kubenswrapper[4958]: I1008 
07:09:09.994397 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sghnn" event={"ID":"f9b73bea-2feb-4545-8386-e35295255383","Type":"ContainerStarted","Data":"5350bac1558bb2e98cc33c5e875227f82d54f611f33e0c8c19d0f8c6fbe81149"} Oct 08 07:09:10 crc kubenswrapper[4958]: I1008 07:09:10.084034 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:09:11 crc kubenswrapper[4958]: I1008 07:09:11.006490 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sghnn" event={"ID":"f9b73bea-2feb-4545-8386-e35295255383","Type":"ContainerStarted","Data":"cbab40b29490304194b5009eb8d003051cab826007443ca0e10b2f659c116842"} Oct 08 07:09:11 crc kubenswrapper[4958]: I1008 07:09:11.837825 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6p48c"] Oct 08 07:09:12 crc kubenswrapper[4958]: I1008 07:09:12.021325 4958 generic.go:334] "Generic (PLEG): container finished" podID="f9b73bea-2feb-4545-8386-e35295255383" containerID="cbab40b29490304194b5009eb8d003051cab826007443ca0e10b2f659c116842" exitCode=0 Oct 08 07:09:12 crc kubenswrapper[4958]: I1008 07:09:12.021917 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6p48c" podUID="dde2f1bf-d12f-4a1d-89a7-adb215ed3eca" containerName="registry-server" containerID="cri-o://32cd850410402ebf76d1d74f89cbbd74abfc47b45d6eb795382229667870330a" gracePeriod=2 Oct 08 07:09:12 crc kubenswrapper[4958]: I1008 07:09:12.024941 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sghnn" event={"ID":"f9b73bea-2feb-4545-8386-e35295255383","Type":"ContainerDied","Data":"cbab40b29490304194b5009eb8d003051cab826007443ca0e10b2f659c116842"} Oct 08 07:09:12 crc kubenswrapper[4958]: I1008 07:09:12.392814 4958 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:09:12 crc kubenswrapper[4958]: I1008 07:09:12.501906 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzp86\" (UniqueName: \"kubernetes.io/projected/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-kube-api-access-nzp86\") pod \"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca\" (UID: \"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca\") " Oct 08 07:09:12 crc kubenswrapper[4958]: I1008 07:09:12.502044 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-utilities\") pod \"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca\" (UID: \"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca\") " Oct 08 07:09:12 crc kubenswrapper[4958]: I1008 07:09:12.502129 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-catalog-content\") pod \"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca\" (UID: \"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca\") " Oct 08 07:09:12 crc kubenswrapper[4958]: I1008 07:09:12.503614 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-utilities" (OuterVolumeSpecName: "utilities") pod "dde2f1bf-d12f-4a1d-89a7-adb215ed3eca" (UID: "dde2f1bf-d12f-4a1d-89a7-adb215ed3eca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:09:12 crc kubenswrapper[4958]: I1008 07:09:12.514605 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-kube-api-access-nzp86" (OuterVolumeSpecName: "kube-api-access-nzp86") pod "dde2f1bf-d12f-4a1d-89a7-adb215ed3eca" (UID: "dde2f1bf-d12f-4a1d-89a7-adb215ed3eca"). 
InnerVolumeSpecName "kube-api-access-nzp86". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:09:12 crc kubenswrapper[4958]: I1008 07:09:12.576892 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dde2f1bf-d12f-4a1d-89a7-adb215ed3eca" (UID: "dde2f1bf-d12f-4a1d-89a7-adb215ed3eca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:09:12 crc kubenswrapper[4958]: I1008 07:09:12.606616 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:09:12 crc kubenswrapper[4958]: I1008 07:09:12.606667 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:09:12 crc kubenswrapper[4958]: I1008 07:09:12.606691 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzp86\" (UniqueName: \"kubernetes.io/projected/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca-kube-api-access-nzp86\") on node \"crc\" DevicePath \"\"" Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.035659 4958 generic.go:334] "Generic (PLEG): container finished" podID="dde2f1bf-d12f-4a1d-89a7-adb215ed3eca" containerID="32cd850410402ebf76d1d74f89cbbd74abfc47b45d6eb795382229667870330a" exitCode=0 Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.035775 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6p48c" event={"ID":"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca","Type":"ContainerDied","Data":"32cd850410402ebf76d1d74f89cbbd74abfc47b45d6eb795382229667870330a"} Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.035819 
4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6p48c" event={"ID":"dde2f1bf-d12f-4a1d-89a7-adb215ed3eca","Type":"ContainerDied","Data":"7da6e3c7cc7fb8632f4191eeb96a88a845f4f58423c107f7efbb43c76a9ddeb1"} Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.035846 4958 scope.go:117] "RemoveContainer" containerID="32cd850410402ebf76d1d74f89cbbd74abfc47b45d6eb795382229667870330a" Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.035813 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6p48c" Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.040402 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sghnn" event={"ID":"f9b73bea-2feb-4545-8386-e35295255383","Type":"ContainerStarted","Data":"ea1fd33bf49fd3f08db1bf705d39927fab29a448d7c23704d48fe0125be0daf5"} Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.061315 4958 scope.go:117] "RemoveContainer" containerID="3dd184055db8c3f70395dd427297740837665977b4bb1e57f0477cae29f4d0f1" Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.075147 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sghnn" podStartSLOduration=2.599820794 podStartE2EDuration="5.075120118s" podCreationTimestamp="2025-10-08 07:09:08 +0000 UTC" firstStartedPulling="2025-10-08 07:09:09.997764606 +0000 UTC m=+2093.127457237" lastFinishedPulling="2025-10-08 07:09:12.47306395 +0000 UTC m=+2095.602756561" observedRunningTime="2025-10-08 07:09:13.070608946 +0000 UTC m=+2096.200301557" watchObservedRunningTime="2025-10-08 07:09:13.075120118 +0000 UTC m=+2096.204812749" Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.094347 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6p48c"] Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 
07:09:13.101922 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6p48c"] Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.113562 4958 scope.go:117] "RemoveContainer" containerID="301359c7378b4d1bb851efb98e797bbd569586f523fc935ed62ca2f89ca76d7c" Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.141115 4958 scope.go:117] "RemoveContainer" containerID="32cd850410402ebf76d1d74f89cbbd74abfc47b45d6eb795382229667870330a" Oct 08 07:09:13 crc kubenswrapper[4958]: E1008 07:09:13.141634 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32cd850410402ebf76d1d74f89cbbd74abfc47b45d6eb795382229667870330a\": container with ID starting with 32cd850410402ebf76d1d74f89cbbd74abfc47b45d6eb795382229667870330a not found: ID does not exist" containerID="32cd850410402ebf76d1d74f89cbbd74abfc47b45d6eb795382229667870330a" Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.141666 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32cd850410402ebf76d1d74f89cbbd74abfc47b45d6eb795382229667870330a"} err="failed to get container status \"32cd850410402ebf76d1d74f89cbbd74abfc47b45d6eb795382229667870330a\": rpc error: code = NotFound desc = could not find container \"32cd850410402ebf76d1d74f89cbbd74abfc47b45d6eb795382229667870330a\": container with ID starting with 32cd850410402ebf76d1d74f89cbbd74abfc47b45d6eb795382229667870330a not found: ID does not exist" Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.141688 4958 scope.go:117] "RemoveContainer" containerID="3dd184055db8c3f70395dd427297740837665977b4bb1e57f0477cae29f4d0f1" Oct 08 07:09:13 crc kubenswrapper[4958]: E1008 07:09:13.142207 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd184055db8c3f70395dd427297740837665977b4bb1e57f0477cae29f4d0f1\": container with ID 
starting with 3dd184055db8c3f70395dd427297740837665977b4bb1e57f0477cae29f4d0f1 not found: ID does not exist" containerID="3dd184055db8c3f70395dd427297740837665977b4bb1e57f0477cae29f4d0f1" Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.142228 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd184055db8c3f70395dd427297740837665977b4bb1e57f0477cae29f4d0f1"} err="failed to get container status \"3dd184055db8c3f70395dd427297740837665977b4bb1e57f0477cae29f4d0f1\": rpc error: code = NotFound desc = could not find container \"3dd184055db8c3f70395dd427297740837665977b4bb1e57f0477cae29f4d0f1\": container with ID starting with 3dd184055db8c3f70395dd427297740837665977b4bb1e57f0477cae29f4d0f1 not found: ID does not exist" Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.142239 4958 scope.go:117] "RemoveContainer" containerID="301359c7378b4d1bb851efb98e797bbd569586f523fc935ed62ca2f89ca76d7c" Oct 08 07:09:13 crc kubenswrapper[4958]: E1008 07:09:13.142592 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"301359c7378b4d1bb851efb98e797bbd569586f523fc935ed62ca2f89ca76d7c\": container with ID starting with 301359c7378b4d1bb851efb98e797bbd569586f523fc935ed62ca2f89ca76d7c not found: ID does not exist" containerID="301359c7378b4d1bb851efb98e797bbd569586f523fc935ed62ca2f89ca76d7c" Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.142612 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"301359c7378b4d1bb851efb98e797bbd569586f523fc935ed62ca2f89ca76d7c"} err="failed to get container status \"301359c7378b4d1bb851efb98e797bbd569586f523fc935ed62ca2f89ca76d7c\": rpc error: code = NotFound desc = could not find container \"301359c7378b4d1bb851efb98e797bbd569586f523fc935ed62ca2f89ca76d7c\": container with ID starting with 301359c7378b4d1bb851efb98e797bbd569586f523fc935ed62ca2f89ca76d7c not found: 
ID does not exist" Oct 08 07:09:13 crc kubenswrapper[4958]: I1008 07:09:13.588813 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dde2f1bf-d12f-4a1d-89a7-adb215ed3eca" path="/var/lib/kubelet/pods/dde2f1bf-d12f-4a1d-89a7-adb215ed3eca/volumes" Oct 08 07:09:18 crc kubenswrapper[4958]: I1008 07:09:18.792773 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:18 crc kubenswrapper[4958]: I1008 07:09:18.793658 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:18 crc kubenswrapper[4958]: I1008 07:09:18.865057 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:19 crc kubenswrapper[4958]: I1008 07:09:19.149358 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:21 crc kubenswrapper[4958]: I1008 07:09:21.920728 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sghnn"] Oct 08 07:09:22 crc kubenswrapper[4958]: I1008 07:09:22.120081 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sghnn" podUID="f9b73bea-2feb-4545-8386-e35295255383" containerName="registry-server" containerID="cri-o://ea1fd33bf49fd3f08db1bf705d39927fab29a448d7c23704d48fe0125be0daf5" gracePeriod=2 Oct 08 07:09:22 crc kubenswrapper[4958]: I1008 07:09:22.634870 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:22 crc kubenswrapper[4958]: I1008 07:09:22.771406 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b73bea-2feb-4545-8386-e35295255383-catalog-content\") pod \"f9b73bea-2feb-4545-8386-e35295255383\" (UID: \"f9b73bea-2feb-4545-8386-e35295255383\") " Oct 08 07:09:22 crc kubenswrapper[4958]: I1008 07:09:22.771468 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rjc\" (UniqueName: \"kubernetes.io/projected/f9b73bea-2feb-4545-8386-e35295255383-kube-api-access-z5rjc\") pod \"f9b73bea-2feb-4545-8386-e35295255383\" (UID: \"f9b73bea-2feb-4545-8386-e35295255383\") " Oct 08 07:09:22 crc kubenswrapper[4958]: I1008 07:09:22.771681 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b73bea-2feb-4545-8386-e35295255383-utilities\") pod \"f9b73bea-2feb-4545-8386-e35295255383\" (UID: \"f9b73bea-2feb-4545-8386-e35295255383\") " Oct 08 07:09:22 crc kubenswrapper[4958]: I1008 07:09:22.772691 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b73bea-2feb-4545-8386-e35295255383-utilities" (OuterVolumeSpecName: "utilities") pod "f9b73bea-2feb-4545-8386-e35295255383" (UID: "f9b73bea-2feb-4545-8386-e35295255383"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:09:22 crc kubenswrapper[4958]: I1008 07:09:22.777833 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b73bea-2feb-4545-8386-e35295255383-kube-api-access-z5rjc" (OuterVolumeSpecName: "kube-api-access-z5rjc") pod "f9b73bea-2feb-4545-8386-e35295255383" (UID: "f9b73bea-2feb-4545-8386-e35295255383"). InnerVolumeSpecName "kube-api-access-z5rjc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:09:22 crc kubenswrapper[4958]: I1008 07:09:22.829402 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b73bea-2feb-4545-8386-e35295255383-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9b73bea-2feb-4545-8386-e35295255383" (UID: "f9b73bea-2feb-4545-8386-e35295255383"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:09:22 crc kubenswrapper[4958]: I1008 07:09:22.873902 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b73bea-2feb-4545-8386-e35295255383-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:09:22 crc kubenswrapper[4958]: I1008 07:09:22.874023 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b73bea-2feb-4545-8386-e35295255383-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:09:22 crc kubenswrapper[4958]: I1008 07:09:22.874048 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5rjc\" (UniqueName: \"kubernetes.io/projected/f9b73bea-2feb-4545-8386-e35295255383-kube-api-access-z5rjc\") on node \"crc\" DevicePath \"\"" Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 07:09:23.132493 4958 generic.go:334] "Generic (PLEG): container finished" podID="f9b73bea-2feb-4545-8386-e35295255383" containerID="ea1fd33bf49fd3f08db1bf705d39927fab29a448d7c23704d48fe0125be0daf5" exitCode=0 Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 07:09:23.132560 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sghnn" event={"ID":"f9b73bea-2feb-4545-8386-e35295255383","Type":"ContainerDied","Data":"ea1fd33bf49fd3f08db1bf705d39927fab29a448d7c23704d48fe0125be0daf5"} Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 07:09:23.132613 4958 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-sghnn" event={"ID":"f9b73bea-2feb-4545-8386-e35295255383","Type":"ContainerDied","Data":"5350bac1558bb2e98cc33c5e875227f82d54f611f33e0c8c19d0f8c6fbe81149"} Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 07:09:23.132612 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sghnn" Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 07:09:23.132638 4958 scope.go:117] "RemoveContainer" containerID="ea1fd33bf49fd3f08db1bf705d39927fab29a448d7c23704d48fe0125be0daf5" Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 07:09:23.166893 4958 scope.go:117] "RemoveContainer" containerID="cbab40b29490304194b5009eb8d003051cab826007443ca0e10b2f659c116842" Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 07:09:23.190014 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sghnn"] Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 07:09:23.201805 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sghnn"] Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 07:09:23.212557 4958 scope.go:117] "RemoveContainer" containerID="683bb326017585515fd94d8f98847e788439cd7168adf4ec863938f4c717aa7d" Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 07:09:23.236409 4958 scope.go:117] "RemoveContainer" containerID="ea1fd33bf49fd3f08db1bf705d39927fab29a448d7c23704d48fe0125be0daf5" Oct 08 07:09:23 crc kubenswrapper[4958]: E1008 07:09:23.244107 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea1fd33bf49fd3f08db1bf705d39927fab29a448d7c23704d48fe0125be0daf5\": container with ID starting with ea1fd33bf49fd3f08db1bf705d39927fab29a448d7c23704d48fe0125be0daf5 not found: ID does not exist" containerID="ea1fd33bf49fd3f08db1bf705d39927fab29a448d7c23704d48fe0125be0daf5" Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 
07:09:23.244161 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1fd33bf49fd3f08db1bf705d39927fab29a448d7c23704d48fe0125be0daf5"} err="failed to get container status \"ea1fd33bf49fd3f08db1bf705d39927fab29a448d7c23704d48fe0125be0daf5\": rpc error: code = NotFound desc = could not find container \"ea1fd33bf49fd3f08db1bf705d39927fab29a448d7c23704d48fe0125be0daf5\": container with ID starting with ea1fd33bf49fd3f08db1bf705d39927fab29a448d7c23704d48fe0125be0daf5 not found: ID does not exist" Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 07:09:23.244214 4958 scope.go:117] "RemoveContainer" containerID="cbab40b29490304194b5009eb8d003051cab826007443ca0e10b2f659c116842" Oct 08 07:09:23 crc kubenswrapper[4958]: E1008 07:09:23.244730 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbab40b29490304194b5009eb8d003051cab826007443ca0e10b2f659c116842\": container with ID starting with cbab40b29490304194b5009eb8d003051cab826007443ca0e10b2f659c116842 not found: ID does not exist" containerID="cbab40b29490304194b5009eb8d003051cab826007443ca0e10b2f659c116842" Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 07:09:23.244792 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbab40b29490304194b5009eb8d003051cab826007443ca0e10b2f659c116842"} err="failed to get container status \"cbab40b29490304194b5009eb8d003051cab826007443ca0e10b2f659c116842\": rpc error: code = NotFound desc = could not find container \"cbab40b29490304194b5009eb8d003051cab826007443ca0e10b2f659c116842\": container with ID starting with cbab40b29490304194b5009eb8d003051cab826007443ca0e10b2f659c116842 not found: ID does not exist" Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 07:09:23.244837 4958 scope.go:117] "RemoveContainer" containerID="683bb326017585515fd94d8f98847e788439cd7168adf4ec863938f4c717aa7d" Oct 08 07:09:23 crc 
kubenswrapper[4958]: E1008 07:09:23.245197 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"683bb326017585515fd94d8f98847e788439cd7168adf4ec863938f4c717aa7d\": container with ID starting with 683bb326017585515fd94d8f98847e788439cd7168adf4ec863938f4c717aa7d not found: ID does not exist" containerID="683bb326017585515fd94d8f98847e788439cd7168adf4ec863938f4c717aa7d" Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 07:09:23.245235 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683bb326017585515fd94d8f98847e788439cd7168adf4ec863938f4c717aa7d"} err="failed to get container status \"683bb326017585515fd94d8f98847e788439cd7168adf4ec863938f4c717aa7d\": rpc error: code = NotFound desc = could not find container \"683bb326017585515fd94d8f98847e788439cd7168adf4ec863938f4c717aa7d\": container with ID starting with 683bb326017585515fd94d8f98847e788439cd7168adf4ec863938f4c717aa7d not found: ID does not exist" Oct 08 07:09:23 crc kubenswrapper[4958]: I1008 07:09:23.592608 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b73bea-2feb-4545-8386-e35295255383" path="/var/lib/kubelet/pods/f9b73bea-2feb-4545-8386-e35295255383/volumes" Oct 08 07:10:06 crc kubenswrapper[4958]: I1008 07:10:06.845290 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:10:06 crc kubenswrapper[4958]: I1008 07:10:06.846071 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Oct 08 07:10:36 crc kubenswrapper[4958]: I1008 07:10:36.845381 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:10:36 crc kubenswrapper[4958]: I1008 07:10:36.847278 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:11:06 crc kubenswrapper[4958]: I1008 07:11:06.845593 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:11:06 crc kubenswrapper[4958]: I1008 07:11:06.846312 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:11:06 crc kubenswrapper[4958]: I1008 07:11:06.846380 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 07:11:06 crc kubenswrapper[4958]: I1008 07:11:06.847382 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"96102950f6f3d1202963c0671887253ad2deb50e70824ed9c9463cb6977fafab"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 07:11:06 crc kubenswrapper[4958]: I1008 07:11:06.847484 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://96102950f6f3d1202963c0671887253ad2deb50e70824ed9c9463cb6977fafab" gracePeriod=600 Oct 08 07:11:07 crc kubenswrapper[4958]: I1008 07:11:07.156580 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="96102950f6f3d1202963c0671887253ad2deb50e70824ed9c9463cb6977fafab" exitCode=0 Oct 08 07:11:07 crc kubenswrapper[4958]: I1008 07:11:07.156664 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"96102950f6f3d1202963c0671887253ad2deb50e70824ed9c9463cb6977fafab"} Oct 08 07:11:07 crc kubenswrapper[4958]: I1008 07:11:07.156998 4958 scope.go:117] "RemoveContainer" containerID="7537a9fab5fb35ce6183e8b24d8a934950dde23f7162b2cc570f190538727137" Oct 08 07:11:08 crc kubenswrapper[4958]: I1008 07:11:08.170730 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575"} Oct 08 07:11:50 crc kubenswrapper[4958]: I1008 07:11:50.808454 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fww4w"] Oct 08 07:11:50 crc kubenswrapper[4958]: E1008 07:11:50.810561 
4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde2f1bf-d12f-4a1d-89a7-adb215ed3eca" containerName="extract-utilities" Oct 08 07:11:50 crc kubenswrapper[4958]: I1008 07:11:50.810582 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde2f1bf-d12f-4a1d-89a7-adb215ed3eca" containerName="extract-utilities" Oct 08 07:11:50 crc kubenswrapper[4958]: E1008 07:11:50.810598 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b73bea-2feb-4545-8386-e35295255383" containerName="extract-content" Oct 08 07:11:50 crc kubenswrapper[4958]: I1008 07:11:50.810608 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b73bea-2feb-4545-8386-e35295255383" containerName="extract-content" Oct 08 07:11:50 crc kubenswrapper[4958]: E1008 07:11:50.810628 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde2f1bf-d12f-4a1d-89a7-adb215ed3eca" containerName="registry-server" Oct 08 07:11:50 crc kubenswrapper[4958]: I1008 07:11:50.810639 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde2f1bf-d12f-4a1d-89a7-adb215ed3eca" containerName="registry-server" Oct 08 07:11:50 crc kubenswrapper[4958]: E1008 07:11:50.810658 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b73bea-2feb-4545-8386-e35295255383" containerName="registry-server" Oct 08 07:11:50 crc kubenswrapper[4958]: I1008 07:11:50.810667 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b73bea-2feb-4545-8386-e35295255383" containerName="registry-server" Oct 08 07:11:50 crc kubenswrapper[4958]: E1008 07:11:50.810697 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b73bea-2feb-4545-8386-e35295255383" containerName="extract-utilities" Oct 08 07:11:50 crc kubenswrapper[4958]: I1008 07:11:50.810707 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b73bea-2feb-4545-8386-e35295255383" containerName="extract-utilities" Oct 08 07:11:50 crc kubenswrapper[4958]: E1008 07:11:50.810727 4958 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde2f1bf-d12f-4a1d-89a7-adb215ed3eca" containerName="extract-content" Oct 08 07:11:50 crc kubenswrapper[4958]: I1008 07:11:50.810738 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde2f1bf-d12f-4a1d-89a7-adb215ed3eca" containerName="extract-content" Oct 08 07:11:50 crc kubenswrapper[4958]: I1008 07:11:50.810970 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="dde2f1bf-d12f-4a1d-89a7-adb215ed3eca" containerName="registry-server" Oct 08 07:11:50 crc kubenswrapper[4958]: I1008 07:11:50.811004 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b73bea-2feb-4545-8386-e35295255383" containerName="registry-server" Oct 08 07:11:50 crc kubenswrapper[4958]: I1008 07:11:50.812625 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:11:50 crc kubenswrapper[4958]: I1008 07:11:50.823503 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fww4w"] Oct 08 07:11:50 crc kubenswrapper[4958]: I1008 07:11:50.930886 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/568e4933-e768-49d9-857e-9e51e13f9819-utilities\") pod \"redhat-marketplace-fww4w\" (UID: \"568e4933-e768-49d9-857e-9e51e13f9819\") " pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:11:50 crc kubenswrapper[4958]: I1008 07:11:50.931439 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/568e4933-e768-49d9-857e-9e51e13f9819-catalog-content\") pod \"redhat-marketplace-fww4w\" (UID: \"568e4933-e768-49d9-857e-9e51e13f9819\") " pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:11:50 crc kubenswrapper[4958]: I1008 07:11:50.931495 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx6cx\" (UniqueName: \"kubernetes.io/projected/568e4933-e768-49d9-857e-9e51e13f9819-kube-api-access-lx6cx\") pod \"redhat-marketplace-fww4w\" (UID: \"568e4933-e768-49d9-857e-9e51e13f9819\") " pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:11:51 crc kubenswrapper[4958]: I1008 07:11:51.033442 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/568e4933-e768-49d9-857e-9e51e13f9819-utilities\") pod \"redhat-marketplace-fww4w\" (UID: \"568e4933-e768-49d9-857e-9e51e13f9819\") " pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:11:51 crc kubenswrapper[4958]: I1008 07:11:51.033634 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/568e4933-e768-49d9-857e-9e51e13f9819-catalog-content\") pod \"redhat-marketplace-fww4w\" (UID: \"568e4933-e768-49d9-857e-9e51e13f9819\") " pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:11:51 crc kubenswrapper[4958]: I1008 07:11:51.033689 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx6cx\" (UniqueName: \"kubernetes.io/projected/568e4933-e768-49d9-857e-9e51e13f9819-kube-api-access-lx6cx\") pod \"redhat-marketplace-fww4w\" (UID: \"568e4933-e768-49d9-857e-9e51e13f9819\") " pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:11:51 crc kubenswrapper[4958]: I1008 07:11:51.034107 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/568e4933-e768-49d9-857e-9e51e13f9819-utilities\") pod \"redhat-marketplace-fww4w\" (UID: \"568e4933-e768-49d9-857e-9e51e13f9819\") " pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:11:51 crc kubenswrapper[4958]: I1008 07:11:51.034157 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/568e4933-e768-49d9-857e-9e51e13f9819-catalog-content\") pod \"redhat-marketplace-fww4w\" (UID: \"568e4933-e768-49d9-857e-9e51e13f9819\") " pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:11:51 crc kubenswrapper[4958]: I1008 07:11:51.065991 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx6cx\" (UniqueName: \"kubernetes.io/projected/568e4933-e768-49d9-857e-9e51e13f9819-kube-api-access-lx6cx\") pod \"redhat-marketplace-fww4w\" (UID: \"568e4933-e768-49d9-857e-9e51e13f9819\") " pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:11:51 crc kubenswrapper[4958]: I1008 07:11:51.146348 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:11:51 crc kubenswrapper[4958]: I1008 07:11:51.589382 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fww4w"] Oct 08 07:11:52 crc kubenswrapper[4958]: I1008 07:11:52.599244 4958 generic.go:334] "Generic (PLEG): container finished" podID="568e4933-e768-49d9-857e-9e51e13f9819" containerID="161f4a5fbd75fcfbb45183eac1880bc4bbb0fd4a165a1988d874e90fb2276f3f" exitCode=0 Oct 08 07:11:52 crc kubenswrapper[4958]: I1008 07:11:52.599313 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fww4w" event={"ID":"568e4933-e768-49d9-857e-9e51e13f9819","Type":"ContainerDied","Data":"161f4a5fbd75fcfbb45183eac1880bc4bbb0fd4a165a1988d874e90fb2276f3f"} Oct 08 07:11:52 crc kubenswrapper[4958]: I1008 07:11:52.599373 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fww4w" event={"ID":"568e4933-e768-49d9-857e-9e51e13f9819","Type":"ContainerStarted","Data":"65d177e5ab3a901d1c24965c547ab501f47cbce85bb7a01c69ad7aa2fa3f5ce0"} Oct 08 07:11:54 
crc kubenswrapper[4958]: I1008 07:11:54.620483 4958 generic.go:334] "Generic (PLEG): container finished" podID="568e4933-e768-49d9-857e-9e51e13f9819" containerID="5d40f0d32834d84694e5249ed8285dfa9f420a17e3930b5496473816e5f35bea" exitCode=0 Oct 08 07:11:54 crc kubenswrapper[4958]: I1008 07:11:54.620635 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fww4w" event={"ID":"568e4933-e768-49d9-857e-9e51e13f9819","Type":"ContainerDied","Data":"5d40f0d32834d84694e5249ed8285dfa9f420a17e3930b5496473816e5f35bea"} Oct 08 07:11:55 crc kubenswrapper[4958]: I1008 07:11:55.636124 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fww4w" event={"ID":"568e4933-e768-49d9-857e-9e51e13f9819","Type":"ContainerStarted","Data":"caf370660b19a5a1f4528a41dfdd6168da91ffea02295c69680dc76895291dd3"} Oct 08 07:11:55 crc kubenswrapper[4958]: I1008 07:11:55.667016 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fww4w" podStartSLOduration=3.008448604 podStartE2EDuration="5.666989012s" podCreationTimestamp="2025-10-08 07:11:50 +0000 UTC" firstStartedPulling="2025-10-08 07:11:52.602719154 +0000 UTC m=+2255.732411795" lastFinishedPulling="2025-10-08 07:11:55.261259562 +0000 UTC m=+2258.390952203" observedRunningTime="2025-10-08 07:11:55.660349243 +0000 UTC m=+2258.790041854" watchObservedRunningTime="2025-10-08 07:11:55.666989012 +0000 UTC m=+2258.796681653" Oct 08 07:12:01 crc kubenswrapper[4958]: I1008 07:12:01.147606 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:12:01 crc kubenswrapper[4958]: I1008 07:12:01.148434 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:12:01 crc kubenswrapper[4958]: I1008 07:12:01.237199 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:12:01 crc kubenswrapper[4958]: I1008 07:12:01.777376 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:12:01 crc kubenswrapper[4958]: I1008 07:12:01.852651 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fww4w"] Oct 08 07:12:03 crc kubenswrapper[4958]: I1008 07:12:03.717114 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fww4w" podUID="568e4933-e768-49d9-857e-9e51e13f9819" containerName="registry-server" containerID="cri-o://caf370660b19a5a1f4528a41dfdd6168da91ffea02295c69680dc76895291dd3" gracePeriod=2 Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.199754 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.264152 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/568e4933-e768-49d9-857e-9e51e13f9819-utilities\") pod \"568e4933-e768-49d9-857e-9e51e13f9819\" (UID: \"568e4933-e768-49d9-857e-9e51e13f9819\") " Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.264245 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/568e4933-e768-49d9-857e-9e51e13f9819-catalog-content\") pod \"568e4933-e768-49d9-857e-9e51e13f9819\" (UID: \"568e4933-e768-49d9-857e-9e51e13f9819\") " Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.264286 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx6cx\" (UniqueName: \"kubernetes.io/projected/568e4933-e768-49d9-857e-9e51e13f9819-kube-api-access-lx6cx\") pod 
\"568e4933-e768-49d9-857e-9e51e13f9819\" (UID: \"568e4933-e768-49d9-857e-9e51e13f9819\") " Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.266043 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568e4933-e768-49d9-857e-9e51e13f9819-utilities" (OuterVolumeSpecName: "utilities") pod "568e4933-e768-49d9-857e-9e51e13f9819" (UID: "568e4933-e768-49d9-857e-9e51e13f9819"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.270939 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568e4933-e768-49d9-857e-9e51e13f9819-kube-api-access-lx6cx" (OuterVolumeSpecName: "kube-api-access-lx6cx") pod "568e4933-e768-49d9-857e-9e51e13f9819" (UID: "568e4933-e768-49d9-857e-9e51e13f9819"). InnerVolumeSpecName "kube-api-access-lx6cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.286838 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568e4933-e768-49d9-857e-9e51e13f9819-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "568e4933-e768-49d9-857e-9e51e13f9819" (UID: "568e4933-e768-49d9-857e-9e51e13f9819"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.365985 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/568e4933-e768-49d9-857e-9e51e13f9819-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.366024 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/568e4933-e768-49d9-857e-9e51e13f9819-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.366039 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx6cx\" (UniqueName: \"kubernetes.io/projected/568e4933-e768-49d9-857e-9e51e13f9819-kube-api-access-lx6cx\") on node \"crc\" DevicePath \"\"" Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.729578 4958 generic.go:334] "Generic (PLEG): container finished" podID="568e4933-e768-49d9-857e-9e51e13f9819" containerID="caf370660b19a5a1f4528a41dfdd6168da91ffea02295c69680dc76895291dd3" exitCode=0 Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.729638 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fww4w" event={"ID":"568e4933-e768-49d9-857e-9e51e13f9819","Type":"ContainerDied","Data":"caf370660b19a5a1f4528a41dfdd6168da91ffea02295c69680dc76895291dd3"} Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.729681 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fww4w" event={"ID":"568e4933-e768-49d9-857e-9e51e13f9819","Type":"ContainerDied","Data":"65d177e5ab3a901d1c24965c547ab501f47cbce85bb7a01c69ad7aa2fa3f5ce0"} Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.729713 4958 scope.go:117] "RemoveContainer" containerID="caf370660b19a5a1f4528a41dfdd6168da91ffea02295c69680dc76895291dd3" Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 
07:12:04.729643 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fww4w" Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.760596 4958 scope.go:117] "RemoveContainer" containerID="5d40f0d32834d84694e5249ed8285dfa9f420a17e3930b5496473816e5f35bea" Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.780781 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fww4w"] Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.787909 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fww4w"] Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.809462 4958 scope.go:117] "RemoveContainer" containerID="161f4a5fbd75fcfbb45183eac1880bc4bbb0fd4a165a1988d874e90fb2276f3f" Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.846012 4958 scope.go:117] "RemoveContainer" containerID="caf370660b19a5a1f4528a41dfdd6168da91ffea02295c69680dc76895291dd3" Oct 08 07:12:04 crc kubenswrapper[4958]: E1008 07:12:04.846871 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf370660b19a5a1f4528a41dfdd6168da91ffea02295c69680dc76895291dd3\": container with ID starting with caf370660b19a5a1f4528a41dfdd6168da91ffea02295c69680dc76895291dd3 not found: ID does not exist" containerID="caf370660b19a5a1f4528a41dfdd6168da91ffea02295c69680dc76895291dd3" Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.846935 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf370660b19a5a1f4528a41dfdd6168da91ffea02295c69680dc76895291dd3"} err="failed to get container status \"caf370660b19a5a1f4528a41dfdd6168da91ffea02295c69680dc76895291dd3\": rpc error: code = NotFound desc = could not find container \"caf370660b19a5a1f4528a41dfdd6168da91ffea02295c69680dc76895291dd3\": container with ID starting with 
caf370660b19a5a1f4528a41dfdd6168da91ffea02295c69680dc76895291dd3 not found: ID does not exist" Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.847296 4958 scope.go:117] "RemoveContainer" containerID="5d40f0d32834d84694e5249ed8285dfa9f420a17e3930b5496473816e5f35bea" Oct 08 07:12:04 crc kubenswrapper[4958]: E1008 07:12:04.847867 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d40f0d32834d84694e5249ed8285dfa9f420a17e3930b5496473816e5f35bea\": container with ID starting with 5d40f0d32834d84694e5249ed8285dfa9f420a17e3930b5496473816e5f35bea not found: ID does not exist" containerID="5d40f0d32834d84694e5249ed8285dfa9f420a17e3930b5496473816e5f35bea" Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.847936 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d40f0d32834d84694e5249ed8285dfa9f420a17e3930b5496473816e5f35bea"} err="failed to get container status \"5d40f0d32834d84694e5249ed8285dfa9f420a17e3930b5496473816e5f35bea\": rpc error: code = NotFound desc = could not find container \"5d40f0d32834d84694e5249ed8285dfa9f420a17e3930b5496473816e5f35bea\": container with ID starting with 5d40f0d32834d84694e5249ed8285dfa9f420a17e3930b5496473816e5f35bea not found: ID does not exist" Oct 08 07:12:04 crc kubenswrapper[4958]: I1008 07:12:04.848033 4958 scope.go:117] "RemoveContainer" containerID="161f4a5fbd75fcfbb45183eac1880bc4bbb0fd4a165a1988d874e90fb2276f3f" Oct 08 07:12:04 crc kubenswrapper[4958]: E1008 07:12:04.848493 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"161f4a5fbd75fcfbb45183eac1880bc4bbb0fd4a165a1988d874e90fb2276f3f\": container with ID starting with 161f4a5fbd75fcfbb45183eac1880bc4bbb0fd4a165a1988d874e90fb2276f3f not found: ID does not exist" containerID="161f4a5fbd75fcfbb45183eac1880bc4bbb0fd4a165a1988d874e90fb2276f3f" Oct 08 07:12:04 crc 
kubenswrapper[4958]: I1008 07:12:04.848535 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"161f4a5fbd75fcfbb45183eac1880bc4bbb0fd4a165a1988d874e90fb2276f3f"} err="failed to get container status \"161f4a5fbd75fcfbb45183eac1880bc4bbb0fd4a165a1988d874e90fb2276f3f\": rpc error: code = NotFound desc = could not find container \"161f4a5fbd75fcfbb45183eac1880bc4bbb0fd4a165a1988d874e90fb2276f3f\": container with ID starting with 161f4a5fbd75fcfbb45183eac1880bc4bbb0fd4a165a1988d874e90fb2276f3f not found: ID does not exist" Oct 08 07:12:05 crc kubenswrapper[4958]: I1008 07:12:05.591446 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568e4933-e768-49d9-857e-9e51e13f9819" path="/var/lib/kubelet/pods/568e4933-e768-49d9-857e-9e51e13f9819/volumes" Oct 08 07:13:36 crc kubenswrapper[4958]: I1008 07:13:36.845174 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:13:36 crc kubenswrapper[4958]: I1008 07:13:36.846132 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:14:06 crc kubenswrapper[4958]: I1008 07:14:06.845589 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:14:06 crc kubenswrapper[4958]: I1008 07:14:06.846443 4958 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:14:36 crc kubenswrapper[4958]: I1008 07:14:36.845708 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:14:36 crc kubenswrapper[4958]: I1008 07:14:36.846483 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:14:36 crc kubenswrapper[4958]: I1008 07:14:36.846569 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 07:14:36 crc kubenswrapper[4958]: I1008 07:14:36.847901 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 07:14:36 crc kubenswrapper[4958]: I1008 07:14:36.848050 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" 
containerName="machine-config-daemon" containerID="cri-o://732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" gracePeriod=600 Oct 08 07:14:36 crc kubenswrapper[4958]: E1008 07:14:36.977789 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:14:37 crc kubenswrapper[4958]: I1008 07:14:37.170084 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" exitCode=0 Oct 08 07:14:37 crc kubenswrapper[4958]: I1008 07:14:37.170186 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575"} Oct 08 07:14:37 crc kubenswrapper[4958]: I1008 07:14:37.170554 4958 scope.go:117] "RemoveContainer" containerID="96102950f6f3d1202963c0671887253ad2deb50e70824ed9c9463cb6977fafab" Oct 08 07:14:37 crc kubenswrapper[4958]: I1008 07:14:37.171309 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:14:37 crc kubenswrapper[4958]: E1008 07:14:37.171914 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:14:47 crc kubenswrapper[4958]: I1008 07:14:47.584520 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:14:47 crc kubenswrapper[4958]: E1008 07:14:47.586426 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.159862 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n"] Oct 08 07:15:00 crc kubenswrapper[4958]: E1008 07:15:00.160806 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568e4933-e768-49d9-857e-9e51e13f9819" containerName="extract-content" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.160822 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="568e4933-e768-49d9-857e-9e51e13f9819" containerName="extract-content" Oct 08 07:15:00 crc kubenswrapper[4958]: E1008 07:15:00.160846 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568e4933-e768-49d9-857e-9e51e13f9819" containerName="registry-server" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.160855 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="568e4933-e768-49d9-857e-9e51e13f9819" containerName="registry-server" Oct 08 07:15:00 crc kubenswrapper[4958]: E1008 07:15:00.160874 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568e4933-e768-49d9-857e-9e51e13f9819" containerName="extract-utilities" Oct 08 07:15:00 crc 
kubenswrapper[4958]: I1008 07:15:00.160887 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="568e4933-e768-49d9-857e-9e51e13f9819" containerName="extract-utilities" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.161357 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="568e4933-e768-49d9-857e-9e51e13f9819" containerName="registry-server" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.162002 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.164152 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.169260 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.172130 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n"] Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.252929 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb50bf43-0d69-496f-b896-5d72a9c84664-secret-volume\") pod \"collect-profiles-29331795-tpf4n\" (UID: \"eb50bf43-0d69-496f-b896-5d72a9c84664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.253057 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb50bf43-0d69-496f-b896-5d72a9c84664-config-volume\") pod \"collect-profiles-29331795-tpf4n\" (UID: \"eb50bf43-0d69-496f-b896-5d72a9c84664\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.253099 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5n72\" (UniqueName: \"kubernetes.io/projected/eb50bf43-0d69-496f-b896-5d72a9c84664-kube-api-access-b5n72\") pod \"collect-profiles-29331795-tpf4n\" (UID: \"eb50bf43-0d69-496f-b896-5d72a9c84664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.354069 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5n72\" (UniqueName: \"kubernetes.io/projected/eb50bf43-0d69-496f-b896-5d72a9c84664-kube-api-access-b5n72\") pod \"collect-profiles-29331795-tpf4n\" (UID: \"eb50bf43-0d69-496f-b896-5d72a9c84664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.354247 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb50bf43-0d69-496f-b896-5d72a9c84664-secret-volume\") pod \"collect-profiles-29331795-tpf4n\" (UID: \"eb50bf43-0d69-496f-b896-5d72a9c84664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.354349 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb50bf43-0d69-496f-b896-5d72a9c84664-config-volume\") pod \"collect-profiles-29331795-tpf4n\" (UID: \"eb50bf43-0d69-496f-b896-5d72a9c84664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.356108 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/eb50bf43-0d69-496f-b896-5d72a9c84664-config-volume\") pod \"collect-profiles-29331795-tpf4n\" (UID: \"eb50bf43-0d69-496f-b896-5d72a9c84664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.368151 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb50bf43-0d69-496f-b896-5d72a9c84664-secret-volume\") pod \"collect-profiles-29331795-tpf4n\" (UID: \"eb50bf43-0d69-496f-b896-5d72a9c84664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.384452 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5n72\" (UniqueName: \"kubernetes.io/projected/eb50bf43-0d69-496f-b896-5d72a9c84664-kube-api-access-b5n72\") pod \"collect-profiles-29331795-tpf4n\" (UID: \"eb50bf43-0d69-496f-b896-5d72a9c84664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.499859 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.577223 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:15:00 crc kubenswrapper[4958]: E1008 07:15:00.577690 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:15:00 crc kubenswrapper[4958]: I1008 07:15:00.960039 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n"] Oct 08 07:15:00 crc kubenswrapper[4958]: W1008 07:15:00.961033 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb50bf43_0d69_496f_b896_5d72a9c84664.slice/crio-508d216447a96aacf1bcb9568336eca4bf35bc246d7adb63fa45e2a6c8b231d7 WatchSource:0}: Error finding container 508d216447a96aacf1bcb9568336eca4bf35bc246d7adb63fa45e2a6c8b231d7: Status 404 returned error can't find the container with id 508d216447a96aacf1bcb9568336eca4bf35bc246d7adb63fa45e2a6c8b231d7 Oct 08 07:15:01 crc kubenswrapper[4958]: I1008 07:15:01.400292 4958 generic.go:334] "Generic (PLEG): container finished" podID="eb50bf43-0d69-496f-b896-5d72a9c84664" containerID="7929edf725af568b1918d3547517e53924ac4da82754060a0cefdd275c0152d5" exitCode=0 Oct 08 07:15:01 crc kubenswrapper[4958]: I1008 07:15:01.400465 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" 
event={"ID":"eb50bf43-0d69-496f-b896-5d72a9c84664","Type":"ContainerDied","Data":"7929edf725af568b1918d3547517e53924ac4da82754060a0cefdd275c0152d5"} Oct 08 07:15:01 crc kubenswrapper[4958]: I1008 07:15:01.400576 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" event={"ID":"eb50bf43-0d69-496f-b896-5d72a9c84664","Type":"ContainerStarted","Data":"508d216447a96aacf1bcb9568336eca4bf35bc246d7adb63fa45e2a6c8b231d7"} Oct 08 07:15:02 crc kubenswrapper[4958]: I1008 07:15:02.708532 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" Oct 08 07:15:02 crc kubenswrapper[4958]: I1008 07:15:02.790624 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb50bf43-0d69-496f-b896-5d72a9c84664-config-volume\") pod \"eb50bf43-0d69-496f-b896-5d72a9c84664\" (UID: \"eb50bf43-0d69-496f-b896-5d72a9c84664\") " Oct 08 07:15:02 crc kubenswrapper[4958]: I1008 07:15:02.790985 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5n72\" (UniqueName: \"kubernetes.io/projected/eb50bf43-0d69-496f-b896-5d72a9c84664-kube-api-access-b5n72\") pod \"eb50bf43-0d69-496f-b896-5d72a9c84664\" (UID: \"eb50bf43-0d69-496f-b896-5d72a9c84664\") " Oct 08 07:15:02 crc kubenswrapper[4958]: I1008 07:15:02.791083 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb50bf43-0d69-496f-b896-5d72a9c84664-secret-volume\") pod \"eb50bf43-0d69-496f-b896-5d72a9c84664\" (UID: \"eb50bf43-0d69-496f-b896-5d72a9c84664\") " Oct 08 07:15:02 crc kubenswrapper[4958]: I1008 07:15:02.791454 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb50bf43-0d69-496f-b896-5d72a9c84664-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "eb50bf43-0d69-496f-b896-5d72a9c84664" (UID: "eb50bf43-0d69-496f-b896-5d72a9c84664"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:15:02 crc kubenswrapper[4958]: I1008 07:15:02.797104 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb50bf43-0d69-496f-b896-5d72a9c84664-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eb50bf43-0d69-496f-b896-5d72a9c84664" (UID: "eb50bf43-0d69-496f-b896-5d72a9c84664"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 07:15:02 crc kubenswrapper[4958]: I1008 07:15:02.798598 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb50bf43-0d69-496f-b896-5d72a9c84664-kube-api-access-b5n72" (OuterVolumeSpecName: "kube-api-access-b5n72") pod "eb50bf43-0d69-496f-b896-5d72a9c84664" (UID: "eb50bf43-0d69-496f-b896-5d72a9c84664"). InnerVolumeSpecName "kube-api-access-b5n72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:15:02 crc kubenswrapper[4958]: I1008 07:15:02.893000 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb50bf43-0d69-496f-b896-5d72a9c84664-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 07:15:02 crc kubenswrapper[4958]: I1008 07:15:02.893049 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5n72\" (UniqueName: \"kubernetes.io/projected/eb50bf43-0d69-496f-b896-5d72a9c84664-kube-api-access-b5n72\") on node \"crc\" DevicePath \"\"" Oct 08 07:15:02 crc kubenswrapper[4958]: I1008 07:15:02.893069 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb50bf43-0d69-496f-b896-5d72a9c84664-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 07:15:03 crc kubenswrapper[4958]: I1008 07:15:03.416393 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" event={"ID":"eb50bf43-0d69-496f-b896-5d72a9c84664","Type":"ContainerDied","Data":"508d216447a96aacf1bcb9568336eca4bf35bc246d7adb63fa45e2a6c8b231d7"} Oct 08 07:15:03 crc kubenswrapper[4958]: I1008 07:15:03.416443 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="508d216447a96aacf1bcb9568336eca4bf35bc246d7adb63fa45e2a6c8b231d7" Oct 08 07:15:03 crc kubenswrapper[4958]: I1008 07:15:03.416816 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n" Oct 08 07:15:03 crc kubenswrapper[4958]: I1008 07:15:03.798842 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww"] Oct 08 07:15:03 crc kubenswrapper[4958]: I1008 07:15:03.805778 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331750-99jww"] Oct 08 07:15:05 crc kubenswrapper[4958]: I1008 07:15:05.604019 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86b11ac2-2a27-4859-b9b2-595d2ca7556d" path="/var/lib/kubelet/pods/86b11ac2-2a27-4859-b9b2-595d2ca7556d/volumes" Oct 08 07:15:15 crc kubenswrapper[4958]: I1008 07:15:15.576912 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:15:15 crc kubenswrapper[4958]: E1008 07:15:15.578048 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:15:28 crc kubenswrapper[4958]: I1008 07:15:28.576166 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:15:28 crc kubenswrapper[4958]: E1008 07:15:28.577280 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:15:36 crc kubenswrapper[4958]: I1008 07:15:36.651781 4958 scope.go:117] "RemoveContainer" containerID="a673b4e562496da508b65c252464eeb031c8e8a1697e452cbd0ffef29062d9cd" Oct 08 07:15:39 crc kubenswrapper[4958]: I1008 07:15:39.577583 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:15:39 crc kubenswrapper[4958]: E1008 07:15:39.578512 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:15:50 crc kubenswrapper[4958]: I1008 07:15:50.576913 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:15:50 crc kubenswrapper[4958]: E1008 07:15:50.577973 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:16:03 crc kubenswrapper[4958]: I1008 07:16:03.577231 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:16:03 crc kubenswrapper[4958]: E1008 07:16:03.578131 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:16:17 crc kubenswrapper[4958]: I1008 07:16:17.585161 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:16:17 crc kubenswrapper[4958]: E1008 07:16:17.586638 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:16:29 crc kubenswrapper[4958]: I1008 07:16:29.576849 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:16:29 crc kubenswrapper[4958]: E1008 07:16:29.578831 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:16:44 crc kubenswrapper[4958]: I1008 07:16:44.576544 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:16:44 crc kubenswrapper[4958]: E1008 07:16:44.578442 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:16:48 crc kubenswrapper[4958]: I1008 07:16:48.543197 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9cbl7"] Oct 08 07:16:48 crc kubenswrapper[4958]: E1008 07:16:48.543881 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb50bf43-0d69-496f-b896-5d72a9c84664" containerName="collect-profiles" Oct 08 07:16:48 crc kubenswrapper[4958]: I1008 07:16:48.543894 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb50bf43-0d69-496f-b896-5d72a9c84664" containerName="collect-profiles" Oct 08 07:16:48 crc kubenswrapper[4958]: I1008 07:16:48.544124 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb50bf43-0d69-496f-b896-5d72a9c84664" containerName="collect-profiles" Oct 08 07:16:48 crc kubenswrapper[4958]: I1008 07:16:48.546121 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:16:48 crc kubenswrapper[4958]: I1008 07:16:48.562391 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9cbl7"] Oct 08 07:16:48 crc kubenswrapper[4958]: I1008 07:16:48.617245 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-catalog-content\") pod \"redhat-operators-9cbl7\" (UID: \"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d\") " pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:16:48 crc kubenswrapper[4958]: I1008 07:16:48.617354 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6jvt\" (UniqueName: \"kubernetes.io/projected/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-kube-api-access-d6jvt\") pod \"redhat-operators-9cbl7\" (UID: \"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d\") " pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:16:48 crc kubenswrapper[4958]: I1008 07:16:48.617391 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-utilities\") pod \"redhat-operators-9cbl7\" (UID: \"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d\") " pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:16:48 crc kubenswrapper[4958]: I1008 07:16:48.719794 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6jvt\" (UniqueName: \"kubernetes.io/projected/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-kube-api-access-d6jvt\") pod \"redhat-operators-9cbl7\" (UID: \"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d\") " pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:16:48 crc kubenswrapper[4958]: I1008 07:16:48.719869 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-utilities\") pod \"redhat-operators-9cbl7\" (UID: \"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d\") " pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:16:48 crc kubenswrapper[4958]: I1008 07:16:48.719989 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-catalog-content\") pod \"redhat-operators-9cbl7\" (UID: \"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d\") " pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:16:48 crc kubenswrapper[4958]: I1008 07:16:48.720585 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-utilities\") pod \"redhat-operators-9cbl7\" (UID: \"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d\") " pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:16:48 crc kubenswrapper[4958]: I1008 07:16:48.720638 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-catalog-content\") pod \"redhat-operators-9cbl7\" (UID: \"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d\") " pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:16:48 crc kubenswrapper[4958]: I1008 07:16:48.743170 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6jvt\" (UniqueName: \"kubernetes.io/projected/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-kube-api-access-d6jvt\") pod \"redhat-operators-9cbl7\" (UID: \"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d\") " pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:16:48 crc kubenswrapper[4958]: I1008 07:16:48.869792 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:16:49 crc kubenswrapper[4958]: I1008 07:16:49.332601 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9cbl7"] Oct 08 07:16:49 crc kubenswrapper[4958]: W1008 07:16:49.338620 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04d838c1_5ee4_4c35_9b2b_88a0cf49d82d.slice/crio-bf02e0d00d04e67a1cd67b55d1fb68652039fd605fd6584e4efed27243ac8591 WatchSource:0}: Error finding container bf02e0d00d04e67a1cd67b55d1fb68652039fd605fd6584e4efed27243ac8591: Status 404 returned error can't find the container with id bf02e0d00d04e67a1cd67b55d1fb68652039fd605fd6584e4efed27243ac8591 Oct 08 07:16:49 crc kubenswrapper[4958]: I1008 07:16:49.382581 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cbl7" event={"ID":"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d","Type":"ContainerStarted","Data":"bf02e0d00d04e67a1cd67b55d1fb68652039fd605fd6584e4efed27243ac8591"} Oct 08 07:16:50 crc kubenswrapper[4958]: I1008 07:16:50.396501 4958 generic.go:334] "Generic (PLEG): container finished" podID="04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" containerID="095e352496bc2766a906d1abf2fb0ef17b6f4a0161cd938eceb8e1b642d63d2a" exitCode=0 Oct 08 07:16:50 crc kubenswrapper[4958]: I1008 07:16:50.396587 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cbl7" event={"ID":"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d","Type":"ContainerDied","Data":"095e352496bc2766a906d1abf2fb0ef17b6f4a0161cd938eceb8e1b642d63d2a"} Oct 08 07:16:50 crc kubenswrapper[4958]: I1008 07:16:50.399660 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 07:16:51 crc kubenswrapper[4958]: I1008 07:16:51.414198 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9cbl7" event={"ID":"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d","Type":"ContainerStarted","Data":"bc570a74bdd93384b5da9cb13e2b2e8d14d6135632d26cfa61c429f20a2e878f"} Oct 08 07:16:52 crc kubenswrapper[4958]: I1008 07:16:52.427485 4958 generic.go:334] "Generic (PLEG): container finished" podID="04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" containerID="bc570a74bdd93384b5da9cb13e2b2e8d14d6135632d26cfa61c429f20a2e878f" exitCode=0 Oct 08 07:16:52 crc kubenswrapper[4958]: I1008 07:16:52.427549 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cbl7" event={"ID":"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d","Type":"ContainerDied","Data":"bc570a74bdd93384b5da9cb13e2b2e8d14d6135632d26cfa61c429f20a2e878f"} Oct 08 07:16:53 crc kubenswrapper[4958]: I1008 07:16:53.449731 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cbl7" event={"ID":"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d","Type":"ContainerStarted","Data":"b5b6a675141726177a5c89e3f895862d5010655308857b698c61bfcfaa301999"} Oct 08 07:16:53 crc kubenswrapper[4958]: I1008 07:16:53.494455 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9cbl7" podStartSLOduration=2.964794527 podStartE2EDuration="5.494432536s" podCreationTimestamp="2025-10-08 07:16:48 +0000 UTC" firstStartedPulling="2025-10-08 07:16:50.399233704 +0000 UTC m=+2553.528926345" lastFinishedPulling="2025-10-08 07:16:52.928871713 +0000 UTC m=+2556.058564354" observedRunningTime="2025-10-08 07:16:53.490825479 +0000 UTC m=+2556.620518110" watchObservedRunningTime="2025-10-08 07:16:53.494432536 +0000 UTC m=+2556.624125177" Oct 08 07:16:55 crc kubenswrapper[4958]: I1008 07:16:55.577620 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:16:55 crc kubenswrapper[4958]: E1008 07:16:55.578494 4958 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:16:58 crc kubenswrapper[4958]: I1008 07:16:58.870962 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:16:58 crc kubenswrapper[4958]: I1008 07:16:58.871292 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:16:59 crc kubenswrapper[4958]: I1008 07:16:59.946003 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9cbl7" podUID="04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" containerName="registry-server" probeResult="failure" output=< Oct 08 07:16:59 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 07:16:59 crc kubenswrapper[4958]: > Oct 08 07:17:07 crc kubenswrapper[4958]: I1008 07:17:07.579433 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:17:07 crc kubenswrapper[4958]: E1008 07:17:07.579875 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:17:08 crc kubenswrapper[4958]: I1008 07:17:08.946860 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:17:09 crc kubenswrapper[4958]: I1008 07:17:09.036039 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:17:09 crc kubenswrapper[4958]: I1008 07:17:09.197589 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9cbl7"] Oct 08 07:17:10 crc kubenswrapper[4958]: I1008 07:17:10.617707 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9cbl7" podUID="04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" containerName="registry-server" containerID="cri-o://b5b6a675141726177a5c89e3f895862d5010655308857b698c61bfcfaa301999" gracePeriod=2 Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.146217 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.207181 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-catalog-content\") pod \"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d\" (UID: \"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d\") " Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.207457 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6jvt\" (UniqueName: \"kubernetes.io/projected/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-kube-api-access-d6jvt\") pod \"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d\" (UID: \"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d\") " Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.207559 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-utilities\") pod 
\"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d\" (UID: \"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d\") " Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.209606 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-utilities" (OuterVolumeSpecName: "utilities") pod "04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" (UID: "04d838c1-5ee4-4c35-9b2b-88a0cf49d82d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.216839 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-kube-api-access-d6jvt" (OuterVolumeSpecName: "kube-api-access-d6jvt") pod "04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" (UID: "04d838c1-5ee4-4c35-9b2b-88a0cf49d82d"). InnerVolumeSpecName "kube-api-access-d6jvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.309924 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6jvt\" (UniqueName: \"kubernetes.io/projected/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-kube-api-access-d6jvt\") on node \"crc\" DevicePath \"\"" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.310000 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.321354 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" (UID: "04d838c1-5ee4-4c35-9b2b-88a0cf49d82d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.411848 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.643553 4958 generic.go:334] "Generic (PLEG): container finished" podID="04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" containerID="b5b6a675141726177a5c89e3f895862d5010655308857b698c61bfcfaa301999" exitCode=0 Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.643624 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cbl7" event={"ID":"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d","Type":"ContainerDied","Data":"b5b6a675141726177a5c89e3f895862d5010655308857b698c61bfcfaa301999"} Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.643734 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9cbl7" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.643686 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9cbl7" event={"ID":"04d838c1-5ee4-4c35-9b2b-88a0cf49d82d","Type":"ContainerDied","Data":"bf02e0d00d04e67a1cd67b55d1fb68652039fd605fd6584e4efed27243ac8591"} Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.643808 4958 scope.go:117] "RemoveContainer" containerID="b5b6a675141726177a5c89e3f895862d5010655308857b698c61bfcfaa301999" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.680158 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9cbl7"] Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.681701 4958 scope.go:117] "RemoveContainer" containerID="bc570a74bdd93384b5da9cb13e2b2e8d14d6135632d26cfa61c429f20a2e878f" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.692092 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9cbl7"] Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.712524 4958 scope.go:117] "RemoveContainer" containerID="095e352496bc2766a906d1abf2fb0ef17b6f4a0161cd938eceb8e1b642d63d2a" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.743063 4958 scope.go:117] "RemoveContainer" containerID="b5b6a675141726177a5c89e3f895862d5010655308857b698c61bfcfaa301999" Oct 08 07:17:11 crc kubenswrapper[4958]: E1008 07:17:11.743732 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5b6a675141726177a5c89e3f895862d5010655308857b698c61bfcfaa301999\": container with ID starting with b5b6a675141726177a5c89e3f895862d5010655308857b698c61bfcfaa301999 not found: ID does not exist" containerID="b5b6a675141726177a5c89e3f895862d5010655308857b698c61bfcfaa301999" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.743783 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5b6a675141726177a5c89e3f895862d5010655308857b698c61bfcfaa301999"} err="failed to get container status \"b5b6a675141726177a5c89e3f895862d5010655308857b698c61bfcfaa301999\": rpc error: code = NotFound desc = could not find container \"b5b6a675141726177a5c89e3f895862d5010655308857b698c61bfcfaa301999\": container with ID starting with b5b6a675141726177a5c89e3f895862d5010655308857b698c61bfcfaa301999 not found: ID does not exist" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.743815 4958 scope.go:117] "RemoveContainer" containerID="bc570a74bdd93384b5da9cb13e2b2e8d14d6135632d26cfa61c429f20a2e878f" Oct 08 07:17:11 crc kubenswrapper[4958]: E1008 07:17:11.744427 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc570a74bdd93384b5da9cb13e2b2e8d14d6135632d26cfa61c429f20a2e878f\": container with ID starting with bc570a74bdd93384b5da9cb13e2b2e8d14d6135632d26cfa61c429f20a2e878f not found: ID does not exist" containerID="bc570a74bdd93384b5da9cb13e2b2e8d14d6135632d26cfa61c429f20a2e878f" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.744458 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc570a74bdd93384b5da9cb13e2b2e8d14d6135632d26cfa61c429f20a2e878f"} err="failed to get container status \"bc570a74bdd93384b5da9cb13e2b2e8d14d6135632d26cfa61c429f20a2e878f\": rpc error: code = NotFound desc = could not find container \"bc570a74bdd93384b5da9cb13e2b2e8d14d6135632d26cfa61c429f20a2e878f\": container with ID starting with bc570a74bdd93384b5da9cb13e2b2e8d14d6135632d26cfa61c429f20a2e878f not found: ID does not exist" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.744473 4958 scope.go:117] "RemoveContainer" containerID="095e352496bc2766a906d1abf2fb0ef17b6f4a0161cd938eceb8e1b642d63d2a" Oct 08 07:17:11 crc kubenswrapper[4958]: E1008 
07:17:11.745083 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"095e352496bc2766a906d1abf2fb0ef17b6f4a0161cd938eceb8e1b642d63d2a\": container with ID starting with 095e352496bc2766a906d1abf2fb0ef17b6f4a0161cd938eceb8e1b642d63d2a not found: ID does not exist" containerID="095e352496bc2766a906d1abf2fb0ef17b6f4a0161cd938eceb8e1b642d63d2a" Oct 08 07:17:11 crc kubenswrapper[4958]: I1008 07:17:11.745150 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"095e352496bc2766a906d1abf2fb0ef17b6f4a0161cd938eceb8e1b642d63d2a"} err="failed to get container status \"095e352496bc2766a906d1abf2fb0ef17b6f4a0161cd938eceb8e1b642d63d2a\": rpc error: code = NotFound desc = could not find container \"095e352496bc2766a906d1abf2fb0ef17b6f4a0161cd938eceb8e1b642d63d2a\": container with ID starting with 095e352496bc2766a906d1abf2fb0ef17b6f4a0161cd938eceb8e1b642d63d2a not found: ID does not exist" Oct 08 07:17:13 crc kubenswrapper[4958]: I1008 07:17:13.593868 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" path="/var/lib/kubelet/pods/04d838c1-5ee4-4c35-9b2b-88a0cf49d82d/volumes" Oct 08 07:17:21 crc kubenswrapper[4958]: I1008 07:17:21.584452 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:17:21 crc kubenswrapper[4958]: E1008 07:17:21.585479 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:17:35 crc kubenswrapper[4958]: I1008 07:17:35.576501 
4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:17:35 crc kubenswrapper[4958]: E1008 07:17:35.577441 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:17:49 crc kubenswrapper[4958]: I1008 07:17:49.577306 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:17:49 crc kubenswrapper[4958]: E1008 07:17:49.578219 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:18:02 crc kubenswrapper[4958]: I1008 07:18:02.577536 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:18:02 crc kubenswrapper[4958]: E1008 07:18:02.578699 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:18:14 crc kubenswrapper[4958]: I1008 
07:18:14.576697 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:18:14 crc kubenswrapper[4958]: E1008 07:18:14.577536 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:18:29 crc kubenswrapper[4958]: I1008 07:18:29.576558 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:18:29 crc kubenswrapper[4958]: E1008 07:18:29.577877 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:18:41 crc kubenswrapper[4958]: I1008 07:18:41.579868 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:18:41 crc kubenswrapper[4958]: E1008 07:18:41.581401 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:18:53 crc 
kubenswrapper[4958]: I1008 07:18:53.577164 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:18:53 crc kubenswrapper[4958]: E1008 07:18:53.578286 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:19:05 crc kubenswrapper[4958]: I1008 07:19:05.577778 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:19:05 crc kubenswrapper[4958]: E1008 07:19:05.578371 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:19:19 crc kubenswrapper[4958]: I1008 07:19:19.576682 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:19:19 crc kubenswrapper[4958]: E1008 07:19:19.577695 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 
08 07:19:32 crc kubenswrapper[4958]: I1008 07:19:32.577049 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:19:32 crc kubenswrapper[4958]: E1008 07:19:32.577894 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.745005 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dw8t4"] Oct 08 07:19:35 crc kubenswrapper[4958]: E1008 07:19:35.745673 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" containerName="registry-server" Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.745688 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" containerName="registry-server" Oct 08 07:19:35 crc kubenswrapper[4958]: E1008 07:19:35.745706 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" containerName="extract-utilities" Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.745713 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" containerName="extract-utilities" Oct 08 07:19:35 crc kubenswrapper[4958]: E1008 07:19:35.745738 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" containerName="extract-content" Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.745746 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" containerName="extract-content" Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.745918 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d838c1-5ee4-4c35-9b2b-88a0cf49d82d" containerName="registry-server" Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.747137 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.778072 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dw8t4"] Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.813790 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmmbl\" (UniqueName: \"kubernetes.io/projected/0e2be602-ba28-4e96-a558-3a73b88777e1-kube-api-access-hmmbl\") pod \"certified-operators-dw8t4\" (UID: \"0e2be602-ba28-4e96-a558-3a73b88777e1\") " pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.813902 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e2be602-ba28-4e96-a558-3a73b88777e1-utilities\") pod \"certified-operators-dw8t4\" (UID: \"0e2be602-ba28-4e96-a558-3a73b88777e1\") " pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.814039 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e2be602-ba28-4e96-a558-3a73b88777e1-catalog-content\") pod \"certified-operators-dw8t4\" (UID: \"0e2be602-ba28-4e96-a558-3a73b88777e1\") " pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.915307 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e2be602-ba28-4e96-a558-3a73b88777e1-catalog-content\") pod \"certified-operators-dw8t4\" (UID: \"0e2be602-ba28-4e96-a558-3a73b88777e1\") " pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.915414 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmmbl\" (UniqueName: \"kubernetes.io/projected/0e2be602-ba28-4e96-a558-3a73b88777e1-kube-api-access-hmmbl\") pod \"certified-operators-dw8t4\" (UID: \"0e2be602-ba28-4e96-a558-3a73b88777e1\") " pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.915460 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e2be602-ba28-4e96-a558-3a73b88777e1-utilities\") pod \"certified-operators-dw8t4\" (UID: \"0e2be602-ba28-4e96-a558-3a73b88777e1\") " pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.915908 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e2be602-ba28-4e96-a558-3a73b88777e1-catalog-content\") pod \"certified-operators-dw8t4\" (UID: \"0e2be602-ba28-4e96-a558-3a73b88777e1\") " pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.916010 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e2be602-ba28-4e96-a558-3a73b88777e1-utilities\") pod \"certified-operators-dw8t4\" (UID: \"0e2be602-ba28-4e96-a558-3a73b88777e1\") " pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:35 crc kubenswrapper[4958]: I1008 07:19:35.936567 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hmmbl\" (UniqueName: \"kubernetes.io/projected/0e2be602-ba28-4e96-a558-3a73b88777e1-kube-api-access-hmmbl\") pod \"certified-operators-dw8t4\" (UID: \"0e2be602-ba28-4e96-a558-3a73b88777e1\") " pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:36 crc kubenswrapper[4958]: I1008 07:19:36.123337 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:36 crc kubenswrapper[4958]: I1008 07:19:36.603842 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dw8t4"] Oct 08 07:19:37 crc kubenswrapper[4958]: I1008 07:19:37.089297 4958 generic.go:334] "Generic (PLEG): container finished" podID="0e2be602-ba28-4e96-a558-3a73b88777e1" containerID="a49b18a7c07e83ad59368902e135f1bddf2f9afd7c9d671b1a5c6b9f15c32ad3" exitCode=0 Oct 08 07:19:37 crc kubenswrapper[4958]: I1008 07:19:37.089369 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw8t4" event={"ID":"0e2be602-ba28-4e96-a558-3a73b88777e1","Type":"ContainerDied","Data":"a49b18a7c07e83ad59368902e135f1bddf2f9afd7c9d671b1a5c6b9f15c32ad3"} Oct 08 07:19:37 crc kubenswrapper[4958]: I1008 07:19:37.089417 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw8t4" event={"ID":"0e2be602-ba28-4e96-a558-3a73b88777e1","Type":"ContainerStarted","Data":"bab6494d29b0b92a0f42a05360e4bec9f6ae9612e55cb3e974f3c6db5e4446a2"} Oct 08 07:19:38 crc kubenswrapper[4958]: I1008 07:19:38.110300 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw8t4" event={"ID":"0e2be602-ba28-4e96-a558-3a73b88777e1","Type":"ContainerStarted","Data":"124a0cf24217f14d008359cc96ad59cca5fdf3ba212b067a9526a9fe5f8f2ee1"} Oct 08 07:19:39 crc kubenswrapper[4958]: I1008 07:19:39.125027 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="0e2be602-ba28-4e96-a558-3a73b88777e1" containerID="124a0cf24217f14d008359cc96ad59cca5fdf3ba212b067a9526a9fe5f8f2ee1" exitCode=0 Oct 08 07:19:39 crc kubenswrapper[4958]: I1008 07:19:39.125138 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw8t4" event={"ID":"0e2be602-ba28-4e96-a558-3a73b88777e1","Type":"ContainerDied","Data":"124a0cf24217f14d008359cc96ad59cca5fdf3ba212b067a9526a9fe5f8f2ee1"} Oct 08 07:19:40 crc kubenswrapper[4958]: I1008 07:19:40.139473 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw8t4" event={"ID":"0e2be602-ba28-4e96-a558-3a73b88777e1","Type":"ContainerStarted","Data":"7eb5a9695c00e8e1a864debe7b2de67bc7637f54b347eaaeaf54c75736a3eb9a"} Oct 08 07:19:40 crc kubenswrapper[4958]: I1008 07:19:40.172675 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dw8t4" podStartSLOduration=2.692764628 podStartE2EDuration="5.172655133s" podCreationTimestamp="2025-10-08 07:19:35 +0000 UTC" firstStartedPulling="2025-10-08 07:19:37.09229944 +0000 UTC m=+2720.221992081" lastFinishedPulling="2025-10-08 07:19:39.572189955 +0000 UTC m=+2722.701882586" observedRunningTime="2025-10-08 07:19:40.163847096 +0000 UTC m=+2723.293539727" watchObservedRunningTime="2025-10-08 07:19:40.172655133 +0000 UTC m=+2723.302347744" Oct 08 07:19:43 crc kubenswrapper[4958]: I1008 07:19:43.576810 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:19:44 crc kubenswrapper[4958]: I1008 07:19:44.191087 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"b788309df178ef4881230b7210a3b31b59c893e8712de36c28a315be104affab"} Oct 08 07:19:46 crc kubenswrapper[4958]: I1008 07:19:46.124314 4958 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:46 crc kubenswrapper[4958]: I1008 07:19:46.124790 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:46 crc kubenswrapper[4958]: I1008 07:19:46.201741 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:46 crc kubenswrapper[4958]: I1008 07:19:46.259846 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:46 crc kubenswrapper[4958]: I1008 07:19:46.446131 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dw8t4"] Oct 08 07:19:48 crc kubenswrapper[4958]: I1008 07:19:48.228755 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dw8t4" podUID="0e2be602-ba28-4e96-a558-3a73b88777e1" containerName="registry-server" containerID="cri-o://7eb5a9695c00e8e1a864debe7b2de67bc7637f54b347eaaeaf54c75736a3eb9a" gracePeriod=2 Oct 08 07:19:48 crc kubenswrapper[4958]: I1008 07:19:48.706713 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:48 crc kubenswrapper[4958]: I1008 07:19:48.828502 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e2be602-ba28-4e96-a558-3a73b88777e1-catalog-content\") pod \"0e2be602-ba28-4e96-a558-3a73b88777e1\" (UID: \"0e2be602-ba28-4e96-a558-3a73b88777e1\") " Oct 08 07:19:48 crc kubenswrapper[4958]: I1008 07:19:48.828595 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e2be602-ba28-4e96-a558-3a73b88777e1-utilities\") pod \"0e2be602-ba28-4e96-a558-3a73b88777e1\" (UID: \"0e2be602-ba28-4e96-a558-3a73b88777e1\") " Oct 08 07:19:48 crc kubenswrapper[4958]: I1008 07:19:48.828647 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmmbl\" (UniqueName: \"kubernetes.io/projected/0e2be602-ba28-4e96-a558-3a73b88777e1-kube-api-access-hmmbl\") pod \"0e2be602-ba28-4e96-a558-3a73b88777e1\" (UID: \"0e2be602-ba28-4e96-a558-3a73b88777e1\") " Oct 08 07:19:48 crc kubenswrapper[4958]: I1008 07:19:48.829759 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e2be602-ba28-4e96-a558-3a73b88777e1-utilities" (OuterVolumeSpecName: "utilities") pod "0e2be602-ba28-4e96-a558-3a73b88777e1" (UID: "0e2be602-ba28-4e96-a558-3a73b88777e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:19:48 crc kubenswrapper[4958]: I1008 07:19:48.836864 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e2be602-ba28-4e96-a558-3a73b88777e1-kube-api-access-hmmbl" (OuterVolumeSpecName: "kube-api-access-hmmbl") pod "0e2be602-ba28-4e96-a558-3a73b88777e1" (UID: "0e2be602-ba28-4e96-a558-3a73b88777e1"). InnerVolumeSpecName "kube-api-access-hmmbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:19:48 crc kubenswrapper[4958]: I1008 07:19:48.913217 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e2be602-ba28-4e96-a558-3a73b88777e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e2be602-ba28-4e96-a558-3a73b88777e1" (UID: "0e2be602-ba28-4e96-a558-3a73b88777e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:19:48 crc kubenswrapper[4958]: I1008 07:19:48.930763 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmmbl\" (UniqueName: \"kubernetes.io/projected/0e2be602-ba28-4e96-a558-3a73b88777e1-kube-api-access-hmmbl\") on node \"crc\" DevicePath \"\"" Oct 08 07:19:48 crc kubenswrapper[4958]: I1008 07:19:48.930813 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e2be602-ba28-4e96-a558-3a73b88777e1-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:19:48 crc kubenswrapper[4958]: I1008 07:19:48.930827 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e2be602-ba28-4e96-a558-3a73b88777e1-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 07:19:49.245521 4958 generic.go:334] "Generic (PLEG): container finished" podID="0e2be602-ba28-4e96-a558-3a73b88777e1" containerID="7eb5a9695c00e8e1a864debe7b2de67bc7637f54b347eaaeaf54c75736a3eb9a" exitCode=0 Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 07:19:49.245594 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw8t4" event={"ID":"0e2be602-ba28-4e96-a558-3a73b88777e1","Type":"ContainerDied","Data":"7eb5a9695c00e8e1a864debe7b2de67bc7637f54b347eaaeaf54c75736a3eb9a"} Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 07:19:49.245676 4958 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-dw8t4" event={"ID":"0e2be602-ba28-4e96-a558-3a73b88777e1","Type":"ContainerDied","Data":"bab6494d29b0b92a0f42a05360e4bec9f6ae9612e55cb3e974f3c6db5e4446a2"} Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 07:19:49.245678 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dw8t4" Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 07:19:49.245712 4958 scope.go:117] "RemoveContainer" containerID="7eb5a9695c00e8e1a864debe7b2de67bc7637f54b347eaaeaf54c75736a3eb9a" Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 07:19:49.272111 4958 scope.go:117] "RemoveContainer" containerID="124a0cf24217f14d008359cc96ad59cca5fdf3ba212b067a9526a9fe5f8f2ee1" Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 07:19:49.293843 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dw8t4"] Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 07:19:49.299416 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dw8t4"] Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 07:19:49.333791 4958 scope.go:117] "RemoveContainer" containerID="a49b18a7c07e83ad59368902e135f1bddf2f9afd7c9d671b1a5c6b9f15c32ad3" Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 07:19:49.353527 4958 scope.go:117] "RemoveContainer" containerID="7eb5a9695c00e8e1a864debe7b2de67bc7637f54b347eaaeaf54c75736a3eb9a" Oct 08 07:19:49 crc kubenswrapper[4958]: E1008 07:19:49.355984 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eb5a9695c00e8e1a864debe7b2de67bc7637f54b347eaaeaf54c75736a3eb9a\": container with ID starting with 7eb5a9695c00e8e1a864debe7b2de67bc7637f54b347eaaeaf54c75736a3eb9a not found: ID does not exist" containerID="7eb5a9695c00e8e1a864debe7b2de67bc7637f54b347eaaeaf54c75736a3eb9a" Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 
07:19:49.356027 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb5a9695c00e8e1a864debe7b2de67bc7637f54b347eaaeaf54c75736a3eb9a"} err="failed to get container status \"7eb5a9695c00e8e1a864debe7b2de67bc7637f54b347eaaeaf54c75736a3eb9a\": rpc error: code = NotFound desc = could not find container \"7eb5a9695c00e8e1a864debe7b2de67bc7637f54b347eaaeaf54c75736a3eb9a\": container with ID starting with 7eb5a9695c00e8e1a864debe7b2de67bc7637f54b347eaaeaf54c75736a3eb9a not found: ID does not exist" Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 07:19:49.356060 4958 scope.go:117] "RemoveContainer" containerID="124a0cf24217f14d008359cc96ad59cca5fdf3ba212b067a9526a9fe5f8f2ee1" Oct 08 07:19:49 crc kubenswrapper[4958]: E1008 07:19:49.356488 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"124a0cf24217f14d008359cc96ad59cca5fdf3ba212b067a9526a9fe5f8f2ee1\": container with ID starting with 124a0cf24217f14d008359cc96ad59cca5fdf3ba212b067a9526a9fe5f8f2ee1 not found: ID does not exist" containerID="124a0cf24217f14d008359cc96ad59cca5fdf3ba212b067a9526a9fe5f8f2ee1" Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 07:19:49.356516 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"124a0cf24217f14d008359cc96ad59cca5fdf3ba212b067a9526a9fe5f8f2ee1"} err="failed to get container status \"124a0cf24217f14d008359cc96ad59cca5fdf3ba212b067a9526a9fe5f8f2ee1\": rpc error: code = NotFound desc = could not find container \"124a0cf24217f14d008359cc96ad59cca5fdf3ba212b067a9526a9fe5f8f2ee1\": container with ID starting with 124a0cf24217f14d008359cc96ad59cca5fdf3ba212b067a9526a9fe5f8f2ee1 not found: ID does not exist" Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 07:19:49.356533 4958 scope.go:117] "RemoveContainer" containerID="a49b18a7c07e83ad59368902e135f1bddf2f9afd7c9d671b1a5c6b9f15c32ad3" Oct 08 07:19:49 crc 
kubenswrapper[4958]: E1008 07:19:49.356893 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a49b18a7c07e83ad59368902e135f1bddf2f9afd7c9d671b1a5c6b9f15c32ad3\": container with ID starting with a49b18a7c07e83ad59368902e135f1bddf2f9afd7c9d671b1a5c6b9f15c32ad3 not found: ID does not exist" containerID="a49b18a7c07e83ad59368902e135f1bddf2f9afd7c9d671b1a5c6b9f15c32ad3" Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 07:19:49.356926 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a49b18a7c07e83ad59368902e135f1bddf2f9afd7c9d671b1a5c6b9f15c32ad3"} err="failed to get container status \"a49b18a7c07e83ad59368902e135f1bddf2f9afd7c9d671b1a5c6b9f15c32ad3\": rpc error: code = NotFound desc = could not find container \"a49b18a7c07e83ad59368902e135f1bddf2f9afd7c9d671b1a5c6b9f15c32ad3\": container with ID starting with a49b18a7c07e83ad59368902e135f1bddf2f9afd7c9d671b1a5c6b9f15c32ad3 not found: ID does not exist" Oct 08 07:19:49 crc kubenswrapper[4958]: I1008 07:19:49.588228 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e2be602-ba28-4e96-a558-3a73b88777e1" path="/var/lib/kubelet/pods/0e2be602-ba28-4e96-a558-3a73b88777e1/volumes" Oct 08 07:21:55 crc kubenswrapper[4958]: I1008 07:21:55.853631 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2zcpx"] Oct 08 07:21:55 crc kubenswrapper[4958]: E1008 07:21:55.858385 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2be602-ba28-4e96-a558-3a73b88777e1" containerName="extract-content" Oct 08 07:21:55 crc kubenswrapper[4958]: I1008 07:21:55.858421 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2be602-ba28-4e96-a558-3a73b88777e1" containerName="extract-content" Oct 08 07:21:55 crc kubenswrapper[4958]: E1008 07:21:55.858447 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0e2be602-ba28-4e96-a558-3a73b88777e1" containerName="extract-utilities" Oct 08 07:21:55 crc kubenswrapper[4958]: I1008 07:21:55.858460 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2be602-ba28-4e96-a558-3a73b88777e1" containerName="extract-utilities" Oct 08 07:21:55 crc kubenswrapper[4958]: E1008 07:21:55.858505 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e2be602-ba28-4e96-a558-3a73b88777e1" containerName="registry-server" Oct 08 07:21:55 crc kubenswrapper[4958]: I1008 07:21:55.858517 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e2be602-ba28-4e96-a558-3a73b88777e1" containerName="registry-server" Oct 08 07:21:55 crc kubenswrapper[4958]: I1008 07:21:55.859115 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e2be602-ba28-4e96-a558-3a73b88777e1" containerName="registry-server" Oct 08 07:21:55 crc kubenswrapper[4958]: I1008 07:21:55.860978 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:21:55 crc kubenswrapper[4958]: I1008 07:21:55.870738 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zcpx"] Oct 08 07:21:56 crc kubenswrapper[4958]: I1008 07:21:56.050795 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wghb\" (UniqueName: \"kubernetes.io/projected/4d9c9501-682f-46fa-bdfd-75687909c299-kube-api-access-2wghb\") pod \"redhat-marketplace-2zcpx\" (UID: \"4d9c9501-682f-46fa-bdfd-75687909c299\") " pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:21:56 crc kubenswrapper[4958]: I1008 07:21:56.050977 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9c9501-682f-46fa-bdfd-75687909c299-catalog-content\") pod \"redhat-marketplace-2zcpx\" (UID: 
\"4d9c9501-682f-46fa-bdfd-75687909c299\") " pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:21:56 crc kubenswrapper[4958]: I1008 07:21:56.051084 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9c9501-682f-46fa-bdfd-75687909c299-utilities\") pod \"redhat-marketplace-2zcpx\" (UID: \"4d9c9501-682f-46fa-bdfd-75687909c299\") " pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:21:56 crc kubenswrapper[4958]: I1008 07:21:56.152223 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9c9501-682f-46fa-bdfd-75687909c299-catalog-content\") pod \"redhat-marketplace-2zcpx\" (UID: \"4d9c9501-682f-46fa-bdfd-75687909c299\") " pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:21:56 crc kubenswrapper[4958]: I1008 07:21:56.152309 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9c9501-682f-46fa-bdfd-75687909c299-utilities\") pod \"redhat-marketplace-2zcpx\" (UID: \"4d9c9501-682f-46fa-bdfd-75687909c299\") " pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:21:56 crc kubenswrapper[4958]: I1008 07:21:56.153056 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9c9501-682f-46fa-bdfd-75687909c299-utilities\") pod \"redhat-marketplace-2zcpx\" (UID: \"4d9c9501-682f-46fa-bdfd-75687909c299\") " pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:21:56 crc kubenswrapper[4958]: I1008 07:21:56.153142 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9c9501-682f-46fa-bdfd-75687909c299-catalog-content\") pod \"redhat-marketplace-2zcpx\" (UID: \"4d9c9501-682f-46fa-bdfd-75687909c299\") " 
pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:21:56 crc kubenswrapper[4958]: I1008 07:21:56.153787 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wghb\" (UniqueName: \"kubernetes.io/projected/4d9c9501-682f-46fa-bdfd-75687909c299-kube-api-access-2wghb\") pod \"redhat-marketplace-2zcpx\" (UID: \"4d9c9501-682f-46fa-bdfd-75687909c299\") " pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:21:56 crc kubenswrapper[4958]: I1008 07:21:56.177423 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wghb\" (UniqueName: \"kubernetes.io/projected/4d9c9501-682f-46fa-bdfd-75687909c299-kube-api-access-2wghb\") pod \"redhat-marketplace-2zcpx\" (UID: \"4d9c9501-682f-46fa-bdfd-75687909c299\") " pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:21:56 crc kubenswrapper[4958]: I1008 07:21:56.195599 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:21:56 crc kubenswrapper[4958]: I1008 07:21:56.715370 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zcpx"] Oct 08 07:21:57 crc kubenswrapper[4958]: I1008 07:21:57.509770 4958 generic.go:334] "Generic (PLEG): container finished" podID="4d9c9501-682f-46fa-bdfd-75687909c299" containerID="4aebca741c9e65ae0a86c32cc72d263cb6fdd0ebe940c35a558c3fe5d5724dd1" exitCode=0 Oct 08 07:21:57 crc kubenswrapper[4958]: I1008 07:21:57.509887 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zcpx" event={"ID":"4d9c9501-682f-46fa-bdfd-75687909c299","Type":"ContainerDied","Data":"4aebca741c9e65ae0a86c32cc72d263cb6fdd0ebe940c35a558c3fe5d5724dd1"} Oct 08 07:21:57 crc kubenswrapper[4958]: I1008 07:21:57.512132 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zcpx" 
event={"ID":"4d9c9501-682f-46fa-bdfd-75687909c299","Type":"ContainerStarted","Data":"5d5a91e9a8c791b0ba47bb151f2ca609a7939b5f3bdabdde47c13537dcfc7d71"} Oct 08 07:21:57 crc kubenswrapper[4958]: I1008 07:21:57.514739 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 07:21:59 crc kubenswrapper[4958]: I1008 07:21:59.533426 4958 generic.go:334] "Generic (PLEG): container finished" podID="4d9c9501-682f-46fa-bdfd-75687909c299" containerID="79523f69a1a42a093b3666780cb87dc0e420c1dc4539a37256b53ac1041ddbcf" exitCode=0 Oct 08 07:21:59 crc kubenswrapper[4958]: I1008 07:21:59.533492 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zcpx" event={"ID":"4d9c9501-682f-46fa-bdfd-75687909c299","Type":"ContainerDied","Data":"79523f69a1a42a093b3666780cb87dc0e420c1dc4539a37256b53ac1041ddbcf"} Oct 08 07:22:00 crc kubenswrapper[4958]: I1008 07:22:00.547083 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zcpx" event={"ID":"4d9c9501-682f-46fa-bdfd-75687909c299","Type":"ContainerStarted","Data":"e062ab4b54a3e99c1594fb29a670ad4acd16987cdd746eec8b21e099ae47a986"} Oct 08 07:22:00 crc kubenswrapper[4958]: I1008 07:22:00.572208 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2zcpx" podStartSLOduration=3.096342479 podStartE2EDuration="5.572174703s" podCreationTimestamp="2025-10-08 07:21:55 +0000 UTC" firstStartedPulling="2025-10-08 07:21:57.514169583 +0000 UTC m=+2860.643862214" lastFinishedPulling="2025-10-08 07:21:59.990001807 +0000 UTC m=+2863.119694438" observedRunningTime="2025-10-08 07:22:00.567755443 +0000 UTC m=+2863.697448084" watchObservedRunningTime="2025-10-08 07:22:00.572174703 +0000 UTC m=+2863.701867384" Oct 08 07:22:06 crc kubenswrapper[4958]: I1008 07:22:06.197411 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:22:06 crc kubenswrapper[4958]: I1008 07:22:06.198057 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:22:06 crc kubenswrapper[4958]: I1008 07:22:06.280574 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:22:06 crc kubenswrapper[4958]: I1008 07:22:06.678613 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:22:06 crc kubenswrapper[4958]: I1008 07:22:06.745557 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zcpx"] Oct 08 07:22:06 crc kubenswrapper[4958]: I1008 07:22:06.845509 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:22:06 crc kubenswrapper[4958]: I1008 07:22:06.845602 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:22:08 crc kubenswrapper[4958]: I1008 07:22:08.625181 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2zcpx" podUID="4d9c9501-682f-46fa-bdfd-75687909c299" containerName="registry-server" containerID="cri-o://e062ab4b54a3e99c1594fb29a670ad4acd16987cdd746eec8b21e099ae47a986" gracePeriod=2 Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.137160 4958 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.211213 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wghb\" (UniqueName: \"kubernetes.io/projected/4d9c9501-682f-46fa-bdfd-75687909c299-kube-api-access-2wghb\") pod \"4d9c9501-682f-46fa-bdfd-75687909c299\" (UID: \"4d9c9501-682f-46fa-bdfd-75687909c299\") " Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.211311 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9c9501-682f-46fa-bdfd-75687909c299-utilities\") pod \"4d9c9501-682f-46fa-bdfd-75687909c299\" (UID: \"4d9c9501-682f-46fa-bdfd-75687909c299\") " Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.211376 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9c9501-682f-46fa-bdfd-75687909c299-catalog-content\") pod \"4d9c9501-682f-46fa-bdfd-75687909c299\" (UID: \"4d9c9501-682f-46fa-bdfd-75687909c299\") " Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.213567 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d9c9501-682f-46fa-bdfd-75687909c299-utilities" (OuterVolumeSpecName: "utilities") pod "4d9c9501-682f-46fa-bdfd-75687909c299" (UID: "4d9c9501-682f-46fa-bdfd-75687909c299"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.219529 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9c9501-682f-46fa-bdfd-75687909c299-kube-api-access-2wghb" (OuterVolumeSpecName: "kube-api-access-2wghb") pod "4d9c9501-682f-46fa-bdfd-75687909c299" (UID: "4d9c9501-682f-46fa-bdfd-75687909c299"). 
InnerVolumeSpecName "kube-api-access-2wghb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.233543 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d9c9501-682f-46fa-bdfd-75687909c299-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d9c9501-682f-46fa-bdfd-75687909c299" (UID: "4d9c9501-682f-46fa-bdfd-75687909c299"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.313774 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wghb\" (UniqueName: \"kubernetes.io/projected/4d9c9501-682f-46fa-bdfd-75687909c299-kube-api-access-2wghb\") on node \"crc\" DevicePath \"\"" Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.313811 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9c9501-682f-46fa-bdfd-75687909c299-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.313821 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9c9501-682f-46fa-bdfd-75687909c299-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.640376 4958 generic.go:334] "Generic (PLEG): container finished" podID="4d9c9501-682f-46fa-bdfd-75687909c299" containerID="e062ab4b54a3e99c1594fb29a670ad4acd16987cdd746eec8b21e099ae47a986" exitCode=0 Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.640448 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zcpx" event={"ID":"4d9c9501-682f-46fa-bdfd-75687909c299","Type":"ContainerDied","Data":"e062ab4b54a3e99c1594fb29a670ad4acd16987cdd746eec8b21e099ae47a986"} Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.641167 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zcpx" event={"ID":"4d9c9501-682f-46fa-bdfd-75687909c299","Type":"ContainerDied","Data":"5d5a91e9a8c791b0ba47bb151f2ca609a7939b5f3bdabdde47c13537dcfc7d71"} Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.641209 4958 scope.go:117] "RemoveContainer" containerID="e062ab4b54a3e99c1594fb29a670ad4acd16987cdd746eec8b21e099ae47a986" Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.640480 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zcpx" Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.679644 4958 scope.go:117] "RemoveContainer" containerID="79523f69a1a42a093b3666780cb87dc0e420c1dc4539a37256b53ac1041ddbcf" Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.714456 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zcpx"] Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.714561 4958 scope.go:117] "RemoveContainer" containerID="4aebca741c9e65ae0a86c32cc72d263cb6fdd0ebe940c35a558c3fe5d5724dd1" Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.731090 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zcpx"] Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.747282 4958 scope.go:117] "RemoveContainer" containerID="e062ab4b54a3e99c1594fb29a670ad4acd16987cdd746eec8b21e099ae47a986" Oct 08 07:22:09 crc kubenswrapper[4958]: E1008 07:22:09.747831 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e062ab4b54a3e99c1594fb29a670ad4acd16987cdd746eec8b21e099ae47a986\": container with ID starting with e062ab4b54a3e99c1594fb29a670ad4acd16987cdd746eec8b21e099ae47a986 not found: ID does not exist" containerID="e062ab4b54a3e99c1594fb29a670ad4acd16987cdd746eec8b21e099ae47a986" Oct 08 07:22:09 crc 
kubenswrapper[4958]: I1008 07:22:09.747877 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e062ab4b54a3e99c1594fb29a670ad4acd16987cdd746eec8b21e099ae47a986"} err="failed to get container status \"e062ab4b54a3e99c1594fb29a670ad4acd16987cdd746eec8b21e099ae47a986\": rpc error: code = NotFound desc = could not find container \"e062ab4b54a3e99c1594fb29a670ad4acd16987cdd746eec8b21e099ae47a986\": container with ID starting with e062ab4b54a3e99c1594fb29a670ad4acd16987cdd746eec8b21e099ae47a986 not found: ID does not exist" Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.747910 4958 scope.go:117] "RemoveContainer" containerID="79523f69a1a42a093b3666780cb87dc0e420c1dc4539a37256b53ac1041ddbcf" Oct 08 07:22:09 crc kubenswrapper[4958]: E1008 07:22:09.748300 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79523f69a1a42a093b3666780cb87dc0e420c1dc4539a37256b53ac1041ddbcf\": container with ID starting with 79523f69a1a42a093b3666780cb87dc0e420c1dc4539a37256b53ac1041ddbcf not found: ID does not exist" containerID="79523f69a1a42a093b3666780cb87dc0e420c1dc4539a37256b53ac1041ddbcf" Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.748351 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79523f69a1a42a093b3666780cb87dc0e420c1dc4539a37256b53ac1041ddbcf"} err="failed to get container status \"79523f69a1a42a093b3666780cb87dc0e420c1dc4539a37256b53ac1041ddbcf\": rpc error: code = NotFound desc = could not find container \"79523f69a1a42a093b3666780cb87dc0e420c1dc4539a37256b53ac1041ddbcf\": container with ID starting with 79523f69a1a42a093b3666780cb87dc0e420c1dc4539a37256b53ac1041ddbcf not found: ID does not exist" Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.748387 4958 scope.go:117] "RemoveContainer" containerID="4aebca741c9e65ae0a86c32cc72d263cb6fdd0ebe940c35a558c3fe5d5724dd1" Oct 08 
07:22:09 crc kubenswrapper[4958]: E1008 07:22:09.748662 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aebca741c9e65ae0a86c32cc72d263cb6fdd0ebe940c35a558c3fe5d5724dd1\": container with ID starting with 4aebca741c9e65ae0a86c32cc72d263cb6fdd0ebe940c35a558c3fe5d5724dd1 not found: ID does not exist" containerID="4aebca741c9e65ae0a86c32cc72d263cb6fdd0ebe940c35a558c3fe5d5724dd1" Oct 08 07:22:09 crc kubenswrapper[4958]: I1008 07:22:09.748689 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aebca741c9e65ae0a86c32cc72d263cb6fdd0ebe940c35a558c3fe5d5724dd1"} err="failed to get container status \"4aebca741c9e65ae0a86c32cc72d263cb6fdd0ebe940c35a558c3fe5d5724dd1\": rpc error: code = NotFound desc = could not find container \"4aebca741c9e65ae0a86c32cc72d263cb6fdd0ebe940c35a558c3fe5d5724dd1\": container with ID starting with 4aebca741c9e65ae0a86c32cc72d263cb6fdd0ebe940c35a558c3fe5d5724dd1 not found: ID does not exist" Oct 08 07:22:11 crc kubenswrapper[4958]: I1008 07:22:11.594465 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d9c9501-682f-46fa-bdfd-75687909c299" path="/var/lib/kubelet/pods/4d9c9501-682f-46fa-bdfd-75687909c299/volumes" Oct 08 07:22:36 crc kubenswrapper[4958]: I1008 07:22:36.844636 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:22:36 crc kubenswrapper[4958]: I1008 07:22:36.845284 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:23:06 crc kubenswrapper[4958]: I1008 07:23:06.845413 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:23:06 crc kubenswrapper[4958]: I1008 07:23:06.845864 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:23:06 crc kubenswrapper[4958]: I1008 07:23:06.845906 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 07:23:06 crc kubenswrapper[4958]: I1008 07:23:06.846433 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b788309df178ef4881230b7210a3b31b59c893e8712de36c28a315be104affab"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 07:23:06 crc kubenswrapper[4958]: I1008 07:23:06.846488 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://b788309df178ef4881230b7210a3b31b59c893e8712de36c28a315be104affab" gracePeriod=600 Oct 08 07:23:07 crc kubenswrapper[4958]: I1008 07:23:07.201142 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" 
containerID="b788309df178ef4881230b7210a3b31b59c893e8712de36c28a315be104affab" exitCode=0 Oct 08 07:23:07 crc kubenswrapper[4958]: I1008 07:23:07.201223 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"b788309df178ef4881230b7210a3b31b59c893e8712de36c28a315be104affab"} Oct 08 07:23:07 crc kubenswrapper[4958]: I1008 07:23:07.201555 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e"} Oct 08 07:23:07 crc kubenswrapper[4958]: I1008 07:23:07.201585 4958 scope.go:117] "RemoveContainer" containerID="732d7d4822d9d40e5727c631660abb36bf31a989f96467c753d25fc19e358575" Oct 08 07:25:36 crc kubenswrapper[4958]: I1008 07:25:36.845060 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:25:36 crc kubenswrapper[4958]: I1008 07:25:36.845655 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.297766 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2f5d7"] Oct 08 07:25:38 crc kubenswrapper[4958]: E1008 07:25:38.298344 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4d9c9501-682f-46fa-bdfd-75687909c299" containerName="extract-content" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.298374 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9c9501-682f-46fa-bdfd-75687909c299" containerName="extract-content" Oct 08 07:25:38 crc kubenswrapper[4958]: E1008 07:25:38.298455 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9c9501-682f-46fa-bdfd-75687909c299" containerName="extract-utilities" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.298474 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9c9501-682f-46fa-bdfd-75687909c299" containerName="extract-utilities" Oct 08 07:25:38 crc kubenswrapper[4958]: E1008 07:25:38.298498 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9c9501-682f-46fa-bdfd-75687909c299" containerName="registry-server" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.298515 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9c9501-682f-46fa-bdfd-75687909c299" containerName="registry-server" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.298892 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9c9501-682f-46fa-bdfd-75687909c299" containerName="registry-server" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.301125 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.308934 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2f5d7"] Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.447906 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7z95\" (UniqueName: \"kubernetes.io/projected/20b61bc3-9b1d-4319-9544-9d1b39cddaad-kube-api-access-j7z95\") pod \"community-operators-2f5d7\" (UID: \"20b61bc3-9b1d-4319-9544-9d1b39cddaad\") " pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.448199 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20b61bc3-9b1d-4319-9544-9d1b39cddaad-catalog-content\") pod \"community-operators-2f5d7\" (UID: \"20b61bc3-9b1d-4319-9544-9d1b39cddaad\") " pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.448308 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20b61bc3-9b1d-4319-9544-9d1b39cddaad-utilities\") pod \"community-operators-2f5d7\" (UID: \"20b61bc3-9b1d-4319-9544-9d1b39cddaad\") " pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.549774 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7z95\" (UniqueName: \"kubernetes.io/projected/20b61bc3-9b1d-4319-9544-9d1b39cddaad-kube-api-access-j7z95\") pod \"community-operators-2f5d7\" (UID: \"20b61bc3-9b1d-4319-9544-9d1b39cddaad\") " pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.549879 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20b61bc3-9b1d-4319-9544-9d1b39cddaad-catalog-content\") pod \"community-operators-2f5d7\" (UID: \"20b61bc3-9b1d-4319-9544-9d1b39cddaad\") " pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.549910 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20b61bc3-9b1d-4319-9544-9d1b39cddaad-utilities\") pod \"community-operators-2f5d7\" (UID: \"20b61bc3-9b1d-4319-9544-9d1b39cddaad\") " pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.550616 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20b61bc3-9b1d-4319-9544-9d1b39cddaad-utilities\") pod \"community-operators-2f5d7\" (UID: \"20b61bc3-9b1d-4319-9544-9d1b39cddaad\") " pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.550742 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20b61bc3-9b1d-4319-9544-9d1b39cddaad-catalog-content\") pod \"community-operators-2f5d7\" (UID: \"20b61bc3-9b1d-4319-9544-9d1b39cddaad\") " pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.572488 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7z95\" (UniqueName: \"kubernetes.io/projected/20b61bc3-9b1d-4319-9544-9d1b39cddaad-kube-api-access-j7z95\") pod \"community-operators-2f5d7\" (UID: \"20b61bc3-9b1d-4319-9544-9d1b39cddaad\") " pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.647689 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:38 crc kubenswrapper[4958]: I1008 07:25:38.958687 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2f5d7"] Oct 08 07:25:39 crc kubenswrapper[4958]: I1008 07:25:39.648463 4958 generic.go:334] "Generic (PLEG): container finished" podID="20b61bc3-9b1d-4319-9544-9d1b39cddaad" containerID="2a4e8472454203dfbf3f00d42593dc3e13eec9a8b25cd6b7a201ed022db78002" exitCode=0 Oct 08 07:25:39 crc kubenswrapper[4958]: I1008 07:25:39.648543 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f5d7" event={"ID":"20b61bc3-9b1d-4319-9544-9d1b39cddaad","Type":"ContainerDied","Data":"2a4e8472454203dfbf3f00d42593dc3e13eec9a8b25cd6b7a201ed022db78002"} Oct 08 07:25:39 crc kubenswrapper[4958]: I1008 07:25:39.649041 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f5d7" event={"ID":"20b61bc3-9b1d-4319-9544-9d1b39cddaad","Type":"ContainerStarted","Data":"c5c7ca9c0269301fafd9a0352eddcdcc905759323047aea1a545e681a868c1e0"} Oct 08 07:25:41 crc kubenswrapper[4958]: I1008 07:25:41.666174 4958 generic.go:334] "Generic (PLEG): container finished" podID="20b61bc3-9b1d-4319-9544-9d1b39cddaad" containerID="6eee3f57e355c968564a939f01c1734965723c6685a42f3219d3b3c7c3ec67f4" exitCode=0 Oct 08 07:25:41 crc kubenswrapper[4958]: I1008 07:25:41.666342 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f5d7" event={"ID":"20b61bc3-9b1d-4319-9544-9d1b39cddaad","Type":"ContainerDied","Data":"6eee3f57e355c968564a939f01c1734965723c6685a42f3219d3b3c7c3ec67f4"} Oct 08 07:25:42 crc kubenswrapper[4958]: I1008 07:25:42.677452 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f5d7" 
event={"ID":"20b61bc3-9b1d-4319-9544-9d1b39cddaad","Type":"ContainerStarted","Data":"9026baef3974fc1ffcc542da13534071022d5b0083de1048c29a25d3dc27a58a"} Oct 08 07:25:42 crc kubenswrapper[4958]: I1008 07:25:42.710107 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2f5d7" podStartSLOduration=2.097049891 podStartE2EDuration="4.710070378s" podCreationTimestamp="2025-10-08 07:25:38 +0000 UTC" firstStartedPulling="2025-10-08 07:25:39.650006283 +0000 UTC m=+3082.779698894" lastFinishedPulling="2025-10-08 07:25:42.26302677 +0000 UTC m=+3085.392719381" observedRunningTime="2025-10-08 07:25:42.704706734 +0000 UTC m=+3085.834399415" watchObservedRunningTime="2025-10-08 07:25:42.710070378 +0000 UTC m=+3085.839763019" Oct 08 07:25:48 crc kubenswrapper[4958]: I1008 07:25:48.647991 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:48 crc kubenswrapper[4958]: I1008 07:25:48.648822 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:48 crc kubenswrapper[4958]: I1008 07:25:48.714620 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:48 crc kubenswrapper[4958]: I1008 07:25:48.805998 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:48 crc kubenswrapper[4958]: I1008 07:25:48.966588 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2f5d7"] Oct 08 07:25:50 crc kubenswrapper[4958]: I1008 07:25:50.761560 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2f5d7" podUID="20b61bc3-9b1d-4319-9544-9d1b39cddaad" containerName="registry-server" 
containerID="cri-o://9026baef3974fc1ffcc542da13534071022d5b0083de1048c29a25d3dc27a58a" gracePeriod=2 Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.246238 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.370970 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20b61bc3-9b1d-4319-9544-9d1b39cddaad-catalog-content\") pod \"20b61bc3-9b1d-4319-9544-9d1b39cddaad\" (UID: \"20b61bc3-9b1d-4319-9544-9d1b39cddaad\") " Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.371057 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7z95\" (UniqueName: \"kubernetes.io/projected/20b61bc3-9b1d-4319-9544-9d1b39cddaad-kube-api-access-j7z95\") pod \"20b61bc3-9b1d-4319-9544-9d1b39cddaad\" (UID: \"20b61bc3-9b1d-4319-9544-9d1b39cddaad\") " Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.371157 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20b61bc3-9b1d-4319-9544-9d1b39cddaad-utilities\") pod \"20b61bc3-9b1d-4319-9544-9d1b39cddaad\" (UID: \"20b61bc3-9b1d-4319-9544-9d1b39cddaad\") " Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.372264 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20b61bc3-9b1d-4319-9544-9d1b39cddaad-utilities" (OuterVolumeSpecName: "utilities") pod "20b61bc3-9b1d-4319-9544-9d1b39cddaad" (UID: "20b61bc3-9b1d-4319-9544-9d1b39cddaad"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.381663 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b61bc3-9b1d-4319-9544-9d1b39cddaad-kube-api-access-j7z95" (OuterVolumeSpecName: "kube-api-access-j7z95") pod "20b61bc3-9b1d-4319-9544-9d1b39cddaad" (UID: "20b61bc3-9b1d-4319-9544-9d1b39cddaad"). InnerVolumeSpecName "kube-api-access-j7z95". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.465878 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20b61bc3-9b1d-4319-9544-9d1b39cddaad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20b61bc3-9b1d-4319-9544-9d1b39cddaad" (UID: "20b61bc3-9b1d-4319-9544-9d1b39cddaad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.473720 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20b61bc3-9b1d-4319-9544-9d1b39cddaad-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.473774 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20b61bc3-9b1d-4319-9544-9d1b39cddaad-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.473801 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7z95\" (UniqueName: \"kubernetes.io/projected/20b61bc3-9b1d-4319-9544-9d1b39cddaad-kube-api-access-j7z95\") on node \"crc\" DevicePath \"\"" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.786980 4958 generic.go:334] "Generic (PLEG): container finished" podID="20b61bc3-9b1d-4319-9544-9d1b39cddaad" 
containerID="9026baef3974fc1ffcc542da13534071022d5b0083de1048c29a25d3dc27a58a" exitCode=0 Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.787042 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f5d7" event={"ID":"20b61bc3-9b1d-4319-9544-9d1b39cddaad","Type":"ContainerDied","Data":"9026baef3974fc1ffcc542da13534071022d5b0083de1048c29a25d3dc27a58a"} Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.787087 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f5d7" event={"ID":"20b61bc3-9b1d-4319-9544-9d1b39cddaad","Type":"ContainerDied","Data":"c5c7ca9c0269301fafd9a0352eddcdcc905759323047aea1a545e681a868c1e0"} Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.787113 4958 scope.go:117] "RemoveContainer" containerID="9026baef3974fc1ffcc542da13534071022d5b0083de1048c29a25d3dc27a58a" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.787133 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2f5d7" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.823909 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2f5d7"] Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.825807 4958 scope.go:117] "RemoveContainer" containerID="6eee3f57e355c968564a939f01c1734965723c6685a42f3219d3b3c7c3ec67f4" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.834735 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2f5d7"] Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.854869 4958 scope.go:117] "RemoveContainer" containerID="2a4e8472454203dfbf3f00d42593dc3e13eec9a8b25cd6b7a201ed022db78002" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.901265 4958 scope.go:117] "RemoveContainer" containerID="9026baef3974fc1ffcc542da13534071022d5b0083de1048c29a25d3dc27a58a" Oct 08 07:25:51 crc kubenswrapper[4958]: E1008 07:25:51.902169 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9026baef3974fc1ffcc542da13534071022d5b0083de1048c29a25d3dc27a58a\": container with ID starting with 9026baef3974fc1ffcc542da13534071022d5b0083de1048c29a25d3dc27a58a not found: ID does not exist" containerID="9026baef3974fc1ffcc542da13534071022d5b0083de1048c29a25d3dc27a58a" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.902204 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9026baef3974fc1ffcc542da13534071022d5b0083de1048c29a25d3dc27a58a"} err="failed to get container status \"9026baef3974fc1ffcc542da13534071022d5b0083de1048c29a25d3dc27a58a\": rpc error: code = NotFound desc = could not find container \"9026baef3974fc1ffcc542da13534071022d5b0083de1048c29a25d3dc27a58a\": container with ID starting with 9026baef3974fc1ffcc542da13534071022d5b0083de1048c29a25d3dc27a58a not 
found: ID does not exist" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.902231 4958 scope.go:117] "RemoveContainer" containerID="6eee3f57e355c968564a939f01c1734965723c6685a42f3219d3b3c7c3ec67f4" Oct 08 07:25:51 crc kubenswrapper[4958]: E1008 07:25:51.902706 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eee3f57e355c968564a939f01c1734965723c6685a42f3219d3b3c7c3ec67f4\": container with ID starting with 6eee3f57e355c968564a939f01c1734965723c6685a42f3219d3b3c7c3ec67f4 not found: ID does not exist" containerID="6eee3f57e355c968564a939f01c1734965723c6685a42f3219d3b3c7c3ec67f4" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.902770 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eee3f57e355c968564a939f01c1734965723c6685a42f3219d3b3c7c3ec67f4"} err="failed to get container status \"6eee3f57e355c968564a939f01c1734965723c6685a42f3219d3b3c7c3ec67f4\": rpc error: code = NotFound desc = could not find container \"6eee3f57e355c968564a939f01c1734965723c6685a42f3219d3b3c7c3ec67f4\": container with ID starting with 6eee3f57e355c968564a939f01c1734965723c6685a42f3219d3b3c7c3ec67f4 not found: ID does not exist" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.902813 4958 scope.go:117] "RemoveContainer" containerID="2a4e8472454203dfbf3f00d42593dc3e13eec9a8b25cd6b7a201ed022db78002" Oct 08 07:25:51 crc kubenswrapper[4958]: E1008 07:25:51.903389 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a4e8472454203dfbf3f00d42593dc3e13eec9a8b25cd6b7a201ed022db78002\": container with ID starting with 2a4e8472454203dfbf3f00d42593dc3e13eec9a8b25cd6b7a201ed022db78002 not found: ID does not exist" containerID="2a4e8472454203dfbf3f00d42593dc3e13eec9a8b25cd6b7a201ed022db78002" Oct 08 07:25:51 crc kubenswrapper[4958]: I1008 07:25:51.903423 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a4e8472454203dfbf3f00d42593dc3e13eec9a8b25cd6b7a201ed022db78002"} err="failed to get container status \"2a4e8472454203dfbf3f00d42593dc3e13eec9a8b25cd6b7a201ed022db78002\": rpc error: code = NotFound desc = could not find container \"2a4e8472454203dfbf3f00d42593dc3e13eec9a8b25cd6b7a201ed022db78002\": container with ID starting with 2a4e8472454203dfbf3f00d42593dc3e13eec9a8b25cd6b7a201ed022db78002 not found: ID does not exist" Oct 08 07:25:53 crc kubenswrapper[4958]: I1008 07:25:53.593279 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b61bc3-9b1d-4319-9544-9d1b39cddaad" path="/var/lib/kubelet/pods/20b61bc3-9b1d-4319-9544-9d1b39cddaad/volumes" Oct 08 07:26:06 crc kubenswrapper[4958]: I1008 07:26:06.845061 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:26:06 crc kubenswrapper[4958]: I1008 07:26:06.845802 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:26:36 crc kubenswrapper[4958]: I1008 07:26:36.845695 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:26:36 crc kubenswrapper[4958]: I1008 07:26:36.847197 4958 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:26:36 crc kubenswrapper[4958]: I1008 07:26:36.847274 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 07:26:36 crc kubenswrapper[4958]: I1008 07:26:36.848103 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 07:26:36 crc kubenswrapper[4958]: I1008 07:26:36.848208 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" gracePeriod=600 Oct 08 07:26:36 crc kubenswrapper[4958]: E1008 07:26:36.980632 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:26:37 crc kubenswrapper[4958]: I1008 07:26:37.228462 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" 
containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" exitCode=0 Oct 08 07:26:37 crc kubenswrapper[4958]: I1008 07:26:37.228525 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e"} Oct 08 07:26:37 crc kubenswrapper[4958]: I1008 07:26:37.228575 4958 scope.go:117] "RemoveContainer" containerID="b788309df178ef4881230b7210a3b31b59c893e8712de36c28a315be104affab" Oct 08 07:26:37 crc kubenswrapper[4958]: I1008 07:26:37.229322 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:26:37 crc kubenswrapper[4958]: E1008 07:26:37.231122 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:26:49 crc kubenswrapper[4958]: I1008 07:26:49.576744 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:26:49 crc kubenswrapper[4958]: E1008 07:26:49.577411 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:27:01 crc kubenswrapper[4958]: I1008 
07:27:01.577623 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:27:01 crc kubenswrapper[4958]: E1008 07:27:01.578778 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:27:12 crc kubenswrapper[4958]: I1008 07:27:12.576870 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:27:12 crc kubenswrapper[4958]: E1008 07:27:12.577625 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.061373 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rvng4"] Oct 08 07:27:14 crc kubenswrapper[4958]: E1008 07:27:14.061795 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b61bc3-9b1d-4319-9544-9d1b39cddaad" containerName="extract-utilities" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.061813 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b61bc3-9b1d-4319-9544-9d1b39cddaad" containerName="extract-utilities" Oct 08 07:27:14 crc kubenswrapper[4958]: E1008 07:27:14.061876 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="20b61bc3-9b1d-4319-9544-9d1b39cddaad" containerName="extract-content" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.061889 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b61bc3-9b1d-4319-9544-9d1b39cddaad" containerName="extract-content" Oct 08 07:27:14 crc kubenswrapper[4958]: E1008 07:27:14.061910 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b61bc3-9b1d-4319-9544-9d1b39cddaad" containerName="registry-server" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.061922 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b61bc3-9b1d-4319-9544-9d1b39cddaad" containerName="registry-server" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.062230 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b61bc3-9b1d-4319-9544-9d1b39cddaad" containerName="registry-server" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.064132 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.077531 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvng4"] Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.178133 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-utilities\") pod \"redhat-operators-rvng4\" (UID: \"62507d7c-a2df-44e3-85e0-ccbbf4256c8c\") " pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.178203 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-catalog-content\") pod \"redhat-operators-rvng4\" (UID: \"62507d7c-a2df-44e3-85e0-ccbbf4256c8c\") " 
pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.178523 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf8n6\" (UniqueName: \"kubernetes.io/projected/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-kube-api-access-mf8n6\") pod \"redhat-operators-rvng4\" (UID: \"62507d7c-a2df-44e3-85e0-ccbbf4256c8c\") " pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.280143 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf8n6\" (UniqueName: \"kubernetes.io/projected/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-kube-api-access-mf8n6\") pod \"redhat-operators-rvng4\" (UID: \"62507d7c-a2df-44e3-85e0-ccbbf4256c8c\") " pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.280235 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-utilities\") pod \"redhat-operators-rvng4\" (UID: \"62507d7c-a2df-44e3-85e0-ccbbf4256c8c\") " pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.280283 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-catalog-content\") pod \"redhat-operators-rvng4\" (UID: \"62507d7c-a2df-44e3-85e0-ccbbf4256c8c\") " pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.280927 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-utilities\") pod \"redhat-operators-rvng4\" (UID: \"62507d7c-a2df-44e3-85e0-ccbbf4256c8c\") " 
pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.281012 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-catalog-content\") pod \"redhat-operators-rvng4\" (UID: \"62507d7c-a2df-44e3-85e0-ccbbf4256c8c\") " pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.301623 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf8n6\" (UniqueName: \"kubernetes.io/projected/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-kube-api-access-mf8n6\") pod \"redhat-operators-rvng4\" (UID: \"62507d7c-a2df-44e3-85e0-ccbbf4256c8c\") " pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.402869 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:14 crc kubenswrapper[4958]: I1008 07:27:14.840029 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvng4"] Oct 08 07:27:14 crc kubenswrapper[4958]: W1008 07:27:14.855144 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62507d7c_a2df_44e3_85e0_ccbbf4256c8c.slice/crio-ee9d119d3ca90df8ad7147e87ac751615bb0da7de6f4741c9d1e69a24889dc75 WatchSource:0}: Error finding container ee9d119d3ca90df8ad7147e87ac751615bb0da7de6f4741c9d1e69a24889dc75: Status 404 returned error can't find the container with id ee9d119d3ca90df8ad7147e87ac751615bb0da7de6f4741c9d1e69a24889dc75 Oct 08 07:27:15 crc kubenswrapper[4958]: I1008 07:27:15.580243 4958 generic.go:334] "Generic (PLEG): container finished" podID="62507d7c-a2df-44e3-85e0-ccbbf4256c8c" containerID="e5aff74ab70512bf9a29aa167af8e511d6cc560026842623c550426b285db5bc" exitCode=0 Oct 08 07:27:15 
crc kubenswrapper[4958]: I1008 07:27:15.581926 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 07:27:15 crc kubenswrapper[4958]: I1008 07:27:15.590699 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvng4" event={"ID":"62507d7c-a2df-44e3-85e0-ccbbf4256c8c","Type":"ContainerDied","Data":"e5aff74ab70512bf9a29aa167af8e511d6cc560026842623c550426b285db5bc"} Oct 08 07:27:15 crc kubenswrapper[4958]: I1008 07:27:15.590738 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvng4" event={"ID":"62507d7c-a2df-44e3-85e0-ccbbf4256c8c","Type":"ContainerStarted","Data":"ee9d119d3ca90df8ad7147e87ac751615bb0da7de6f4741c9d1e69a24889dc75"} Oct 08 07:27:17 crc kubenswrapper[4958]: I1008 07:27:17.602863 4958 generic.go:334] "Generic (PLEG): container finished" podID="62507d7c-a2df-44e3-85e0-ccbbf4256c8c" containerID="dca7866fc493b2b26a980a7cc04d294c2c204687f84e3c7330467fa044174cfb" exitCode=0 Oct 08 07:27:17 crc kubenswrapper[4958]: I1008 07:27:17.603092 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvng4" event={"ID":"62507d7c-a2df-44e3-85e0-ccbbf4256c8c","Type":"ContainerDied","Data":"dca7866fc493b2b26a980a7cc04d294c2c204687f84e3c7330467fa044174cfb"} Oct 08 07:27:18 crc kubenswrapper[4958]: I1008 07:27:18.617804 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvng4" event={"ID":"62507d7c-a2df-44e3-85e0-ccbbf4256c8c","Type":"ContainerStarted","Data":"d6f04a5460e6676afe59daee938a3b300b17a1da8bf5bbe9bd67aef3fd835b39"} Oct 08 07:27:18 crc kubenswrapper[4958]: I1008 07:27:18.648439 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rvng4" podStartSLOduration=2.231047401 podStartE2EDuration="4.648411547s" podCreationTimestamp="2025-10-08 07:27:14 +0000 UTC" 
firstStartedPulling="2025-10-08 07:27:15.581732493 +0000 UTC m=+3178.711425094" lastFinishedPulling="2025-10-08 07:27:17.999096599 +0000 UTC m=+3181.128789240" observedRunningTime="2025-10-08 07:27:18.648092299 +0000 UTC m=+3181.777784940" watchObservedRunningTime="2025-10-08 07:27:18.648411547 +0000 UTC m=+3181.778104168" Oct 08 07:27:24 crc kubenswrapper[4958]: I1008 07:27:24.403371 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:24 crc kubenswrapper[4958]: I1008 07:27:24.404065 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:24 crc kubenswrapper[4958]: I1008 07:27:24.465103 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:24 crc kubenswrapper[4958]: I1008 07:27:24.769858 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:25 crc kubenswrapper[4958]: I1008 07:27:25.442202 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvng4"] Oct 08 07:27:25 crc kubenswrapper[4958]: I1008 07:27:25.577070 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:27:25 crc kubenswrapper[4958]: E1008 07:27:25.577455 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:27:26 crc kubenswrapper[4958]: I1008 07:27:26.716422 4958 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rvng4" podUID="62507d7c-a2df-44e3-85e0-ccbbf4256c8c" containerName="registry-server" containerID="cri-o://d6f04a5460e6676afe59daee938a3b300b17a1da8bf5bbe9bd67aef3fd835b39" gracePeriod=2 Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.159154 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.297807 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-utilities\") pod \"62507d7c-a2df-44e3-85e0-ccbbf4256c8c\" (UID: \"62507d7c-a2df-44e3-85e0-ccbbf4256c8c\") " Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.297920 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-catalog-content\") pod \"62507d7c-a2df-44e3-85e0-ccbbf4256c8c\" (UID: \"62507d7c-a2df-44e3-85e0-ccbbf4256c8c\") " Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.298102 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf8n6\" (UniqueName: \"kubernetes.io/projected/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-kube-api-access-mf8n6\") pod \"62507d7c-a2df-44e3-85e0-ccbbf4256c8c\" (UID: \"62507d7c-a2df-44e3-85e0-ccbbf4256c8c\") " Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.300265 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-utilities" (OuterVolumeSpecName: "utilities") pod "62507d7c-a2df-44e3-85e0-ccbbf4256c8c" (UID: "62507d7c-a2df-44e3-85e0-ccbbf4256c8c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.305047 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-kube-api-access-mf8n6" (OuterVolumeSpecName: "kube-api-access-mf8n6") pod "62507d7c-a2df-44e3-85e0-ccbbf4256c8c" (UID: "62507d7c-a2df-44e3-85e0-ccbbf4256c8c"). InnerVolumeSpecName "kube-api-access-mf8n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.400739 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf8n6\" (UniqueName: \"kubernetes.io/projected/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-kube-api-access-mf8n6\") on node \"crc\" DevicePath \"\"" Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.400798 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.735590 4958 generic.go:334] "Generic (PLEG): container finished" podID="62507d7c-a2df-44e3-85e0-ccbbf4256c8c" containerID="d6f04a5460e6676afe59daee938a3b300b17a1da8bf5bbe9bd67aef3fd835b39" exitCode=0 Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.735688 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvng4" event={"ID":"62507d7c-a2df-44e3-85e0-ccbbf4256c8c","Type":"ContainerDied","Data":"d6f04a5460e6676afe59daee938a3b300b17a1da8bf5bbe9bd67aef3fd835b39"} Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.736075 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvng4" event={"ID":"62507d7c-a2df-44e3-85e0-ccbbf4256c8c","Type":"ContainerDied","Data":"ee9d119d3ca90df8ad7147e87ac751615bb0da7de6f4741c9d1e69a24889dc75"} Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 
07:27:27.736113 4958 scope.go:117] "RemoveContainer" containerID="d6f04a5460e6676afe59daee938a3b300b17a1da8bf5bbe9bd67aef3fd835b39" Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.735718 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvng4" Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.770685 4958 scope.go:117] "RemoveContainer" containerID="dca7866fc493b2b26a980a7cc04d294c2c204687f84e3c7330467fa044174cfb" Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.807392 4958 scope.go:117] "RemoveContainer" containerID="e5aff74ab70512bf9a29aa167af8e511d6cc560026842623c550426b285db5bc" Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.847913 4958 scope.go:117] "RemoveContainer" containerID="d6f04a5460e6676afe59daee938a3b300b17a1da8bf5bbe9bd67aef3fd835b39" Oct 08 07:27:27 crc kubenswrapper[4958]: E1008 07:27:27.848680 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f04a5460e6676afe59daee938a3b300b17a1da8bf5bbe9bd67aef3fd835b39\": container with ID starting with d6f04a5460e6676afe59daee938a3b300b17a1da8bf5bbe9bd67aef3fd835b39 not found: ID does not exist" containerID="d6f04a5460e6676afe59daee938a3b300b17a1da8bf5bbe9bd67aef3fd835b39" Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.848734 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f04a5460e6676afe59daee938a3b300b17a1da8bf5bbe9bd67aef3fd835b39"} err="failed to get container status \"d6f04a5460e6676afe59daee938a3b300b17a1da8bf5bbe9bd67aef3fd835b39\": rpc error: code = NotFound desc = could not find container \"d6f04a5460e6676afe59daee938a3b300b17a1da8bf5bbe9bd67aef3fd835b39\": container with ID starting with d6f04a5460e6676afe59daee938a3b300b17a1da8bf5bbe9bd67aef3fd835b39 not found: ID does not exist" Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.848770 4958 
scope.go:117] "RemoveContainer" containerID="dca7866fc493b2b26a980a7cc04d294c2c204687f84e3c7330467fa044174cfb" Oct 08 07:27:27 crc kubenswrapper[4958]: E1008 07:27:27.849431 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dca7866fc493b2b26a980a7cc04d294c2c204687f84e3c7330467fa044174cfb\": container with ID starting with dca7866fc493b2b26a980a7cc04d294c2c204687f84e3c7330467fa044174cfb not found: ID does not exist" containerID="dca7866fc493b2b26a980a7cc04d294c2c204687f84e3c7330467fa044174cfb" Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.849683 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dca7866fc493b2b26a980a7cc04d294c2c204687f84e3c7330467fa044174cfb"} err="failed to get container status \"dca7866fc493b2b26a980a7cc04d294c2c204687f84e3c7330467fa044174cfb\": rpc error: code = NotFound desc = could not find container \"dca7866fc493b2b26a980a7cc04d294c2c204687f84e3c7330467fa044174cfb\": container with ID starting with dca7866fc493b2b26a980a7cc04d294c2c204687f84e3c7330467fa044174cfb not found: ID does not exist" Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.849904 4958 scope.go:117] "RemoveContainer" containerID="e5aff74ab70512bf9a29aa167af8e511d6cc560026842623c550426b285db5bc" Oct 08 07:27:27 crc kubenswrapper[4958]: E1008 07:27:27.850751 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5aff74ab70512bf9a29aa167af8e511d6cc560026842623c550426b285db5bc\": container with ID starting with e5aff74ab70512bf9a29aa167af8e511d6cc560026842623c550426b285db5bc not found: ID does not exist" containerID="e5aff74ab70512bf9a29aa167af8e511d6cc560026842623c550426b285db5bc" Oct 08 07:27:27 crc kubenswrapper[4958]: I1008 07:27:27.851017 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e5aff74ab70512bf9a29aa167af8e511d6cc560026842623c550426b285db5bc"} err="failed to get container status \"e5aff74ab70512bf9a29aa167af8e511d6cc560026842623c550426b285db5bc\": rpc error: code = NotFound desc = could not find container \"e5aff74ab70512bf9a29aa167af8e511d6cc560026842623c550426b285db5bc\": container with ID starting with e5aff74ab70512bf9a29aa167af8e511d6cc560026842623c550426b285db5bc not found: ID does not exist" Oct 08 07:27:28 crc kubenswrapper[4958]: I1008 07:27:28.304803 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62507d7c-a2df-44e3-85e0-ccbbf4256c8c" (UID: "62507d7c-a2df-44e3-85e0-ccbbf4256c8c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:27:28 crc kubenswrapper[4958]: I1008 07:27:28.315798 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62507d7c-a2df-44e3-85e0-ccbbf4256c8c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:27:28 crc kubenswrapper[4958]: I1008 07:27:28.384206 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvng4"] Oct 08 07:27:28 crc kubenswrapper[4958]: I1008 07:27:28.393338 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rvng4"] Oct 08 07:27:29 crc kubenswrapper[4958]: I1008 07:27:29.589858 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62507d7c-a2df-44e3-85e0-ccbbf4256c8c" path="/var/lib/kubelet/pods/62507d7c-a2df-44e3-85e0-ccbbf4256c8c/volumes" Oct 08 07:27:37 crc kubenswrapper[4958]: I1008 07:27:37.585209 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:27:37 crc kubenswrapper[4958]: E1008 
07:27:37.586485 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:27:48 crc kubenswrapper[4958]: I1008 07:27:48.576748 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:27:48 crc kubenswrapper[4958]: E1008 07:27:48.577912 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:28:03 crc kubenswrapper[4958]: I1008 07:28:03.577567 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:28:03 crc kubenswrapper[4958]: E1008 07:28:03.580342 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:28:14 crc kubenswrapper[4958]: I1008 07:28:14.576681 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:28:14 crc 
kubenswrapper[4958]: E1008 07:28:14.577404 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:28:26 crc kubenswrapper[4958]: I1008 07:28:26.577332 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:28:26 crc kubenswrapper[4958]: E1008 07:28:26.578008 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:28:37 crc kubenswrapper[4958]: I1008 07:28:37.583663 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:28:37 crc kubenswrapper[4958]: E1008 07:28:37.584153 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:28:50 crc kubenswrapper[4958]: I1008 07:28:50.576539 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 
08 07:28:50 crc kubenswrapper[4958]: E1008 07:28:50.577418 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:29:05 crc kubenswrapper[4958]: I1008 07:29:05.577227 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:29:05 crc kubenswrapper[4958]: E1008 07:29:05.578131 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:29:20 crc kubenswrapper[4958]: I1008 07:29:20.576727 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:29:20 crc kubenswrapper[4958]: E1008 07:29:20.577471 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:29:34 crc kubenswrapper[4958]: I1008 07:29:34.576030 4958 scope.go:117] "RemoveContainer" 
containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:29:34 crc kubenswrapper[4958]: E1008 07:29:34.576732 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:29:46 crc kubenswrapper[4958]: I1008 07:29:46.577077 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:29:46 crc kubenswrapper[4958]: E1008 07:29:46.578027 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:29:58 crc kubenswrapper[4958]: I1008 07:29:58.576003 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:29:58 crc kubenswrapper[4958]: E1008 07:29:58.577353 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.215172 4958 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg"] Oct 08 07:30:00 crc kubenswrapper[4958]: E1008 07:30:00.216652 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62507d7c-a2df-44e3-85e0-ccbbf4256c8c" containerName="extract-utilities" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.216803 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="62507d7c-a2df-44e3-85e0-ccbbf4256c8c" containerName="extract-utilities" Oct 08 07:30:00 crc kubenswrapper[4958]: E1008 07:30:00.216924 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62507d7c-a2df-44e3-85e0-ccbbf4256c8c" containerName="extract-content" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.217065 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="62507d7c-a2df-44e3-85e0-ccbbf4256c8c" containerName="extract-content" Oct 08 07:30:00 crc kubenswrapper[4958]: E1008 07:30:00.217174 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62507d7c-a2df-44e3-85e0-ccbbf4256c8c" containerName="registry-server" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.217271 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="62507d7c-a2df-44e3-85e0-ccbbf4256c8c" containerName="registry-server" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.217568 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="62507d7c-a2df-44e3-85e0-ccbbf4256c8c" containerName="registry-server" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.218361 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.221360 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.221790 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.235612 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg"] Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.357200 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-secret-volume\") pod \"collect-profiles-29331810-cr5zg\" (UID: \"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.357293 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7hr7\" (UniqueName: \"kubernetes.io/projected/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-kube-api-access-t7hr7\") pod \"collect-profiles-29331810-cr5zg\" (UID: \"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.357401 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-config-volume\") pod \"collect-profiles-29331810-cr5zg\" (UID: \"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.458270 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-config-volume\") pod \"collect-profiles-29331810-cr5zg\" (UID: \"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.458383 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-secret-volume\") pod \"collect-profiles-29331810-cr5zg\" (UID: \"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.458420 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7hr7\" (UniqueName: \"kubernetes.io/projected/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-kube-api-access-t7hr7\") pod \"collect-profiles-29331810-cr5zg\" (UID: \"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.459792 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-config-volume\") pod \"collect-profiles-29331810-cr5zg\" (UID: \"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.469464 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-secret-volume\") pod \"collect-profiles-29331810-cr5zg\" (UID: \"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.478565 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7hr7\" (UniqueName: \"kubernetes.io/projected/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-kube-api-access-t7hr7\") pod \"collect-profiles-29331810-cr5zg\" (UID: \"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.548654 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" Oct 08 07:30:00 crc kubenswrapper[4958]: I1008 07:30:00.992164 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg"] Oct 08 07:30:01 crc kubenswrapper[4958]: W1008 07:30:01.003784 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod211143a2_0ef8_4ee6_8c2d_2e3ac0e9a8f4.slice/crio-d3f8afb3463b8304fff4e5b1156de24529bedea0d9ac45bdac1013f4e3aae69f WatchSource:0}: Error finding container d3f8afb3463b8304fff4e5b1156de24529bedea0d9ac45bdac1013f4e3aae69f: Status 404 returned error can't find the container with id d3f8afb3463b8304fff4e5b1156de24529bedea0d9ac45bdac1013f4e3aae69f Oct 08 07:30:01 crc kubenswrapper[4958]: I1008 07:30:01.190980 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" event={"ID":"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4","Type":"ContainerStarted","Data":"bb6cb9f529082e6f55a10481e7d79b85b96458d014c42fc78a57fe1f24d569b4"} Oct 08 07:30:01 crc 
kubenswrapper[4958]: I1008 07:30:01.191480 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" event={"ID":"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4","Type":"ContainerStarted","Data":"d3f8afb3463b8304fff4e5b1156de24529bedea0d9ac45bdac1013f4e3aae69f"} Oct 08 07:30:01 crc kubenswrapper[4958]: I1008 07:30:01.213088 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" podStartSLOduration=1.213062078 podStartE2EDuration="1.213062078s" podCreationTimestamp="2025-10-08 07:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 07:30:01.21092131 +0000 UTC m=+3344.340614001" watchObservedRunningTime="2025-10-08 07:30:01.213062078 +0000 UTC m=+3344.342754699" Oct 08 07:30:02 crc kubenswrapper[4958]: I1008 07:30:02.203626 4958 generic.go:334] "Generic (PLEG): container finished" podID="211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4" containerID="bb6cb9f529082e6f55a10481e7d79b85b96458d014c42fc78a57fe1f24d569b4" exitCode=0 Oct 08 07:30:02 crc kubenswrapper[4958]: I1008 07:30:02.203677 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" event={"ID":"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4","Type":"ContainerDied","Data":"bb6cb9f529082e6f55a10481e7d79b85b96458d014c42fc78a57fe1f24d569b4"} Oct 08 07:30:03 crc kubenswrapper[4958]: I1008 07:30:03.568067 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" Oct 08 07:30:03 crc kubenswrapper[4958]: I1008 07:30:03.707845 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-config-volume\") pod \"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4\" (UID: \"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4\") " Oct 08 07:30:03 crc kubenswrapper[4958]: I1008 07:30:03.708228 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-secret-volume\") pod \"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4\" (UID: \"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4\") " Oct 08 07:30:03 crc kubenswrapper[4958]: I1008 07:30:03.708304 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7hr7\" (UniqueName: \"kubernetes.io/projected/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-kube-api-access-t7hr7\") pod \"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4\" (UID: \"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4\") " Oct 08 07:30:03 crc kubenswrapper[4958]: I1008 07:30:03.710578 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-config-volume" (OuterVolumeSpecName: "config-volume") pod "211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4" (UID: "211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:30:03 crc kubenswrapper[4958]: I1008 07:30:03.718377 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4" (UID: "211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 07:30:03 crc kubenswrapper[4958]: I1008 07:30:03.719406 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-kube-api-access-t7hr7" (OuterVolumeSpecName: "kube-api-access-t7hr7") pod "211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4" (UID: "211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4"). InnerVolumeSpecName "kube-api-access-t7hr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:30:03 crc kubenswrapper[4958]: I1008 07:30:03.812744 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 07:30:03 crc kubenswrapper[4958]: I1008 07:30:03.812794 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7hr7\" (UniqueName: \"kubernetes.io/projected/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-kube-api-access-t7hr7\") on node \"crc\" DevicePath \"\"" Oct 08 07:30:03 crc kubenswrapper[4958]: I1008 07:30:03.812809 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 07:30:04 crc kubenswrapper[4958]: I1008 07:30:04.224743 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" event={"ID":"211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4","Type":"ContainerDied","Data":"d3f8afb3463b8304fff4e5b1156de24529bedea0d9ac45bdac1013f4e3aae69f"} Oct 08 07:30:04 crc kubenswrapper[4958]: I1008 07:30:04.224810 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3f8afb3463b8304fff4e5b1156de24529bedea0d9ac45bdac1013f4e3aae69f" Oct 08 07:30:04 crc kubenswrapper[4958]: I1008 07:30:04.224823 4958 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg" Oct 08 07:30:04 crc kubenswrapper[4958]: I1008 07:30:04.300367 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9"] Oct 08 07:30:04 crc kubenswrapper[4958]: I1008 07:30:04.306670 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331765-jnlr9"] Oct 08 07:30:05 crc kubenswrapper[4958]: I1008 07:30:05.593929 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46c1e2e3-199c-4b0c-be9a-f47da27865fd" path="/var/lib/kubelet/pods/46c1e2e3-199c-4b0c-be9a-f47da27865fd/volumes" Oct 08 07:30:09 crc kubenswrapper[4958]: I1008 07:30:09.577232 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:30:09 crc kubenswrapper[4958]: E1008 07:30:09.580677 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:30:11 crc kubenswrapper[4958]: I1008 07:30:11.696429 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-twn7j"] Oct 08 07:30:11 crc kubenswrapper[4958]: E1008 07:30:11.697145 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4" containerName="collect-profiles" Oct 08 07:30:11 crc kubenswrapper[4958]: I1008 07:30:11.697173 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4" containerName="collect-profiles" Oct 08 
07:30:11 crc kubenswrapper[4958]: I1008 07:30:11.697458 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4" containerName="collect-profiles" Oct 08 07:30:11 crc kubenswrapper[4958]: I1008 07:30:11.699541 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:11 crc kubenswrapper[4958]: I1008 07:30:11.707234 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-twn7j"] Oct 08 07:30:11 crc kubenswrapper[4958]: I1008 07:30:11.842231 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4128a97e-1492-44cb-a44b-ee8a0008a858-utilities\") pod \"certified-operators-twn7j\" (UID: \"4128a97e-1492-44cb-a44b-ee8a0008a858\") " pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:11 crc kubenswrapper[4958]: I1008 07:30:11.842722 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8vt8\" (UniqueName: \"kubernetes.io/projected/4128a97e-1492-44cb-a44b-ee8a0008a858-kube-api-access-s8vt8\") pod \"certified-operators-twn7j\" (UID: \"4128a97e-1492-44cb-a44b-ee8a0008a858\") " pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:11 crc kubenswrapper[4958]: I1008 07:30:11.842930 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4128a97e-1492-44cb-a44b-ee8a0008a858-catalog-content\") pod \"certified-operators-twn7j\" (UID: \"4128a97e-1492-44cb-a44b-ee8a0008a858\") " pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:11 crc kubenswrapper[4958]: I1008 07:30:11.944642 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4128a97e-1492-44cb-a44b-ee8a0008a858-catalog-content\") pod \"certified-operators-twn7j\" (UID: \"4128a97e-1492-44cb-a44b-ee8a0008a858\") " pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:11 crc kubenswrapper[4958]: I1008 07:30:11.944733 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4128a97e-1492-44cb-a44b-ee8a0008a858-utilities\") pod \"certified-operators-twn7j\" (UID: \"4128a97e-1492-44cb-a44b-ee8a0008a858\") " pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:11 crc kubenswrapper[4958]: I1008 07:30:11.944836 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8vt8\" (UniqueName: \"kubernetes.io/projected/4128a97e-1492-44cb-a44b-ee8a0008a858-kube-api-access-s8vt8\") pod \"certified-operators-twn7j\" (UID: \"4128a97e-1492-44cb-a44b-ee8a0008a858\") " pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:11 crc kubenswrapper[4958]: I1008 07:30:11.945244 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4128a97e-1492-44cb-a44b-ee8a0008a858-catalog-content\") pod \"certified-operators-twn7j\" (UID: \"4128a97e-1492-44cb-a44b-ee8a0008a858\") " pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:11 crc kubenswrapper[4958]: I1008 07:30:11.945746 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4128a97e-1492-44cb-a44b-ee8a0008a858-utilities\") pod \"certified-operators-twn7j\" (UID: \"4128a97e-1492-44cb-a44b-ee8a0008a858\") " pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:11 crc kubenswrapper[4958]: I1008 07:30:11.977037 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8vt8\" (UniqueName: 
\"kubernetes.io/projected/4128a97e-1492-44cb-a44b-ee8a0008a858-kube-api-access-s8vt8\") pod \"certified-operators-twn7j\" (UID: \"4128a97e-1492-44cb-a44b-ee8a0008a858\") " pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:12 crc kubenswrapper[4958]: I1008 07:30:12.068921 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:12 crc kubenswrapper[4958]: I1008 07:30:12.581941 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-twn7j"] Oct 08 07:30:12 crc kubenswrapper[4958]: W1008 07:30:12.597415 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4128a97e_1492_44cb_a44b_ee8a0008a858.slice/crio-dc0012abd472018c90fc4038dec965fffde4b4882789d261e3b8eeaa7dd7dc7a WatchSource:0}: Error finding container dc0012abd472018c90fc4038dec965fffde4b4882789d261e3b8eeaa7dd7dc7a: Status 404 returned error can't find the container with id dc0012abd472018c90fc4038dec965fffde4b4882789d261e3b8eeaa7dd7dc7a Oct 08 07:30:13 crc kubenswrapper[4958]: I1008 07:30:13.315665 4958 generic.go:334] "Generic (PLEG): container finished" podID="4128a97e-1492-44cb-a44b-ee8a0008a858" containerID="c424d0528f4346e27c2ef3bea3c8be7dade9e87b939a51350e0283b2ba0d7d05" exitCode=0 Oct 08 07:30:13 crc kubenswrapper[4958]: I1008 07:30:13.315711 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twn7j" event={"ID":"4128a97e-1492-44cb-a44b-ee8a0008a858","Type":"ContainerDied","Data":"c424d0528f4346e27c2ef3bea3c8be7dade9e87b939a51350e0283b2ba0d7d05"} Oct 08 07:30:13 crc kubenswrapper[4958]: I1008 07:30:13.315741 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twn7j" 
event={"ID":"4128a97e-1492-44cb-a44b-ee8a0008a858","Type":"ContainerStarted","Data":"dc0012abd472018c90fc4038dec965fffde4b4882789d261e3b8eeaa7dd7dc7a"} Oct 08 07:30:14 crc kubenswrapper[4958]: I1008 07:30:14.326682 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twn7j" event={"ID":"4128a97e-1492-44cb-a44b-ee8a0008a858","Type":"ContainerStarted","Data":"efaaaae2f01c62de2b25a6147c51ac3e5c69a514076bfea3e65e7e2e9e373907"} Oct 08 07:30:15 crc kubenswrapper[4958]: I1008 07:30:15.339239 4958 generic.go:334] "Generic (PLEG): container finished" podID="4128a97e-1492-44cb-a44b-ee8a0008a858" containerID="efaaaae2f01c62de2b25a6147c51ac3e5c69a514076bfea3e65e7e2e9e373907" exitCode=0 Oct 08 07:30:15 crc kubenswrapper[4958]: I1008 07:30:15.339356 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twn7j" event={"ID":"4128a97e-1492-44cb-a44b-ee8a0008a858","Type":"ContainerDied","Data":"efaaaae2f01c62de2b25a6147c51ac3e5c69a514076bfea3e65e7e2e9e373907"} Oct 08 07:30:16 crc kubenswrapper[4958]: I1008 07:30:16.351398 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twn7j" event={"ID":"4128a97e-1492-44cb-a44b-ee8a0008a858","Type":"ContainerStarted","Data":"fd77558baa64775dcc3ade0e296bf9cc816d533132d7ae13d32edb813a01b75f"} Oct 08 07:30:16 crc kubenswrapper[4958]: I1008 07:30:16.376827 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-twn7j" podStartSLOduration=2.927990746 podStartE2EDuration="5.376800632s" podCreationTimestamp="2025-10-08 07:30:11 +0000 UTC" firstStartedPulling="2025-10-08 07:30:13.319623003 +0000 UTC m=+3356.449315634" lastFinishedPulling="2025-10-08 07:30:15.768432889 +0000 UTC m=+3358.898125520" observedRunningTime="2025-10-08 07:30:16.37156394 +0000 UTC m=+3359.501256581" watchObservedRunningTime="2025-10-08 07:30:16.376800632 +0000 UTC 
m=+3359.506493263" Oct 08 07:30:22 crc kubenswrapper[4958]: I1008 07:30:22.069096 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:22 crc kubenswrapper[4958]: I1008 07:30:22.069572 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:22 crc kubenswrapper[4958]: I1008 07:30:22.134496 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:22 crc kubenswrapper[4958]: I1008 07:30:22.466641 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:22 crc kubenswrapper[4958]: I1008 07:30:22.520465 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-twn7j"] Oct 08 07:30:23 crc kubenswrapper[4958]: I1008 07:30:23.577460 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:30:23 crc kubenswrapper[4958]: E1008 07:30:23.577841 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:30:24 crc kubenswrapper[4958]: I1008 07:30:24.431276 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-twn7j" podUID="4128a97e-1492-44cb-a44b-ee8a0008a858" containerName="registry-server" containerID="cri-o://fd77558baa64775dcc3ade0e296bf9cc816d533132d7ae13d32edb813a01b75f" gracePeriod=2 Oct 
08 07:30:24 crc kubenswrapper[4958]: I1008 07:30:24.882560 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:24 crc kubenswrapper[4958]: I1008 07:30:24.971076 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8vt8\" (UniqueName: \"kubernetes.io/projected/4128a97e-1492-44cb-a44b-ee8a0008a858-kube-api-access-s8vt8\") pod \"4128a97e-1492-44cb-a44b-ee8a0008a858\" (UID: \"4128a97e-1492-44cb-a44b-ee8a0008a858\") " Oct 08 07:30:24 crc kubenswrapper[4958]: I1008 07:30:24.971184 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4128a97e-1492-44cb-a44b-ee8a0008a858-utilities\") pod \"4128a97e-1492-44cb-a44b-ee8a0008a858\" (UID: \"4128a97e-1492-44cb-a44b-ee8a0008a858\") " Oct 08 07:30:24 crc kubenswrapper[4958]: I1008 07:30:24.971359 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4128a97e-1492-44cb-a44b-ee8a0008a858-catalog-content\") pod \"4128a97e-1492-44cb-a44b-ee8a0008a858\" (UID: \"4128a97e-1492-44cb-a44b-ee8a0008a858\") " Oct 08 07:30:24 crc kubenswrapper[4958]: I1008 07:30:24.972980 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4128a97e-1492-44cb-a44b-ee8a0008a858-utilities" (OuterVolumeSpecName: "utilities") pod "4128a97e-1492-44cb-a44b-ee8a0008a858" (UID: "4128a97e-1492-44cb-a44b-ee8a0008a858"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:30:24 crc kubenswrapper[4958]: I1008 07:30:24.977519 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4128a97e-1492-44cb-a44b-ee8a0008a858-kube-api-access-s8vt8" (OuterVolumeSpecName: "kube-api-access-s8vt8") pod "4128a97e-1492-44cb-a44b-ee8a0008a858" (UID: "4128a97e-1492-44cb-a44b-ee8a0008a858"). InnerVolumeSpecName "kube-api-access-s8vt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.075354 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8vt8\" (UniqueName: \"kubernetes.io/projected/4128a97e-1492-44cb-a44b-ee8a0008a858-kube-api-access-s8vt8\") on node \"crc\" DevicePath \"\"" Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.075561 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4128a97e-1492-44cb-a44b-ee8a0008a858-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.108549 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4128a97e-1492-44cb-a44b-ee8a0008a858-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4128a97e-1492-44cb-a44b-ee8a0008a858" (UID: "4128a97e-1492-44cb-a44b-ee8a0008a858"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.178016 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4128a97e-1492-44cb-a44b-ee8a0008a858-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.441933 4958 generic.go:334] "Generic (PLEG): container finished" podID="4128a97e-1492-44cb-a44b-ee8a0008a858" containerID="fd77558baa64775dcc3ade0e296bf9cc816d533132d7ae13d32edb813a01b75f" exitCode=0 Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.441992 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twn7j" event={"ID":"4128a97e-1492-44cb-a44b-ee8a0008a858","Type":"ContainerDied","Data":"fd77558baa64775dcc3ade0e296bf9cc816d533132d7ae13d32edb813a01b75f"} Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.442020 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twn7j" event={"ID":"4128a97e-1492-44cb-a44b-ee8a0008a858","Type":"ContainerDied","Data":"dc0012abd472018c90fc4038dec965fffde4b4882789d261e3b8eeaa7dd7dc7a"} Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.442039 4958 scope.go:117] "RemoveContainer" containerID="fd77558baa64775dcc3ade0e296bf9cc816d533132d7ae13d32edb813a01b75f" Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.442053 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-twn7j" Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.463989 4958 scope.go:117] "RemoveContainer" containerID="efaaaae2f01c62de2b25a6147c51ac3e5c69a514076bfea3e65e7e2e9e373907" Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.490546 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-twn7j"] Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.492187 4958 scope.go:117] "RemoveContainer" containerID="c424d0528f4346e27c2ef3bea3c8be7dade9e87b939a51350e0283b2ba0d7d05" Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.497612 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-twn7j"] Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.523679 4958 scope.go:117] "RemoveContainer" containerID="fd77558baa64775dcc3ade0e296bf9cc816d533132d7ae13d32edb813a01b75f" Oct 08 07:30:25 crc kubenswrapper[4958]: E1008 07:30:25.524102 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd77558baa64775dcc3ade0e296bf9cc816d533132d7ae13d32edb813a01b75f\": container with ID starting with fd77558baa64775dcc3ade0e296bf9cc816d533132d7ae13d32edb813a01b75f not found: ID does not exist" containerID="fd77558baa64775dcc3ade0e296bf9cc816d533132d7ae13d32edb813a01b75f" Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.524142 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd77558baa64775dcc3ade0e296bf9cc816d533132d7ae13d32edb813a01b75f"} err="failed to get container status \"fd77558baa64775dcc3ade0e296bf9cc816d533132d7ae13d32edb813a01b75f\": rpc error: code = NotFound desc = could not find container \"fd77558baa64775dcc3ade0e296bf9cc816d533132d7ae13d32edb813a01b75f\": container with ID starting with fd77558baa64775dcc3ade0e296bf9cc816d533132d7ae13d32edb813a01b75f not 
found: ID does not exist" Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.524167 4958 scope.go:117] "RemoveContainer" containerID="efaaaae2f01c62de2b25a6147c51ac3e5c69a514076bfea3e65e7e2e9e373907" Oct 08 07:30:25 crc kubenswrapper[4958]: E1008 07:30:25.524409 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efaaaae2f01c62de2b25a6147c51ac3e5c69a514076bfea3e65e7e2e9e373907\": container with ID starting with efaaaae2f01c62de2b25a6147c51ac3e5c69a514076bfea3e65e7e2e9e373907 not found: ID does not exist" containerID="efaaaae2f01c62de2b25a6147c51ac3e5c69a514076bfea3e65e7e2e9e373907" Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.524439 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efaaaae2f01c62de2b25a6147c51ac3e5c69a514076bfea3e65e7e2e9e373907"} err="failed to get container status \"efaaaae2f01c62de2b25a6147c51ac3e5c69a514076bfea3e65e7e2e9e373907\": rpc error: code = NotFound desc = could not find container \"efaaaae2f01c62de2b25a6147c51ac3e5c69a514076bfea3e65e7e2e9e373907\": container with ID starting with efaaaae2f01c62de2b25a6147c51ac3e5c69a514076bfea3e65e7e2e9e373907 not found: ID does not exist" Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.524457 4958 scope.go:117] "RemoveContainer" containerID="c424d0528f4346e27c2ef3bea3c8be7dade9e87b939a51350e0283b2ba0d7d05" Oct 08 07:30:25 crc kubenswrapper[4958]: E1008 07:30:25.524675 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c424d0528f4346e27c2ef3bea3c8be7dade9e87b939a51350e0283b2ba0d7d05\": container with ID starting with c424d0528f4346e27c2ef3bea3c8be7dade9e87b939a51350e0283b2ba0d7d05 not found: ID does not exist" containerID="c424d0528f4346e27c2ef3bea3c8be7dade9e87b939a51350e0283b2ba0d7d05" Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.524700 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c424d0528f4346e27c2ef3bea3c8be7dade9e87b939a51350e0283b2ba0d7d05"} err="failed to get container status \"c424d0528f4346e27c2ef3bea3c8be7dade9e87b939a51350e0283b2ba0d7d05\": rpc error: code = NotFound desc = could not find container \"c424d0528f4346e27c2ef3bea3c8be7dade9e87b939a51350e0283b2ba0d7d05\": container with ID starting with c424d0528f4346e27c2ef3bea3c8be7dade9e87b939a51350e0283b2ba0d7d05 not found: ID does not exist" Oct 08 07:30:25 crc kubenswrapper[4958]: I1008 07:30:25.591248 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4128a97e-1492-44cb-a44b-ee8a0008a858" path="/var/lib/kubelet/pods/4128a97e-1492-44cb-a44b-ee8a0008a858/volumes" Oct 08 07:30:34 crc kubenswrapper[4958]: I1008 07:30:34.576525 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:30:34 crc kubenswrapper[4958]: E1008 07:30:34.579388 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:30:37 crc kubenswrapper[4958]: I1008 07:30:37.093277 4958 scope.go:117] "RemoveContainer" containerID="63354289d7bbb216080433565b50f75d0c1634ab8787014cbc0467f69afad607" Oct 08 07:30:48 crc kubenswrapper[4958]: I1008 07:30:48.576383 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:30:48 crc kubenswrapper[4958]: E1008 07:30:48.577148 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:31:03 crc kubenswrapper[4958]: I1008 07:31:03.576868 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:31:03 crc kubenswrapper[4958]: E1008 07:31:03.577522 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:31:14 crc kubenswrapper[4958]: I1008 07:31:14.576907 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:31:14 crc kubenswrapper[4958]: E1008 07:31:14.577622 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:31:25 crc kubenswrapper[4958]: I1008 07:31:25.577826 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:31:25 crc kubenswrapper[4958]: E1008 07:31:25.578750 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:31:37 crc kubenswrapper[4958]: I1008 07:31:37.582135 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:31:38 crc kubenswrapper[4958]: I1008 07:31:38.106440 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"05d734ead9e898bbd5893b2e71378c00ebce2489a6705aa565ef86c3e47044d2"} Oct 08 07:34:06 crc kubenswrapper[4958]: I1008 07:34:06.844736 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:34:06 crc kubenswrapper[4958]: I1008 07:34:06.845359 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:34:36 crc kubenswrapper[4958]: I1008 07:34:36.845646 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:34:36 crc kubenswrapper[4958]: 
I1008 07:34:36.846454 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:35:06 crc kubenswrapper[4958]: I1008 07:35:06.845602 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:35:06 crc kubenswrapper[4958]: I1008 07:35:06.846387 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:35:06 crc kubenswrapper[4958]: I1008 07:35:06.846460 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 07:35:06 crc kubenswrapper[4958]: I1008 07:35:06.847534 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"05d734ead9e898bbd5893b2e71378c00ebce2489a6705aa565ef86c3e47044d2"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 07:35:06 crc kubenswrapper[4958]: I1008 07:35:06.847658 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" 
containerName="machine-config-daemon" containerID="cri-o://05d734ead9e898bbd5893b2e71378c00ebce2489a6705aa565ef86c3e47044d2" gracePeriod=600 Oct 08 07:35:07 crc kubenswrapper[4958]: I1008 07:35:07.021290 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="05d734ead9e898bbd5893b2e71378c00ebce2489a6705aa565ef86c3e47044d2" exitCode=0 Oct 08 07:35:07 crc kubenswrapper[4958]: I1008 07:35:07.021379 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"05d734ead9e898bbd5893b2e71378c00ebce2489a6705aa565ef86c3e47044d2"} Oct 08 07:35:07 crc kubenswrapper[4958]: I1008 07:35:07.021484 4958 scope.go:117] "RemoveContainer" containerID="0e0bc78e6d12140dc6a9f1677af5c8ea2f31baad61727e0538c6032cfa0d367e" Oct 08 07:35:08 crc kubenswrapper[4958]: I1008 07:35:08.033741 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901"} Oct 08 07:36:39 crc kubenswrapper[4958]: I1008 07:36:39.936435 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kk8np"] Oct 08 07:36:39 crc kubenswrapper[4958]: E1008 07:36:39.937771 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4128a97e-1492-44cb-a44b-ee8a0008a858" containerName="extract-content" Oct 08 07:36:39 crc kubenswrapper[4958]: I1008 07:36:39.937794 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4128a97e-1492-44cb-a44b-ee8a0008a858" containerName="extract-content" Oct 08 07:36:39 crc kubenswrapper[4958]: E1008 07:36:39.937815 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4128a97e-1492-44cb-a44b-ee8a0008a858" 
containerName="registry-server" Oct 08 07:36:39 crc kubenswrapper[4958]: I1008 07:36:39.937827 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4128a97e-1492-44cb-a44b-ee8a0008a858" containerName="registry-server" Oct 08 07:36:39 crc kubenswrapper[4958]: E1008 07:36:39.937843 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4128a97e-1492-44cb-a44b-ee8a0008a858" containerName="extract-utilities" Oct 08 07:36:39 crc kubenswrapper[4958]: I1008 07:36:39.937859 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4128a97e-1492-44cb-a44b-ee8a0008a858" containerName="extract-utilities" Oct 08 07:36:39 crc kubenswrapper[4958]: I1008 07:36:39.938157 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4128a97e-1492-44cb-a44b-ee8a0008a858" containerName="registry-server" Oct 08 07:36:39 crc kubenswrapper[4958]: I1008 07:36:39.939908 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kk8np" Oct 08 07:36:39 crc kubenswrapper[4958]: I1008 07:36:39.966470 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kk8np"] Oct 08 07:36:40 crc kubenswrapper[4958]: I1008 07:36:40.041901 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5faf3f82-75c1-415c-bf3e-9c9e7340f3aa-utilities\") pod \"community-operators-kk8np\" (UID: \"5faf3f82-75c1-415c-bf3e-9c9e7340f3aa\") " pod="openshift-marketplace/community-operators-kk8np" Oct 08 07:36:40 crc kubenswrapper[4958]: I1008 07:36:40.041980 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5faf3f82-75c1-415c-bf3e-9c9e7340f3aa-catalog-content\") pod \"community-operators-kk8np\" (UID: \"5faf3f82-75c1-415c-bf3e-9c9e7340f3aa\") " 
pod="openshift-marketplace/community-operators-kk8np" Oct 08 07:36:40 crc kubenswrapper[4958]: I1008 07:36:40.042013 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7m42\" (UniqueName: \"kubernetes.io/projected/5faf3f82-75c1-415c-bf3e-9c9e7340f3aa-kube-api-access-d7m42\") pod \"community-operators-kk8np\" (UID: \"5faf3f82-75c1-415c-bf3e-9c9e7340f3aa\") " pod="openshift-marketplace/community-operators-kk8np" Oct 08 07:36:40 crc kubenswrapper[4958]: I1008 07:36:40.143335 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7m42\" (UniqueName: \"kubernetes.io/projected/5faf3f82-75c1-415c-bf3e-9c9e7340f3aa-kube-api-access-d7m42\") pod \"community-operators-kk8np\" (UID: \"5faf3f82-75c1-415c-bf3e-9c9e7340f3aa\") " pod="openshift-marketplace/community-operators-kk8np" Oct 08 07:36:40 crc kubenswrapper[4958]: I1008 07:36:40.143482 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5faf3f82-75c1-415c-bf3e-9c9e7340f3aa-utilities\") pod \"community-operators-kk8np\" (UID: \"5faf3f82-75c1-415c-bf3e-9c9e7340f3aa\") " pod="openshift-marketplace/community-operators-kk8np" Oct 08 07:36:40 crc kubenswrapper[4958]: I1008 07:36:40.143518 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5faf3f82-75c1-415c-bf3e-9c9e7340f3aa-catalog-content\") pod \"community-operators-kk8np\" (UID: \"5faf3f82-75c1-415c-bf3e-9c9e7340f3aa\") " pod="openshift-marketplace/community-operators-kk8np" Oct 08 07:36:40 crc kubenswrapper[4958]: I1008 07:36:40.144080 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5faf3f82-75c1-415c-bf3e-9c9e7340f3aa-catalog-content\") pod \"community-operators-kk8np\" (UID: 
\"5faf3f82-75c1-415c-bf3e-9c9e7340f3aa\") " pod="openshift-marketplace/community-operators-kk8np" Oct 08 07:36:40 crc kubenswrapper[4958]: I1008 07:36:40.144483 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5faf3f82-75c1-415c-bf3e-9c9e7340f3aa-utilities\") pod \"community-operators-kk8np\" (UID: \"5faf3f82-75c1-415c-bf3e-9c9e7340f3aa\") " pod="openshift-marketplace/community-operators-kk8np" Oct 08 07:36:40 crc kubenswrapper[4958]: I1008 07:36:40.173531 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7m42\" (UniqueName: \"kubernetes.io/projected/5faf3f82-75c1-415c-bf3e-9c9e7340f3aa-kube-api-access-d7m42\") pod \"community-operators-kk8np\" (UID: \"5faf3f82-75c1-415c-bf3e-9c9e7340f3aa\") " pod="openshift-marketplace/community-operators-kk8np" Oct 08 07:36:40 crc kubenswrapper[4958]: I1008 07:36:40.271921 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kk8np" Oct 08 07:36:40 crc kubenswrapper[4958]: I1008 07:36:40.571450 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kk8np"] Oct 08 07:36:40 crc kubenswrapper[4958]: I1008 07:36:40.992446 4958 generic.go:334] "Generic (PLEG): container finished" podID="5faf3f82-75c1-415c-bf3e-9c9e7340f3aa" containerID="f7ef9d58cb5d8ea0ba6953d9662e38b4d33301fabace4c7e26a5fe534aa33598" exitCode=0 Oct 08 07:36:40 crc kubenswrapper[4958]: I1008 07:36:40.992516 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk8np" event={"ID":"5faf3f82-75c1-415c-bf3e-9c9e7340f3aa","Type":"ContainerDied","Data":"f7ef9d58cb5d8ea0ba6953d9662e38b4d33301fabace4c7e26a5fe534aa33598"} Oct 08 07:36:40 crc kubenswrapper[4958]: I1008 07:36:40.992829 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk8np" 
event={"ID":"5faf3f82-75c1-415c-bf3e-9c9e7340f3aa","Type":"ContainerStarted","Data":"0c7458347392dd42ba3446da8ef5b3c9d06ea6c8a7f7d1bbde02e0bb62155224"} Oct 08 07:36:40 crc kubenswrapper[4958]: I1008 07:36:40.996363 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 07:36:46 crc kubenswrapper[4958]: I1008 07:36:46.047232 4958 generic.go:334] "Generic (PLEG): container finished" podID="5faf3f82-75c1-415c-bf3e-9c9e7340f3aa" containerID="5e21134f517f48a90b624153c217e26fdb466483f3aea00069cbd06a2971d40e" exitCode=0 Oct 08 07:36:46 crc kubenswrapper[4958]: I1008 07:36:46.047361 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk8np" event={"ID":"5faf3f82-75c1-415c-bf3e-9c9e7340f3aa","Type":"ContainerDied","Data":"5e21134f517f48a90b624153c217e26fdb466483f3aea00069cbd06a2971d40e"} Oct 08 07:36:47 crc kubenswrapper[4958]: I1008 07:36:47.067636 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kk8np" event={"ID":"5faf3f82-75c1-415c-bf3e-9c9e7340f3aa","Type":"ContainerStarted","Data":"da044b715704c3dbf684c149f06c8e6bd53cd2c1a79ccfdc3c7ad4d7524fe077"} Oct 08 07:36:50 crc kubenswrapper[4958]: I1008 07:36:50.272686 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kk8np" Oct 08 07:36:50 crc kubenswrapper[4958]: I1008 07:36:50.272923 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kk8np" Oct 08 07:36:50 crc kubenswrapper[4958]: I1008 07:36:50.321534 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kk8np" Oct 08 07:36:50 crc kubenswrapper[4958]: I1008 07:36:50.350274 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kk8np" 
podStartSLOduration=5.83676916 podStartE2EDuration="11.350249529s" podCreationTimestamp="2025-10-08 07:36:39 +0000 UTC" firstStartedPulling="2025-10-08 07:36:40.99589301 +0000 UTC m=+3744.125585651" lastFinishedPulling="2025-10-08 07:36:46.509373379 +0000 UTC m=+3749.639066020" observedRunningTime="2025-10-08 07:36:47.097723868 +0000 UTC m=+3750.227416489" watchObservedRunningTime="2025-10-08 07:36:50.350249529 +0000 UTC m=+3753.479942160" Oct 08 07:36:51 crc kubenswrapper[4958]: I1008 07:36:51.175738 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kk8np" Oct 08 07:36:51 crc kubenswrapper[4958]: I1008 07:36:51.261584 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kk8np"] Oct 08 07:36:51 crc kubenswrapper[4958]: I1008 07:36:51.324160 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhsrr"] Oct 08 07:36:51 crc kubenswrapper[4958]: I1008 07:36:51.325607 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mhsrr" podUID="4125d6f2-8177-45b6-9b72-ed00864ff1ea" containerName="registry-server" containerID="cri-o://033eb90ef9309e868c4aecde110e2fcbdc5b0ae2f6d788262598b66c7c723ff7" gracePeriod=2 Oct 08 07:36:51 crc kubenswrapper[4958]: I1008 07:36:51.771772 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mhsrr" Oct 08 07:36:51 crc kubenswrapper[4958]: I1008 07:36:51.840247 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xp2t\" (UniqueName: \"kubernetes.io/projected/4125d6f2-8177-45b6-9b72-ed00864ff1ea-kube-api-access-9xp2t\") pod \"4125d6f2-8177-45b6-9b72-ed00864ff1ea\" (UID: \"4125d6f2-8177-45b6-9b72-ed00864ff1ea\") " Oct 08 07:36:51 crc kubenswrapper[4958]: I1008 07:36:51.840540 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4125d6f2-8177-45b6-9b72-ed00864ff1ea-utilities\") pod \"4125d6f2-8177-45b6-9b72-ed00864ff1ea\" (UID: \"4125d6f2-8177-45b6-9b72-ed00864ff1ea\") " Oct 08 07:36:51 crc kubenswrapper[4958]: I1008 07:36:51.840603 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4125d6f2-8177-45b6-9b72-ed00864ff1ea-catalog-content\") pod \"4125d6f2-8177-45b6-9b72-ed00864ff1ea\" (UID: \"4125d6f2-8177-45b6-9b72-ed00864ff1ea\") " Oct 08 07:36:51 crc kubenswrapper[4958]: I1008 07:36:51.841970 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4125d6f2-8177-45b6-9b72-ed00864ff1ea-utilities" (OuterVolumeSpecName: "utilities") pod "4125d6f2-8177-45b6-9b72-ed00864ff1ea" (UID: "4125d6f2-8177-45b6-9b72-ed00864ff1ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:36:51 crc kubenswrapper[4958]: I1008 07:36:51.847244 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4125d6f2-8177-45b6-9b72-ed00864ff1ea-kube-api-access-9xp2t" (OuterVolumeSpecName: "kube-api-access-9xp2t") pod "4125d6f2-8177-45b6-9b72-ed00864ff1ea" (UID: "4125d6f2-8177-45b6-9b72-ed00864ff1ea"). InnerVolumeSpecName "kube-api-access-9xp2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:36:51 crc kubenswrapper[4958]: I1008 07:36:51.890518 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4125d6f2-8177-45b6-9b72-ed00864ff1ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4125d6f2-8177-45b6-9b72-ed00864ff1ea" (UID: "4125d6f2-8177-45b6-9b72-ed00864ff1ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:36:51 crc kubenswrapper[4958]: I1008 07:36:51.942610 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4125d6f2-8177-45b6-9b72-ed00864ff1ea-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:36:51 crc kubenswrapper[4958]: I1008 07:36:51.942653 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4125d6f2-8177-45b6-9b72-ed00864ff1ea-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:36:51 crc kubenswrapper[4958]: I1008 07:36:51.942665 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xp2t\" (UniqueName: \"kubernetes.io/projected/4125d6f2-8177-45b6-9b72-ed00864ff1ea-kube-api-access-9xp2t\") on node \"crc\" DevicePath \"\"" Oct 08 07:36:52 crc kubenswrapper[4958]: I1008 07:36:52.123837 4958 generic.go:334] "Generic (PLEG): container finished" podID="4125d6f2-8177-45b6-9b72-ed00864ff1ea" containerID="033eb90ef9309e868c4aecde110e2fcbdc5b0ae2f6d788262598b66c7c723ff7" exitCode=0 Oct 08 07:36:52 crc kubenswrapper[4958]: I1008 07:36:52.123906 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhsrr" event={"ID":"4125d6f2-8177-45b6-9b72-ed00864ff1ea","Type":"ContainerDied","Data":"033eb90ef9309e868c4aecde110e2fcbdc5b0ae2f6d788262598b66c7c723ff7"} Oct 08 07:36:52 crc kubenswrapper[4958]: I1008 07:36:52.124206 4958 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-mhsrr" event={"ID":"4125d6f2-8177-45b6-9b72-ed00864ff1ea","Type":"ContainerDied","Data":"eda969cf7bc020d38b11d8cb509da278773bd0e7387d75039f04bcd28761745b"} Oct 08 07:36:52 crc kubenswrapper[4958]: I1008 07:36:52.123921 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhsrr" Oct 08 07:36:52 crc kubenswrapper[4958]: I1008 07:36:52.124243 4958 scope.go:117] "RemoveContainer" containerID="033eb90ef9309e868c4aecde110e2fcbdc5b0ae2f6d788262598b66c7c723ff7" Oct 08 07:36:52 crc kubenswrapper[4958]: I1008 07:36:52.153939 4958 scope.go:117] "RemoveContainer" containerID="da7759799d618bdb47735a5adc264bfe2152ca195990206c03f1827846a0b642" Oct 08 07:36:52 crc kubenswrapper[4958]: I1008 07:36:52.173934 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhsrr"] Oct 08 07:36:52 crc kubenswrapper[4958]: I1008 07:36:52.178835 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mhsrr"] Oct 08 07:36:52 crc kubenswrapper[4958]: I1008 07:36:52.191226 4958 scope.go:117] "RemoveContainer" containerID="94cda1109a6130e680e2b9292bdbb56e2b8fe6098ba1e0150c9c66be80c2a903" Oct 08 07:36:52 crc kubenswrapper[4958]: I1008 07:36:52.207700 4958 scope.go:117] "RemoveContainer" containerID="033eb90ef9309e868c4aecde110e2fcbdc5b0ae2f6d788262598b66c7c723ff7" Oct 08 07:36:52 crc kubenswrapper[4958]: E1008 07:36:52.208382 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"033eb90ef9309e868c4aecde110e2fcbdc5b0ae2f6d788262598b66c7c723ff7\": container with ID starting with 033eb90ef9309e868c4aecde110e2fcbdc5b0ae2f6d788262598b66c7c723ff7 not found: ID does not exist" containerID="033eb90ef9309e868c4aecde110e2fcbdc5b0ae2f6d788262598b66c7c723ff7" Oct 08 07:36:52 crc kubenswrapper[4958]: I1008 
07:36:52.208412 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"033eb90ef9309e868c4aecde110e2fcbdc5b0ae2f6d788262598b66c7c723ff7"} err="failed to get container status \"033eb90ef9309e868c4aecde110e2fcbdc5b0ae2f6d788262598b66c7c723ff7\": rpc error: code = NotFound desc = could not find container \"033eb90ef9309e868c4aecde110e2fcbdc5b0ae2f6d788262598b66c7c723ff7\": container with ID starting with 033eb90ef9309e868c4aecde110e2fcbdc5b0ae2f6d788262598b66c7c723ff7 not found: ID does not exist" Oct 08 07:36:52 crc kubenswrapper[4958]: I1008 07:36:52.208435 4958 scope.go:117] "RemoveContainer" containerID="da7759799d618bdb47735a5adc264bfe2152ca195990206c03f1827846a0b642" Oct 08 07:36:52 crc kubenswrapper[4958]: E1008 07:36:52.208814 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da7759799d618bdb47735a5adc264bfe2152ca195990206c03f1827846a0b642\": container with ID starting with da7759799d618bdb47735a5adc264bfe2152ca195990206c03f1827846a0b642 not found: ID does not exist" containerID="da7759799d618bdb47735a5adc264bfe2152ca195990206c03f1827846a0b642" Oct 08 07:36:52 crc kubenswrapper[4958]: I1008 07:36:52.208926 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da7759799d618bdb47735a5adc264bfe2152ca195990206c03f1827846a0b642"} err="failed to get container status \"da7759799d618bdb47735a5adc264bfe2152ca195990206c03f1827846a0b642\": rpc error: code = NotFound desc = could not find container \"da7759799d618bdb47735a5adc264bfe2152ca195990206c03f1827846a0b642\": container with ID starting with da7759799d618bdb47735a5adc264bfe2152ca195990206c03f1827846a0b642 not found: ID does not exist" Oct 08 07:36:52 crc kubenswrapper[4958]: I1008 07:36:52.209033 4958 scope.go:117] "RemoveContainer" containerID="94cda1109a6130e680e2b9292bdbb56e2b8fe6098ba1e0150c9c66be80c2a903" Oct 08 07:36:52 crc 
kubenswrapper[4958]: E1008 07:36:52.209395 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94cda1109a6130e680e2b9292bdbb56e2b8fe6098ba1e0150c9c66be80c2a903\": container with ID starting with 94cda1109a6130e680e2b9292bdbb56e2b8fe6098ba1e0150c9c66be80c2a903 not found: ID does not exist" containerID="94cda1109a6130e680e2b9292bdbb56e2b8fe6098ba1e0150c9c66be80c2a903" Oct 08 07:36:52 crc kubenswrapper[4958]: I1008 07:36:52.209481 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94cda1109a6130e680e2b9292bdbb56e2b8fe6098ba1e0150c9c66be80c2a903"} err="failed to get container status \"94cda1109a6130e680e2b9292bdbb56e2b8fe6098ba1e0150c9c66be80c2a903\": rpc error: code = NotFound desc = could not find container \"94cda1109a6130e680e2b9292bdbb56e2b8fe6098ba1e0150c9c66be80c2a903\": container with ID starting with 94cda1109a6130e680e2b9292bdbb56e2b8fe6098ba1e0150c9c66be80c2a903 not found: ID does not exist" Oct 08 07:36:53 crc kubenswrapper[4958]: I1008 07:36:53.598920 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4125d6f2-8177-45b6-9b72-ed00864ff1ea" path="/var/lib/kubelet/pods/4125d6f2-8177-45b6-9b72-ed00864ff1ea/volumes" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.086549 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fbnhk"] Oct 08 07:37:33 crc kubenswrapper[4958]: E1008 07:37:33.088030 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4125d6f2-8177-45b6-9b72-ed00864ff1ea" containerName="extract-content" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.088067 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4125d6f2-8177-45b6-9b72-ed00864ff1ea" containerName="extract-content" Oct 08 07:37:33 crc kubenswrapper[4958]: E1008 07:37:33.088092 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4125d6f2-8177-45b6-9b72-ed00864ff1ea" containerName="registry-server" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.088111 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4125d6f2-8177-45b6-9b72-ed00864ff1ea" containerName="registry-server" Oct 08 07:37:33 crc kubenswrapper[4958]: E1008 07:37:33.088172 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4125d6f2-8177-45b6-9b72-ed00864ff1ea" containerName="extract-utilities" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.088191 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4125d6f2-8177-45b6-9b72-ed00864ff1ea" containerName="extract-utilities" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.088573 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4125d6f2-8177-45b6-9b72-ed00864ff1ea" containerName="registry-server" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.092023 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.096622 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbnhk"] Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.149053 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d205aa1-24b8-4e34-9907-e30d7db45ff5-utilities\") pod \"redhat-operators-fbnhk\" (UID: \"9d205aa1-24b8-4e34-9907-e30d7db45ff5\") " pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.149175 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d205aa1-24b8-4e34-9907-e30d7db45ff5-catalog-content\") pod \"redhat-operators-fbnhk\" (UID: \"9d205aa1-24b8-4e34-9907-e30d7db45ff5\") " 
pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.149246 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2lq7\" (UniqueName: \"kubernetes.io/projected/9d205aa1-24b8-4e34-9907-e30d7db45ff5-kube-api-access-g2lq7\") pod \"redhat-operators-fbnhk\" (UID: \"9d205aa1-24b8-4e34-9907-e30d7db45ff5\") " pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.250371 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d205aa1-24b8-4e34-9907-e30d7db45ff5-utilities\") pod \"redhat-operators-fbnhk\" (UID: \"9d205aa1-24b8-4e34-9907-e30d7db45ff5\") " pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.250487 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d205aa1-24b8-4e34-9907-e30d7db45ff5-catalog-content\") pod \"redhat-operators-fbnhk\" (UID: \"9d205aa1-24b8-4e34-9907-e30d7db45ff5\") " pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.250554 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2lq7\" (UniqueName: \"kubernetes.io/projected/9d205aa1-24b8-4e34-9907-e30d7db45ff5-kube-api-access-g2lq7\") pod \"redhat-operators-fbnhk\" (UID: \"9d205aa1-24b8-4e34-9907-e30d7db45ff5\") " pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.251412 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d205aa1-24b8-4e34-9907-e30d7db45ff5-utilities\") pod \"redhat-operators-fbnhk\" (UID: \"9d205aa1-24b8-4e34-9907-e30d7db45ff5\") " 
pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.251495 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d205aa1-24b8-4e34-9907-e30d7db45ff5-catalog-content\") pod \"redhat-operators-fbnhk\" (UID: \"9d205aa1-24b8-4e34-9907-e30d7db45ff5\") " pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.275991 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2lq7\" (UniqueName: \"kubernetes.io/projected/9d205aa1-24b8-4e34-9907-e30d7db45ff5-kube-api-access-g2lq7\") pod \"redhat-operators-fbnhk\" (UID: \"9d205aa1-24b8-4e34-9907-e30d7db45ff5\") " pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.415259 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:33 crc kubenswrapper[4958]: I1008 07:37:33.867034 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbnhk"] Oct 08 07:37:34 crc kubenswrapper[4958]: I1008 07:37:34.537234 4958 generic.go:334] "Generic (PLEG): container finished" podID="9d205aa1-24b8-4e34-9907-e30d7db45ff5" containerID="a6c3333136747e4de0f15bffef6dc7f76162e4a0ad2b2217c1eed22c8c4e8923" exitCode=0 Oct 08 07:37:34 crc kubenswrapper[4958]: I1008 07:37:34.537427 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbnhk" event={"ID":"9d205aa1-24b8-4e34-9907-e30d7db45ff5","Type":"ContainerDied","Data":"a6c3333136747e4de0f15bffef6dc7f76162e4a0ad2b2217c1eed22c8c4e8923"} Oct 08 07:37:34 crc kubenswrapper[4958]: I1008 07:37:34.537660 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbnhk" 
event={"ID":"9d205aa1-24b8-4e34-9907-e30d7db45ff5","Type":"ContainerStarted","Data":"35f3aa1665abab8a52afdff49f18f6c83d663fe3f461d7998d2b9689fa0ab6b2"} Oct 08 07:37:36 crc kubenswrapper[4958]: I1008 07:37:36.558295 4958 generic.go:334] "Generic (PLEG): container finished" podID="9d205aa1-24b8-4e34-9907-e30d7db45ff5" containerID="968b0f1668de8b1cec250a7618924de12d2feed120c232e432c5b1e13ee79076" exitCode=0 Oct 08 07:37:36 crc kubenswrapper[4958]: I1008 07:37:36.558422 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbnhk" event={"ID":"9d205aa1-24b8-4e34-9907-e30d7db45ff5","Type":"ContainerDied","Data":"968b0f1668de8b1cec250a7618924de12d2feed120c232e432c5b1e13ee79076"} Oct 08 07:37:36 crc kubenswrapper[4958]: I1008 07:37:36.844785 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:37:36 crc kubenswrapper[4958]: I1008 07:37:36.844900 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:37:37 crc kubenswrapper[4958]: I1008 07:37:37.572266 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbnhk" event={"ID":"9d205aa1-24b8-4e34-9907-e30d7db45ff5","Type":"ContainerStarted","Data":"091d3c56c6ad71d3a9755c2d932b2bdb05542a3cf8e44128dd48bc9ff3a69122"} Oct 08 07:37:37 crc kubenswrapper[4958]: I1008 07:37:37.609689 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fbnhk" 
podStartSLOduration=1.9369427350000001 podStartE2EDuration="4.609660862s" podCreationTimestamp="2025-10-08 07:37:33 +0000 UTC" firstStartedPulling="2025-10-08 07:37:34.541918561 +0000 UTC m=+3797.671611202" lastFinishedPulling="2025-10-08 07:37:37.214636698 +0000 UTC m=+3800.344329329" observedRunningTime="2025-10-08 07:37:37.597171026 +0000 UTC m=+3800.726863667" watchObservedRunningTime="2025-10-08 07:37:37.609660862 +0000 UTC m=+3800.739353503" Oct 08 07:37:43 crc kubenswrapper[4958]: I1008 07:37:43.415713 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:43 crc kubenswrapper[4958]: I1008 07:37:43.416558 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:44 crc kubenswrapper[4958]: I1008 07:37:44.497339 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fbnhk" podUID="9d205aa1-24b8-4e34-9907-e30d7db45ff5" containerName="registry-server" probeResult="failure" output=< Oct 08 07:37:44 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 07:37:44 crc kubenswrapper[4958]: > Oct 08 07:37:53 crc kubenswrapper[4958]: I1008 07:37:53.492675 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:53 crc kubenswrapper[4958]: I1008 07:37:53.572024 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:53 crc kubenswrapper[4958]: I1008 07:37:53.740622 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbnhk"] Oct 08 07:37:54 crc kubenswrapper[4958]: I1008 07:37:54.755459 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fbnhk" 
podUID="9d205aa1-24b8-4e34-9907-e30d7db45ff5" containerName="registry-server" containerID="cri-o://091d3c56c6ad71d3a9755c2d932b2bdb05542a3cf8e44128dd48bc9ff3a69122" gracePeriod=2 Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.208694 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.320149 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d205aa1-24b8-4e34-9907-e30d7db45ff5-utilities\") pod \"9d205aa1-24b8-4e34-9907-e30d7db45ff5\" (UID: \"9d205aa1-24b8-4e34-9907-e30d7db45ff5\") " Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.320329 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d205aa1-24b8-4e34-9907-e30d7db45ff5-catalog-content\") pod \"9d205aa1-24b8-4e34-9907-e30d7db45ff5\" (UID: \"9d205aa1-24b8-4e34-9907-e30d7db45ff5\") " Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.320545 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2lq7\" (UniqueName: \"kubernetes.io/projected/9d205aa1-24b8-4e34-9907-e30d7db45ff5-kube-api-access-g2lq7\") pod \"9d205aa1-24b8-4e34-9907-e30d7db45ff5\" (UID: \"9d205aa1-24b8-4e34-9907-e30d7db45ff5\") " Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.321261 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d205aa1-24b8-4e34-9907-e30d7db45ff5-utilities" (OuterVolumeSpecName: "utilities") pod "9d205aa1-24b8-4e34-9907-e30d7db45ff5" (UID: "9d205aa1-24b8-4e34-9907-e30d7db45ff5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.328238 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d205aa1-24b8-4e34-9907-e30d7db45ff5-kube-api-access-g2lq7" (OuterVolumeSpecName: "kube-api-access-g2lq7") pod "9d205aa1-24b8-4e34-9907-e30d7db45ff5" (UID: "9d205aa1-24b8-4e34-9907-e30d7db45ff5"). InnerVolumeSpecName "kube-api-access-g2lq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.422548 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d205aa1-24b8-4e34-9907-e30d7db45ff5-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.422585 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2lq7\" (UniqueName: \"kubernetes.io/projected/9d205aa1-24b8-4e34-9907-e30d7db45ff5-kube-api-access-g2lq7\") on node \"crc\" DevicePath \"\"" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.427142 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d205aa1-24b8-4e34-9907-e30d7db45ff5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d205aa1-24b8-4e34-9907-e30d7db45ff5" (UID: "9d205aa1-24b8-4e34-9907-e30d7db45ff5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.524359 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d205aa1-24b8-4e34-9907-e30d7db45ff5-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.769482 4958 generic.go:334] "Generic (PLEG): container finished" podID="9d205aa1-24b8-4e34-9907-e30d7db45ff5" containerID="091d3c56c6ad71d3a9755c2d932b2bdb05542a3cf8e44128dd48bc9ff3a69122" exitCode=0 Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.769557 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbnhk" event={"ID":"9d205aa1-24b8-4e34-9907-e30d7db45ff5","Type":"ContainerDied","Data":"091d3c56c6ad71d3a9755c2d932b2bdb05542a3cf8e44128dd48bc9ff3a69122"} Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.769597 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fbnhk" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.769645 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbnhk" event={"ID":"9d205aa1-24b8-4e34-9907-e30d7db45ff5","Type":"ContainerDied","Data":"35f3aa1665abab8a52afdff49f18f6c83d663fe3f461d7998d2b9689fa0ab6b2"} Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.769679 4958 scope.go:117] "RemoveContainer" containerID="091d3c56c6ad71d3a9755c2d932b2bdb05542a3cf8e44128dd48bc9ff3a69122" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.802436 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbnhk"] Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.811138 4958 scope.go:117] "RemoveContainer" containerID="968b0f1668de8b1cec250a7618924de12d2feed120c232e432c5b1e13ee79076" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.812081 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fbnhk"] Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.839252 4958 scope.go:117] "RemoveContainer" containerID="a6c3333136747e4de0f15bffef6dc7f76162e4a0ad2b2217c1eed22c8c4e8923" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.887464 4958 scope.go:117] "RemoveContainer" containerID="091d3c56c6ad71d3a9755c2d932b2bdb05542a3cf8e44128dd48bc9ff3a69122" Oct 08 07:37:55 crc kubenswrapper[4958]: E1008 07:37:55.888303 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"091d3c56c6ad71d3a9755c2d932b2bdb05542a3cf8e44128dd48bc9ff3a69122\": container with ID starting with 091d3c56c6ad71d3a9755c2d932b2bdb05542a3cf8e44128dd48bc9ff3a69122 not found: ID does not exist" containerID="091d3c56c6ad71d3a9755c2d932b2bdb05542a3cf8e44128dd48bc9ff3a69122" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.888379 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"091d3c56c6ad71d3a9755c2d932b2bdb05542a3cf8e44128dd48bc9ff3a69122"} err="failed to get container status \"091d3c56c6ad71d3a9755c2d932b2bdb05542a3cf8e44128dd48bc9ff3a69122\": rpc error: code = NotFound desc = could not find container \"091d3c56c6ad71d3a9755c2d932b2bdb05542a3cf8e44128dd48bc9ff3a69122\": container with ID starting with 091d3c56c6ad71d3a9755c2d932b2bdb05542a3cf8e44128dd48bc9ff3a69122 not found: ID does not exist" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.888428 4958 scope.go:117] "RemoveContainer" containerID="968b0f1668de8b1cec250a7618924de12d2feed120c232e432c5b1e13ee79076" Oct 08 07:37:55 crc kubenswrapper[4958]: E1008 07:37:55.889019 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"968b0f1668de8b1cec250a7618924de12d2feed120c232e432c5b1e13ee79076\": container with ID starting with 968b0f1668de8b1cec250a7618924de12d2feed120c232e432c5b1e13ee79076 not found: ID does not exist" containerID="968b0f1668de8b1cec250a7618924de12d2feed120c232e432c5b1e13ee79076" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.889061 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"968b0f1668de8b1cec250a7618924de12d2feed120c232e432c5b1e13ee79076"} err="failed to get container status \"968b0f1668de8b1cec250a7618924de12d2feed120c232e432c5b1e13ee79076\": rpc error: code = NotFound desc = could not find container \"968b0f1668de8b1cec250a7618924de12d2feed120c232e432c5b1e13ee79076\": container with ID starting with 968b0f1668de8b1cec250a7618924de12d2feed120c232e432c5b1e13ee79076 not found: ID does not exist" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.889095 4958 scope.go:117] "RemoveContainer" containerID="a6c3333136747e4de0f15bffef6dc7f76162e4a0ad2b2217c1eed22c8c4e8923" Oct 08 07:37:55 crc kubenswrapper[4958]: E1008 
07:37:55.889817 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6c3333136747e4de0f15bffef6dc7f76162e4a0ad2b2217c1eed22c8c4e8923\": container with ID starting with a6c3333136747e4de0f15bffef6dc7f76162e4a0ad2b2217c1eed22c8c4e8923 not found: ID does not exist" containerID="a6c3333136747e4de0f15bffef6dc7f76162e4a0ad2b2217c1eed22c8c4e8923" Oct 08 07:37:55 crc kubenswrapper[4958]: I1008 07:37:55.889861 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6c3333136747e4de0f15bffef6dc7f76162e4a0ad2b2217c1eed22c8c4e8923"} err="failed to get container status \"a6c3333136747e4de0f15bffef6dc7f76162e4a0ad2b2217c1eed22c8c4e8923\": rpc error: code = NotFound desc = could not find container \"a6c3333136747e4de0f15bffef6dc7f76162e4a0ad2b2217c1eed22c8c4e8923\": container with ID starting with a6c3333136747e4de0f15bffef6dc7f76162e4a0ad2b2217c1eed22c8c4e8923 not found: ID does not exist" Oct 08 07:37:57 crc kubenswrapper[4958]: I1008 07:37:57.588275 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d205aa1-24b8-4e34-9907-e30d7db45ff5" path="/var/lib/kubelet/pods/9d205aa1-24b8-4e34-9907-e30d7db45ff5/volumes" Oct 08 07:38:06 crc kubenswrapper[4958]: I1008 07:38:06.845361 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:38:06 crc kubenswrapper[4958]: I1008 07:38:06.846203 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Oct 08 07:38:36 crc kubenswrapper[4958]: I1008 07:38:36.844543 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:38:36 crc kubenswrapper[4958]: I1008 07:38:36.845405 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:38:36 crc kubenswrapper[4958]: I1008 07:38:36.845495 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 07:38:36 crc kubenswrapper[4958]: I1008 07:38:36.846351 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 07:38:36 crc kubenswrapper[4958]: I1008 07:38:36.846436 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" gracePeriod=600 Oct 08 07:38:36 crc kubenswrapper[4958]: E1008 07:38:36.982741 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:38:37 crc kubenswrapper[4958]: I1008 07:38:37.155634 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" exitCode=0 Oct 08 07:38:37 crc kubenswrapper[4958]: I1008 07:38:37.155732 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901"} Oct 08 07:38:37 crc kubenswrapper[4958]: I1008 07:38:37.155792 4958 scope.go:117] "RemoveContainer" containerID="05d734ead9e898bbd5893b2e71378c00ebce2489a6705aa565ef86c3e47044d2" Oct 08 07:38:37 crc kubenswrapper[4958]: I1008 07:38:37.156713 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:38:37 crc kubenswrapper[4958]: E1008 07:38:37.157233 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:38:48 crc kubenswrapper[4958]: I1008 07:38:48.576429 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:38:48 crc kubenswrapper[4958]: E1008 07:38:48.577666 4958 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:39:00 crc kubenswrapper[4958]: I1008 07:39:00.576373 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:39:00 crc kubenswrapper[4958]: E1008 07:39:00.577464 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:39:11 crc kubenswrapper[4958]: I1008 07:39:11.576914 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:39:11 crc kubenswrapper[4958]: E1008 07:39:11.577899 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:39:24 crc kubenswrapper[4958]: I1008 07:39:24.577652 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:39:24 crc kubenswrapper[4958]: E1008 07:39:24.578823 4958 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:39:36 crc kubenswrapper[4958]: I1008 07:39:36.577494 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:39:36 crc kubenswrapper[4958]: E1008 07:39:36.578506 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:39:51 crc kubenswrapper[4958]: I1008 07:39:51.576792 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:39:51 crc kubenswrapper[4958]: E1008 07:39:51.577737 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:40:04 crc kubenswrapper[4958]: I1008 07:40:04.576729 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:40:04 crc kubenswrapper[4958]: E1008 
07:40:04.577608 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:40:15 crc kubenswrapper[4958]: I1008 07:40:15.577542 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:40:15 crc kubenswrapper[4958]: E1008 07:40:15.578639 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:40:27 crc kubenswrapper[4958]: I1008 07:40:27.586487 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:40:27 crc kubenswrapper[4958]: E1008 07:40:27.587598 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.065979 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4smmc"] Oct 08 07:40:40 crc kubenswrapper[4958]: 
E1008 07:40:40.067193 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d205aa1-24b8-4e34-9907-e30d7db45ff5" containerName="extract-utilities" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.067217 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d205aa1-24b8-4e34-9907-e30d7db45ff5" containerName="extract-utilities" Oct 08 07:40:40 crc kubenswrapper[4958]: E1008 07:40:40.067245 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d205aa1-24b8-4e34-9907-e30d7db45ff5" containerName="registry-server" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.067257 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d205aa1-24b8-4e34-9907-e30d7db45ff5" containerName="registry-server" Oct 08 07:40:40 crc kubenswrapper[4958]: E1008 07:40:40.067291 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d205aa1-24b8-4e34-9907-e30d7db45ff5" containerName="extract-content" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.067303 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d205aa1-24b8-4e34-9907-e30d7db45ff5" containerName="extract-content" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.067606 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d205aa1-24b8-4e34-9907-e30d7db45ff5" containerName="registry-server" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.069454 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.080843 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4smmc"] Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.168398 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-catalog-content\") pod \"redhat-marketplace-4smmc\" (UID: \"d3b894c4-130e-408e-8aa4-8fd92b26ffc6\") " pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.168606 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67hs8\" (UniqueName: \"kubernetes.io/projected/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-kube-api-access-67hs8\") pod \"redhat-marketplace-4smmc\" (UID: \"d3b894c4-130e-408e-8aa4-8fd92b26ffc6\") " pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.168721 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-utilities\") pod \"redhat-marketplace-4smmc\" (UID: \"d3b894c4-130e-408e-8aa4-8fd92b26ffc6\") " pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.270534 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67hs8\" (UniqueName: \"kubernetes.io/projected/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-kube-api-access-67hs8\") pod \"redhat-marketplace-4smmc\" (UID: \"d3b894c4-130e-408e-8aa4-8fd92b26ffc6\") " pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.270653 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-utilities\") pod \"redhat-marketplace-4smmc\" (UID: \"d3b894c4-130e-408e-8aa4-8fd92b26ffc6\") " pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.270753 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-catalog-content\") pod \"redhat-marketplace-4smmc\" (UID: \"d3b894c4-130e-408e-8aa4-8fd92b26ffc6\") " pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.271353 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-catalog-content\") pod \"redhat-marketplace-4smmc\" (UID: \"d3b894c4-130e-408e-8aa4-8fd92b26ffc6\") " pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.271616 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-utilities\") pod \"redhat-marketplace-4smmc\" (UID: \"d3b894c4-130e-408e-8aa4-8fd92b26ffc6\") " pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.298745 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67hs8\" (UniqueName: \"kubernetes.io/projected/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-kube-api-access-67hs8\") pod \"redhat-marketplace-4smmc\" (UID: \"d3b894c4-130e-408e-8aa4-8fd92b26ffc6\") " pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.404598 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.581046 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:40:40 crc kubenswrapper[4958]: E1008 07:40:40.581210 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:40:40 crc kubenswrapper[4958]: I1008 07:40:40.663445 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4smmc"] Oct 08 07:40:40 crc kubenswrapper[4958]: W1008 07:40:40.674467 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3b894c4_130e_408e_8aa4_8fd92b26ffc6.slice/crio-7d3bbe8b03794bc475854cc2b32a652bd34c1ac345bd1460d149cb2012612c75 WatchSource:0}: Error finding container 7d3bbe8b03794bc475854cc2b32a652bd34c1ac345bd1460d149cb2012612c75: Status 404 returned error can't find the container with id 7d3bbe8b03794bc475854cc2b32a652bd34c1ac345bd1460d149cb2012612c75 Oct 08 07:40:41 crc kubenswrapper[4958]: I1008 07:40:41.371074 4958 generic.go:334] "Generic (PLEG): container finished" podID="d3b894c4-130e-408e-8aa4-8fd92b26ffc6" containerID="a56d20b6a00f653d8265048433d10a630bdb07ec794ecf901e8eb8aec3190d88" exitCode=0 Oct 08 07:40:41 crc kubenswrapper[4958]: I1008 07:40:41.371159 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4smmc" 
event={"ID":"d3b894c4-130e-408e-8aa4-8fd92b26ffc6","Type":"ContainerDied","Data":"a56d20b6a00f653d8265048433d10a630bdb07ec794ecf901e8eb8aec3190d88"} Oct 08 07:40:41 crc kubenswrapper[4958]: I1008 07:40:41.371221 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4smmc" event={"ID":"d3b894c4-130e-408e-8aa4-8fd92b26ffc6","Type":"ContainerStarted","Data":"7d3bbe8b03794bc475854cc2b32a652bd34c1ac345bd1460d149cb2012612c75"} Oct 08 07:40:43 crc kubenswrapper[4958]: I1008 07:40:43.390536 4958 generic.go:334] "Generic (PLEG): container finished" podID="d3b894c4-130e-408e-8aa4-8fd92b26ffc6" containerID="1290dea08c5cbb4724287a81ac885e53da94d824521104808bb78da294c2894c" exitCode=0 Oct 08 07:40:43 crc kubenswrapper[4958]: I1008 07:40:43.390604 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4smmc" event={"ID":"d3b894c4-130e-408e-8aa4-8fd92b26ffc6","Type":"ContainerDied","Data":"1290dea08c5cbb4724287a81ac885e53da94d824521104808bb78da294c2894c"} Oct 08 07:40:44 crc kubenswrapper[4958]: I1008 07:40:44.403897 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4smmc" event={"ID":"d3b894c4-130e-408e-8aa4-8fd92b26ffc6","Type":"ContainerStarted","Data":"a7db68dac896a62ebcd3c984fd4997172ff9af1d99992b1a6b8199bfa6b31147"} Oct 08 07:40:44 crc kubenswrapper[4958]: I1008 07:40:44.431627 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4smmc" podStartSLOduration=1.892919284 podStartE2EDuration="4.431603365s" podCreationTimestamp="2025-10-08 07:40:40 +0000 UTC" firstStartedPulling="2025-10-08 07:40:41.375159031 +0000 UTC m=+3984.504851632" lastFinishedPulling="2025-10-08 07:40:43.913843072 +0000 UTC m=+3987.043535713" observedRunningTime="2025-10-08 07:40:44.42254418 +0000 UTC m=+3987.552236781" watchObservedRunningTime="2025-10-08 07:40:44.431603365 +0000 UTC 
m=+3987.561295966" Oct 08 07:40:50 crc kubenswrapper[4958]: I1008 07:40:50.405422 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:50 crc kubenswrapper[4958]: I1008 07:40:50.406306 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:50 crc kubenswrapper[4958]: I1008 07:40:50.460834 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:50 crc kubenswrapper[4958]: I1008 07:40:50.508980 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:50 crc kubenswrapper[4958]: I1008 07:40:50.692186 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4smmc"] Oct 08 07:40:52 crc kubenswrapper[4958]: I1008 07:40:52.476280 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4smmc" podUID="d3b894c4-130e-408e-8aa4-8fd92b26ffc6" containerName="registry-server" containerID="cri-o://a7db68dac896a62ebcd3c984fd4997172ff9af1d99992b1a6b8199bfa6b31147" gracePeriod=2 Oct 08 07:40:52 crc kubenswrapper[4958]: I1008 07:40:52.978889 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.072216 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-utilities\") pod \"d3b894c4-130e-408e-8aa4-8fd92b26ffc6\" (UID: \"d3b894c4-130e-408e-8aa4-8fd92b26ffc6\") " Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.072267 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-catalog-content\") pod \"d3b894c4-130e-408e-8aa4-8fd92b26ffc6\" (UID: \"d3b894c4-130e-408e-8aa4-8fd92b26ffc6\") " Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.072310 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67hs8\" (UniqueName: \"kubernetes.io/projected/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-kube-api-access-67hs8\") pod \"d3b894c4-130e-408e-8aa4-8fd92b26ffc6\" (UID: \"d3b894c4-130e-408e-8aa4-8fd92b26ffc6\") " Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.073225 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-utilities" (OuterVolumeSpecName: "utilities") pod "d3b894c4-130e-408e-8aa4-8fd92b26ffc6" (UID: "d3b894c4-130e-408e-8aa4-8fd92b26ffc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.090214 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-kube-api-access-67hs8" (OuterVolumeSpecName: "kube-api-access-67hs8") pod "d3b894c4-130e-408e-8aa4-8fd92b26ffc6" (UID: "d3b894c4-130e-408e-8aa4-8fd92b26ffc6"). InnerVolumeSpecName "kube-api-access-67hs8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.106105 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3b894c4-130e-408e-8aa4-8fd92b26ffc6" (UID: "d3b894c4-130e-408e-8aa4-8fd92b26ffc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.174031 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.174064 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.174075 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67hs8\" (UniqueName: \"kubernetes.io/projected/d3b894c4-130e-408e-8aa4-8fd92b26ffc6-kube-api-access-67hs8\") on node \"crc\" DevicePath \"\"" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.490872 4958 generic.go:334] "Generic (PLEG): container finished" podID="d3b894c4-130e-408e-8aa4-8fd92b26ffc6" containerID="a7db68dac896a62ebcd3c984fd4997172ff9af1d99992b1a6b8199bfa6b31147" exitCode=0 Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.490982 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4smmc" event={"ID":"d3b894c4-130e-408e-8aa4-8fd92b26ffc6","Type":"ContainerDied","Data":"a7db68dac896a62ebcd3c984fd4997172ff9af1d99992b1a6b8199bfa6b31147"} Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.491047 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-4smmc" event={"ID":"d3b894c4-130e-408e-8aa4-8fd92b26ffc6","Type":"ContainerDied","Data":"7d3bbe8b03794bc475854cc2b32a652bd34c1ac345bd1460d149cb2012612c75"} Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.491080 4958 scope.go:117] "RemoveContainer" containerID="a7db68dac896a62ebcd3c984fd4997172ff9af1d99992b1a6b8199bfa6b31147" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.491082 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4smmc" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.523990 4958 scope.go:117] "RemoveContainer" containerID="1290dea08c5cbb4724287a81ac885e53da94d824521104808bb78da294c2894c" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.543551 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4smmc"] Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.554469 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4smmc"] Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.574215 4958 scope.go:117] "RemoveContainer" containerID="a56d20b6a00f653d8265048433d10a630bdb07ec794ecf901e8eb8aec3190d88" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.595500 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3b894c4-130e-408e-8aa4-8fd92b26ffc6" path="/var/lib/kubelet/pods/d3b894c4-130e-408e-8aa4-8fd92b26ffc6/volumes" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.608731 4958 scope.go:117] "RemoveContainer" containerID="a7db68dac896a62ebcd3c984fd4997172ff9af1d99992b1a6b8199bfa6b31147" Oct 08 07:40:53 crc kubenswrapper[4958]: E1008 07:40:53.609837 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7db68dac896a62ebcd3c984fd4997172ff9af1d99992b1a6b8199bfa6b31147\": container with ID 
starting with a7db68dac896a62ebcd3c984fd4997172ff9af1d99992b1a6b8199bfa6b31147 not found: ID does not exist" containerID="a7db68dac896a62ebcd3c984fd4997172ff9af1d99992b1a6b8199bfa6b31147" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.609909 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7db68dac896a62ebcd3c984fd4997172ff9af1d99992b1a6b8199bfa6b31147"} err="failed to get container status \"a7db68dac896a62ebcd3c984fd4997172ff9af1d99992b1a6b8199bfa6b31147\": rpc error: code = NotFound desc = could not find container \"a7db68dac896a62ebcd3c984fd4997172ff9af1d99992b1a6b8199bfa6b31147\": container with ID starting with a7db68dac896a62ebcd3c984fd4997172ff9af1d99992b1a6b8199bfa6b31147 not found: ID does not exist" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.609994 4958 scope.go:117] "RemoveContainer" containerID="1290dea08c5cbb4724287a81ac885e53da94d824521104808bb78da294c2894c" Oct 08 07:40:53 crc kubenswrapper[4958]: E1008 07:40:53.610655 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1290dea08c5cbb4724287a81ac885e53da94d824521104808bb78da294c2894c\": container with ID starting with 1290dea08c5cbb4724287a81ac885e53da94d824521104808bb78da294c2894c not found: ID does not exist" containerID="1290dea08c5cbb4724287a81ac885e53da94d824521104808bb78da294c2894c" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.610714 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1290dea08c5cbb4724287a81ac885e53da94d824521104808bb78da294c2894c"} err="failed to get container status \"1290dea08c5cbb4724287a81ac885e53da94d824521104808bb78da294c2894c\": rpc error: code = NotFound desc = could not find container \"1290dea08c5cbb4724287a81ac885e53da94d824521104808bb78da294c2894c\": container with ID starting with 1290dea08c5cbb4724287a81ac885e53da94d824521104808bb78da294c2894c not found: 
ID does not exist" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.610757 4958 scope.go:117] "RemoveContainer" containerID="a56d20b6a00f653d8265048433d10a630bdb07ec794ecf901e8eb8aec3190d88" Oct 08 07:40:53 crc kubenswrapper[4958]: E1008 07:40:53.611548 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56d20b6a00f653d8265048433d10a630bdb07ec794ecf901e8eb8aec3190d88\": container with ID starting with a56d20b6a00f653d8265048433d10a630bdb07ec794ecf901e8eb8aec3190d88 not found: ID does not exist" containerID="a56d20b6a00f653d8265048433d10a630bdb07ec794ecf901e8eb8aec3190d88" Oct 08 07:40:53 crc kubenswrapper[4958]: I1008 07:40:53.611645 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56d20b6a00f653d8265048433d10a630bdb07ec794ecf901e8eb8aec3190d88"} err="failed to get container status \"a56d20b6a00f653d8265048433d10a630bdb07ec794ecf901e8eb8aec3190d88\": rpc error: code = NotFound desc = could not find container \"a56d20b6a00f653d8265048433d10a630bdb07ec794ecf901e8eb8aec3190d88\": container with ID starting with a56d20b6a00f653d8265048433d10a630bdb07ec794ecf901e8eb8aec3190d88 not found: ID does not exist" Oct 08 07:40:54 crc kubenswrapper[4958]: I1008 07:40:54.576496 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:40:54 crc kubenswrapper[4958]: E1008 07:40:54.577260 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:41:06 crc kubenswrapper[4958]: I1008 07:41:06.577039 4958 
scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:41:06 crc kubenswrapper[4958]: E1008 07:41:06.579363 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:41:19 crc kubenswrapper[4958]: I1008 07:41:19.576930 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:41:19 crc kubenswrapper[4958]: E1008 07:41:19.579273 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:41:34 crc kubenswrapper[4958]: I1008 07:41:34.577112 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:41:34 crc kubenswrapper[4958]: E1008 07:41:34.578433 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:41:48 crc kubenswrapper[4958]: I1008 
07:41:48.577548 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:41:48 crc kubenswrapper[4958]: E1008 07:41:48.578301 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.466637 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mf56z"] Oct 08 07:41:58 crc kubenswrapper[4958]: E1008 07:41:58.470764 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b894c4-130e-408e-8aa4-8fd92b26ffc6" containerName="extract-content" Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.470897 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b894c4-130e-408e-8aa4-8fd92b26ffc6" containerName="extract-content" Oct 08 07:41:58 crc kubenswrapper[4958]: E1008 07:41:58.471080 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b894c4-130e-408e-8aa4-8fd92b26ffc6" containerName="extract-utilities" Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.471195 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b894c4-130e-408e-8aa4-8fd92b26ffc6" containerName="extract-utilities" Oct 08 07:41:58 crc kubenswrapper[4958]: E1008 07:41:58.471332 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b894c4-130e-408e-8aa4-8fd92b26ffc6" containerName="registry-server" Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.471452 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b894c4-130e-408e-8aa4-8fd92b26ffc6" containerName="registry-server" Oct 08 
07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.471795 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3b894c4-130e-408e-8aa4-8fd92b26ffc6" containerName="registry-server" Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.476341 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.490760 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mf56z"] Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.581922 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4d9870-5b06-469e-b419-4ab854ffdbcd-catalog-content\") pod \"certified-operators-mf56z\" (UID: \"2c4d9870-5b06-469e-b419-4ab854ffdbcd\") " pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.582064 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4d9870-5b06-469e-b419-4ab854ffdbcd-utilities\") pod \"certified-operators-mf56z\" (UID: \"2c4d9870-5b06-469e-b419-4ab854ffdbcd\") " pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.582240 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jn8r\" (UniqueName: \"kubernetes.io/projected/2c4d9870-5b06-469e-b419-4ab854ffdbcd-kube-api-access-2jn8r\") pod \"certified-operators-mf56z\" (UID: \"2c4d9870-5b06-469e-b419-4ab854ffdbcd\") " pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.684311 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2c4d9870-5b06-469e-b419-4ab854ffdbcd-catalog-content\") pod \"certified-operators-mf56z\" (UID: \"2c4d9870-5b06-469e-b419-4ab854ffdbcd\") " pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.684410 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4d9870-5b06-469e-b419-4ab854ffdbcd-utilities\") pod \"certified-operators-mf56z\" (UID: \"2c4d9870-5b06-469e-b419-4ab854ffdbcd\") " pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.684458 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jn8r\" (UniqueName: \"kubernetes.io/projected/2c4d9870-5b06-469e-b419-4ab854ffdbcd-kube-api-access-2jn8r\") pod \"certified-operators-mf56z\" (UID: \"2c4d9870-5b06-469e-b419-4ab854ffdbcd\") " pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.685290 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4d9870-5b06-469e-b419-4ab854ffdbcd-utilities\") pod \"certified-operators-mf56z\" (UID: \"2c4d9870-5b06-469e-b419-4ab854ffdbcd\") " pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.685313 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4d9870-5b06-469e-b419-4ab854ffdbcd-catalog-content\") pod \"certified-operators-mf56z\" (UID: \"2c4d9870-5b06-469e-b419-4ab854ffdbcd\") " pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.709097 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jn8r\" (UniqueName: 
\"kubernetes.io/projected/2c4d9870-5b06-469e-b419-4ab854ffdbcd-kube-api-access-2jn8r\") pod \"certified-operators-mf56z\" (UID: \"2c4d9870-5b06-469e-b419-4ab854ffdbcd\") " pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:41:58 crc kubenswrapper[4958]: I1008 07:41:58.812913 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:41:59 crc kubenswrapper[4958]: I1008 07:41:59.322100 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mf56z"] Oct 08 07:42:00 crc kubenswrapper[4958]: I1008 07:42:00.165686 4958 generic.go:334] "Generic (PLEG): container finished" podID="2c4d9870-5b06-469e-b419-4ab854ffdbcd" containerID="b2bb8fd035fe126cd4a07004e7aab4ab36161f5e6a8215719ff4e961d2cddb95" exitCode=0 Oct 08 07:42:00 crc kubenswrapper[4958]: I1008 07:42:00.165873 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mf56z" event={"ID":"2c4d9870-5b06-469e-b419-4ab854ffdbcd","Type":"ContainerDied","Data":"b2bb8fd035fe126cd4a07004e7aab4ab36161f5e6a8215719ff4e961d2cddb95"} Oct 08 07:42:00 crc kubenswrapper[4958]: I1008 07:42:00.166107 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mf56z" event={"ID":"2c4d9870-5b06-469e-b419-4ab854ffdbcd","Type":"ContainerStarted","Data":"7892e60f770d5838f25be8d5fa56f9c7e2957a898d611009b5a0abf96e63a82f"} Oct 08 07:42:00 crc kubenswrapper[4958]: I1008 07:42:00.170058 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 07:42:01 crc kubenswrapper[4958]: I1008 07:42:01.577127 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:42:01 crc kubenswrapper[4958]: E1008 07:42:01.578122 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:42:02 crc kubenswrapper[4958]: I1008 07:42:02.188512 4958 generic.go:334] "Generic (PLEG): container finished" podID="2c4d9870-5b06-469e-b419-4ab854ffdbcd" containerID="5aaa614d0451014a2efaa92a59ccbd3f67e2c33f8671fb7047e5fb31eaf4a7d5" exitCode=0 Oct 08 07:42:02 crc kubenswrapper[4958]: I1008 07:42:02.188586 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mf56z" event={"ID":"2c4d9870-5b06-469e-b419-4ab854ffdbcd","Type":"ContainerDied","Data":"5aaa614d0451014a2efaa92a59ccbd3f67e2c33f8671fb7047e5fb31eaf4a7d5"} Oct 08 07:42:03 crc kubenswrapper[4958]: I1008 07:42:03.199019 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mf56z" event={"ID":"2c4d9870-5b06-469e-b419-4ab854ffdbcd","Type":"ContainerStarted","Data":"564e85ae882886a5ef08a4fd0d994be631e6a721477ebeb3f76280b4b61f3883"} Oct 08 07:42:03 crc kubenswrapper[4958]: I1008 07:42:03.229423 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mf56z" podStartSLOduration=2.771570032 podStartE2EDuration="5.229398046s" podCreationTimestamp="2025-10-08 07:41:58 +0000 UTC" firstStartedPulling="2025-10-08 07:42:00.169605741 +0000 UTC m=+4063.299298372" lastFinishedPulling="2025-10-08 07:42:02.627433745 +0000 UTC m=+4065.757126386" observedRunningTime="2025-10-08 07:42:03.227849084 +0000 UTC m=+4066.357541765" watchObservedRunningTime="2025-10-08 07:42:03.229398046 +0000 UTC m=+4066.359090677" Oct 08 07:42:08 crc kubenswrapper[4958]: I1008 07:42:08.813700 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:42:08 crc kubenswrapper[4958]: I1008 07:42:08.814363 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:42:08 crc kubenswrapper[4958]: I1008 07:42:08.899892 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:42:09 crc kubenswrapper[4958]: I1008 07:42:09.334389 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:42:09 crc kubenswrapper[4958]: I1008 07:42:09.401673 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mf56z"] Oct 08 07:42:11 crc kubenswrapper[4958]: I1008 07:42:11.276833 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mf56z" podUID="2c4d9870-5b06-469e-b419-4ab854ffdbcd" containerName="registry-server" containerID="cri-o://564e85ae882886a5ef08a4fd0d994be631e6a721477ebeb3f76280b4b61f3883" gracePeriod=2 Oct 08 07:42:11 crc kubenswrapper[4958]: I1008 07:42:11.753444 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:42:11 crc kubenswrapper[4958]: I1008 07:42:11.913376 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4d9870-5b06-469e-b419-4ab854ffdbcd-catalog-content\") pod \"2c4d9870-5b06-469e-b419-4ab854ffdbcd\" (UID: \"2c4d9870-5b06-469e-b419-4ab854ffdbcd\") " Oct 08 07:42:11 crc kubenswrapper[4958]: I1008 07:42:11.913570 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4d9870-5b06-469e-b419-4ab854ffdbcd-utilities\") pod \"2c4d9870-5b06-469e-b419-4ab854ffdbcd\" (UID: \"2c4d9870-5b06-469e-b419-4ab854ffdbcd\") " Oct 08 07:42:11 crc kubenswrapper[4958]: I1008 07:42:11.913593 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jn8r\" (UniqueName: \"kubernetes.io/projected/2c4d9870-5b06-469e-b419-4ab854ffdbcd-kube-api-access-2jn8r\") pod \"2c4d9870-5b06-469e-b419-4ab854ffdbcd\" (UID: \"2c4d9870-5b06-469e-b419-4ab854ffdbcd\") " Oct 08 07:42:11 crc kubenswrapper[4958]: I1008 07:42:11.920793 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c4d9870-5b06-469e-b419-4ab854ffdbcd-kube-api-access-2jn8r" (OuterVolumeSpecName: "kube-api-access-2jn8r") pod "2c4d9870-5b06-469e-b419-4ab854ffdbcd" (UID: "2c4d9870-5b06-469e-b419-4ab854ffdbcd"). InnerVolumeSpecName "kube-api-access-2jn8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:42:11 crc kubenswrapper[4958]: I1008 07:42:11.921850 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c4d9870-5b06-469e-b419-4ab854ffdbcd-utilities" (OuterVolumeSpecName: "utilities") pod "2c4d9870-5b06-469e-b419-4ab854ffdbcd" (UID: "2c4d9870-5b06-469e-b419-4ab854ffdbcd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.016064 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4d9870-5b06-469e-b419-4ab854ffdbcd-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.016122 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jn8r\" (UniqueName: \"kubernetes.io/projected/2c4d9870-5b06-469e-b419-4ab854ffdbcd-kube-api-access-2jn8r\") on node \"crc\" DevicePath \"\"" Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.188421 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c4d9870-5b06-469e-b419-4ab854ffdbcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c4d9870-5b06-469e-b419-4ab854ffdbcd" (UID: "2c4d9870-5b06-469e-b419-4ab854ffdbcd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.219148 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4d9870-5b06-469e-b419-4ab854ffdbcd-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.289627 4958 generic.go:334] "Generic (PLEG): container finished" podID="2c4d9870-5b06-469e-b419-4ab854ffdbcd" containerID="564e85ae882886a5ef08a4fd0d994be631e6a721477ebeb3f76280b4b61f3883" exitCode=0 Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.289712 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mf56z" event={"ID":"2c4d9870-5b06-469e-b419-4ab854ffdbcd","Type":"ContainerDied","Data":"564e85ae882886a5ef08a4fd0d994be631e6a721477ebeb3f76280b4b61f3883"} Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.289737 4958 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mf56z" Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.289765 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mf56z" event={"ID":"2c4d9870-5b06-469e-b419-4ab854ffdbcd","Type":"ContainerDied","Data":"7892e60f770d5838f25be8d5fa56f9c7e2957a898d611009b5a0abf96e63a82f"} Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.289805 4958 scope.go:117] "RemoveContainer" containerID="564e85ae882886a5ef08a4fd0d994be631e6a721477ebeb3f76280b4b61f3883" Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.316444 4958 scope.go:117] "RemoveContainer" containerID="5aaa614d0451014a2efaa92a59ccbd3f67e2c33f8671fb7047e5fb31eaf4a7d5" Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.338697 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mf56z"] Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.351301 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mf56z"] Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.366030 4958 scope.go:117] "RemoveContainer" containerID="b2bb8fd035fe126cd4a07004e7aab4ab36161f5e6a8215719ff4e961d2cddb95" Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.387879 4958 scope.go:117] "RemoveContainer" containerID="564e85ae882886a5ef08a4fd0d994be631e6a721477ebeb3f76280b4b61f3883" Oct 08 07:42:12 crc kubenswrapper[4958]: E1008 07:42:12.388303 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"564e85ae882886a5ef08a4fd0d994be631e6a721477ebeb3f76280b4b61f3883\": container with ID starting with 564e85ae882886a5ef08a4fd0d994be631e6a721477ebeb3f76280b4b61f3883 not found: ID does not exist" containerID="564e85ae882886a5ef08a4fd0d994be631e6a721477ebeb3f76280b4b61f3883" Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.388365 
4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"564e85ae882886a5ef08a4fd0d994be631e6a721477ebeb3f76280b4b61f3883"} err="failed to get container status \"564e85ae882886a5ef08a4fd0d994be631e6a721477ebeb3f76280b4b61f3883\": rpc error: code = NotFound desc = could not find container \"564e85ae882886a5ef08a4fd0d994be631e6a721477ebeb3f76280b4b61f3883\": container with ID starting with 564e85ae882886a5ef08a4fd0d994be631e6a721477ebeb3f76280b4b61f3883 not found: ID does not exist" Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.388436 4958 scope.go:117] "RemoveContainer" containerID="5aaa614d0451014a2efaa92a59ccbd3f67e2c33f8671fb7047e5fb31eaf4a7d5" Oct 08 07:42:12 crc kubenswrapper[4958]: E1008 07:42:12.389165 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aaa614d0451014a2efaa92a59ccbd3f67e2c33f8671fb7047e5fb31eaf4a7d5\": container with ID starting with 5aaa614d0451014a2efaa92a59ccbd3f67e2c33f8671fb7047e5fb31eaf4a7d5 not found: ID does not exist" containerID="5aaa614d0451014a2efaa92a59ccbd3f67e2c33f8671fb7047e5fb31eaf4a7d5" Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.389192 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aaa614d0451014a2efaa92a59ccbd3f67e2c33f8671fb7047e5fb31eaf4a7d5"} err="failed to get container status \"5aaa614d0451014a2efaa92a59ccbd3f67e2c33f8671fb7047e5fb31eaf4a7d5\": rpc error: code = NotFound desc = could not find container \"5aaa614d0451014a2efaa92a59ccbd3f67e2c33f8671fb7047e5fb31eaf4a7d5\": container with ID starting with 5aaa614d0451014a2efaa92a59ccbd3f67e2c33f8671fb7047e5fb31eaf4a7d5 not found: ID does not exist" Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.389217 4958 scope.go:117] "RemoveContainer" containerID="b2bb8fd035fe126cd4a07004e7aab4ab36161f5e6a8215719ff4e961d2cddb95" Oct 08 07:42:12 crc kubenswrapper[4958]: E1008 
07:42:12.389496 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2bb8fd035fe126cd4a07004e7aab4ab36161f5e6a8215719ff4e961d2cddb95\": container with ID starting with b2bb8fd035fe126cd4a07004e7aab4ab36161f5e6a8215719ff4e961d2cddb95 not found: ID does not exist" containerID="b2bb8fd035fe126cd4a07004e7aab4ab36161f5e6a8215719ff4e961d2cddb95" Oct 08 07:42:12 crc kubenswrapper[4958]: I1008 07:42:12.389543 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2bb8fd035fe126cd4a07004e7aab4ab36161f5e6a8215719ff4e961d2cddb95"} err="failed to get container status \"b2bb8fd035fe126cd4a07004e7aab4ab36161f5e6a8215719ff4e961d2cddb95\": rpc error: code = NotFound desc = could not find container \"b2bb8fd035fe126cd4a07004e7aab4ab36161f5e6a8215719ff4e961d2cddb95\": container with ID starting with b2bb8fd035fe126cd4a07004e7aab4ab36161f5e6a8215719ff4e961d2cddb95 not found: ID does not exist" Oct 08 07:42:13 crc kubenswrapper[4958]: I1008 07:42:13.598321 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c4d9870-5b06-469e-b419-4ab854ffdbcd" path="/var/lib/kubelet/pods/2c4d9870-5b06-469e-b419-4ab854ffdbcd/volumes" Oct 08 07:42:16 crc kubenswrapper[4958]: I1008 07:42:16.576922 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:42:16 crc kubenswrapper[4958]: E1008 07:42:16.578263 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:42:29 crc kubenswrapper[4958]: I1008 07:42:29.577061 
4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:42:29 crc kubenswrapper[4958]: E1008 07:42:29.577854 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:42:41 crc kubenswrapper[4958]: I1008 07:42:41.576417 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:42:41 crc kubenswrapper[4958]: E1008 07:42:41.577682 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:42:54 crc kubenswrapper[4958]: I1008 07:42:54.576187 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:42:54 crc kubenswrapper[4958]: E1008 07:42:54.577001 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:43:08 crc kubenswrapper[4958]: I1008 
07:43:08.577805 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:43:08 crc kubenswrapper[4958]: E1008 07:43:08.579039 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:43:21 crc kubenswrapper[4958]: I1008 07:43:21.576196 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:43:21 crc kubenswrapper[4958]: E1008 07:43:21.577143 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:43:34 crc kubenswrapper[4958]: I1008 07:43:34.577434 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:43:34 crc kubenswrapper[4958]: E1008 07:43:34.578260 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:43:47 crc 
kubenswrapper[4958]: I1008 07:43:47.590161 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:43:48 crc kubenswrapper[4958]: I1008 07:43:48.181656 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"23c15f33914bd182ac7e9fb57f2b0d3cd41960e52d2608529d546bb75834d06c"} Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.169719 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p"] Oct 08 07:45:00 crc kubenswrapper[4958]: E1008 07:45:00.170699 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4d9870-5b06-469e-b419-4ab854ffdbcd" containerName="registry-server" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.170717 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4d9870-5b06-469e-b419-4ab854ffdbcd" containerName="registry-server" Oct 08 07:45:00 crc kubenswrapper[4958]: E1008 07:45:00.170740 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4d9870-5b06-469e-b419-4ab854ffdbcd" containerName="extract-content" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.170749 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4d9870-5b06-469e-b419-4ab854ffdbcd" containerName="extract-content" Oct 08 07:45:00 crc kubenswrapper[4958]: E1008 07:45:00.170762 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4d9870-5b06-469e-b419-4ab854ffdbcd" containerName="extract-utilities" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.170771 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4d9870-5b06-469e-b419-4ab854ffdbcd" containerName="extract-utilities" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.170987 4958 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2c4d9870-5b06-469e-b419-4ab854ffdbcd" containerName="registry-server" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.171559 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.174057 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.177701 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.184161 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p"] Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.315481 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-secret-volume\") pod \"collect-profiles-29331825-vxz7p\" (UID: \"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.315567 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkzww\" (UniqueName: \"kubernetes.io/projected/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-kube-api-access-nkzww\") pod \"collect-profiles-29331825-vxz7p\" (UID: \"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.315610 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-config-volume\") pod \"collect-profiles-29331825-vxz7p\" (UID: \"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.417013 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-secret-volume\") pod \"collect-profiles-29331825-vxz7p\" (UID: \"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.417117 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkzww\" (UniqueName: \"kubernetes.io/projected/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-kube-api-access-nkzww\") pod \"collect-profiles-29331825-vxz7p\" (UID: \"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.417177 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-config-volume\") pod \"collect-profiles-29331825-vxz7p\" (UID: \"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.418732 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-config-volume\") pod \"collect-profiles-29331825-vxz7p\" (UID: \"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.426744 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-secret-volume\") pod \"collect-profiles-29331825-vxz7p\" (UID: \"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.446003 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkzww\" (UniqueName: \"kubernetes.io/projected/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-kube-api-access-nkzww\") pod \"collect-profiles-29331825-vxz7p\" (UID: \"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.503004 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" Oct 08 07:45:00 crc kubenswrapper[4958]: I1008 07:45:00.968073 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p"] Oct 08 07:45:01 crc kubenswrapper[4958]: I1008 07:45:01.883912 4958 generic.go:334] "Generic (PLEG): container finished" podID="dc2b48cb-ed9c-4fff-91e3-7485b2b591d6" containerID="55588398582d75047ea9c3f933e2be654e92ff94c9dd8bf384870134b08f655f" exitCode=0 Oct 08 07:45:01 crc kubenswrapper[4958]: I1008 07:45:01.884027 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" event={"ID":"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6","Type":"ContainerDied","Data":"55588398582d75047ea9c3f933e2be654e92ff94c9dd8bf384870134b08f655f"} Oct 08 07:45:01 crc kubenswrapper[4958]: I1008 07:45:01.884419 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" 
event={"ID":"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6","Type":"ContainerStarted","Data":"67385e7698dccbcdbda091a0c20e6d60169b5e891a5a68c43f95cb731c8490e2"} Oct 08 07:45:03 crc kubenswrapper[4958]: I1008 07:45:03.250750 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" Oct 08 07:45:03 crc kubenswrapper[4958]: I1008 07:45:03.359873 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkzww\" (UniqueName: \"kubernetes.io/projected/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-kube-api-access-nkzww\") pod \"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6\" (UID: \"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6\") " Oct 08 07:45:03 crc kubenswrapper[4958]: I1008 07:45:03.359917 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-secret-volume\") pod \"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6\" (UID: \"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6\") " Oct 08 07:45:03 crc kubenswrapper[4958]: I1008 07:45:03.359970 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-config-volume\") pod \"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6\" (UID: \"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6\") " Oct 08 07:45:03 crc kubenswrapper[4958]: I1008 07:45:03.360767 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-config-volume" (OuterVolumeSpecName: "config-volume") pod "dc2b48cb-ed9c-4fff-91e3-7485b2b591d6" (UID: "dc2b48cb-ed9c-4fff-91e3-7485b2b591d6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:45:03 crc kubenswrapper[4958]: I1008 07:45:03.366231 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-kube-api-access-nkzww" (OuterVolumeSpecName: "kube-api-access-nkzww") pod "dc2b48cb-ed9c-4fff-91e3-7485b2b591d6" (UID: "dc2b48cb-ed9c-4fff-91e3-7485b2b591d6"). InnerVolumeSpecName "kube-api-access-nkzww". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:45:03 crc kubenswrapper[4958]: I1008 07:45:03.367424 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dc2b48cb-ed9c-4fff-91e3-7485b2b591d6" (UID: "dc2b48cb-ed9c-4fff-91e3-7485b2b591d6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 07:45:03 crc kubenswrapper[4958]: I1008 07:45:03.461835 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkzww\" (UniqueName: \"kubernetes.io/projected/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-kube-api-access-nkzww\") on node \"crc\" DevicePath \"\"" Oct 08 07:45:03 crc kubenswrapper[4958]: I1008 07:45:03.461892 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 07:45:03 crc kubenswrapper[4958]: I1008 07:45:03.461913 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 07:45:03 crc kubenswrapper[4958]: I1008 07:45:03.908349 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" 
event={"ID":"dc2b48cb-ed9c-4fff-91e3-7485b2b591d6","Type":"ContainerDied","Data":"67385e7698dccbcdbda091a0c20e6d60169b5e891a5a68c43f95cb731c8490e2"} Oct 08 07:45:03 crc kubenswrapper[4958]: I1008 07:45:03.908450 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67385e7698dccbcdbda091a0c20e6d60169b5e891a5a68c43f95cb731c8490e2" Oct 08 07:45:03 crc kubenswrapper[4958]: I1008 07:45:03.908424 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p" Oct 08 07:45:04 crc kubenswrapper[4958]: I1008 07:45:04.363227 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw"] Oct 08 07:45:04 crc kubenswrapper[4958]: I1008 07:45:04.371768 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331780-9vdpw"] Oct 08 07:45:05 crc kubenswrapper[4958]: I1008 07:45:05.594503 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b" path="/var/lib/kubelet/pods/c8fbe0fa-1b0b-41f3-8463-548b51bd2b6b/volumes" Oct 08 07:45:37 crc kubenswrapper[4958]: I1008 07:45:37.521856 4958 scope.go:117] "RemoveContainer" containerID="1c119abcc4768138ea47c3d79c0ea306ac6280fd5806151f0b863b74fbb8c2f7" Oct 08 07:46:06 crc kubenswrapper[4958]: I1008 07:46:06.844596 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:46:06 crc kubenswrapper[4958]: I1008 07:46:06.845619 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:46:36 crc kubenswrapper[4958]: I1008 07:46:36.845611 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:46:36 crc kubenswrapper[4958]: I1008 07:46:36.846348 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:47:06 crc kubenswrapper[4958]: I1008 07:47:06.845312 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:47:06 crc kubenswrapper[4958]: I1008 07:47:06.846043 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:47:06 crc kubenswrapper[4958]: I1008 07:47:06.846125 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 07:47:06 crc kubenswrapper[4958]: I1008 07:47:06.847185 4958 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"23c15f33914bd182ac7e9fb57f2b0d3cd41960e52d2608529d546bb75834d06c"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 07:47:06 crc kubenswrapper[4958]: I1008 07:47:06.847285 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://23c15f33914bd182ac7e9fb57f2b0d3cd41960e52d2608529d546bb75834d06c" gracePeriod=600 Oct 08 07:47:07 crc kubenswrapper[4958]: I1008 07:47:07.064842 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="23c15f33914bd182ac7e9fb57f2b0d3cd41960e52d2608529d546bb75834d06c" exitCode=0 Oct 08 07:47:07 crc kubenswrapper[4958]: I1008 07:47:07.064914 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"23c15f33914bd182ac7e9fb57f2b0d3cd41960e52d2608529d546bb75834d06c"} Oct 08 07:47:07 crc kubenswrapper[4958]: I1008 07:47:07.065018 4958 scope.go:117] "RemoveContainer" containerID="3c5aac7470829f27c06bc084bbba3bcfe3ccde96c5687affb371350d471d0901" Oct 08 07:47:08 crc kubenswrapper[4958]: I1008 07:47:08.072040 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d"} Oct 08 07:48:23 crc kubenswrapper[4958]: I1008 07:48:23.476297 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bl2jp"] Oct 08 07:48:23 crc 
kubenswrapper[4958]: E1008 07:48:23.477562 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc2b48cb-ed9c-4fff-91e3-7485b2b591d6" containerName="collect-profiles" Oct 08 07:48:23 crc kubenswrapper[4958]: I1008 07:48:23.477587 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc2b48cb-ed9c-4fff-91e3-7485b2b591d6" containerName="collect-profiles" Oct 08 07:48:23 crc kubenswrapper[4958]: I1008 07:48:23.477876 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc2b48cb-ed9c-4fff-91e3-7485b2b591d6" containerName="collect-profiles" Oct 08 07:48:23 crc kubenswrapper[4958]: I1008 07:48:23.479714 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:23 crc kubenswrapper[4958]: I1008 07:48:23.500389 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bl2jp"] Oct 08 07:48:23 crc kubenswrapper[4958]: I1008 07:48:23.540809 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fa6f175-384f-4a41-8818-ef5ab80d6286-utilities\") pod \"redhat-operators-bl2jp\" (UID: \"3fa6f175-384f-4a41-8818-ef5ab80d6286\") " pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:23 crc kubenswrapper[4958]: I1008 07:48:23.540867 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fa6f175-384f-4a41-8818-ef5ab80d6286-catalog-content\") pod \"redhat-operators-bl2jp\" (UID: \"3fa6f175-384f-4a41-8818-ef5ab80d6286\") " pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:23 crc kubenswrapper[4958]: I1008 07:48:23.540899 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s796z\" (UniqueName: 
\"kubernetes.io/projected/3fa6f175-384f-4a41-8818-ef5ab80d6286-kube-api-access-s796z\") pod \"redhat-operators-bl2jp\" (UID: \"3fa6f175-384f-4a41-8818-ef5ab80d6286\") " pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:23 crc kubenswrapper[4958]: I1008 07:48:23.642190 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fa6f175-384f-4a41-8818-ef5ab80d6286-utilities\") pod \"redhat-operators-bl2jp\" (UID: \"3fa6f175-384f-4a41-8818-ef5ab80d6286\") " pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:23 crc kubenswrapper[4958]: I1008 07:48:23.642260 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fa6f175-384f-4a41-8818-ef5ab80d6286-catalog-content\") pod \"redhat-operators-bl2jp\" (UID: \"3fa6f175-384f-4a41-8818-ef5ab80d6286\") " pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:23 crc kubenswrapper[4958]: I1008 07:48:23.642288 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s796z\" (UniqueName: \"kubernetes.io/projected/3fa6f175-384f-4a41-8818-ef5ab80d6286-kube-api-access-s796z\") pod \"redhat-operators-bl2jp\" (UID: \"3fa6f175-384f-4a41-8818-ef5ab80d6286\") " pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:23 crc kubenswrapper[4958]: I1008 07:48:23.643094 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fa6f175-384f-4a41-8818-ef5ab80d6286-utilities\") pod \"redhat-operators-bl2jp\" (UID: \"3fa6f175-384f-4a41-8818-ef5ab80d6286\") " pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:23 crc kubenswrapper[4958]: I1008 07:48:23.643142 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3fa6f175-384f-4a41-8818-ef5ab80d6286-catalog-content\") pod \"redhat-operators-bl2jp\" (UID: \"3fa6f175-384f-4a41-8818-ef5ab80d6286\") " pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:23 crc kubenswrapper[4958]: I1008 07:48:23.671068 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s796z\" (UniqueName: \"kubernetes.io/projected/3fa6f175-384f-4a41-8818-ef5ab80d6286-kube-api-access-s796z\") pod \"redhat-operators-bl2jp\" (UID: \"3fa6f175-384f-4a41-8818-ef5ab80d6286\") " pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:23 crc kubenswrapper[4958]: I1008 07:48:23.806085 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:24 crc kubenswrapper[4958]: I1008 07:48:24.049531 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bl2jp"] Oct 08 07:48:24 crc kubenswrapper[4958]: I1008 07:48:24.793034 4958 generic.go:334] "Generic (PLEG): container finished" podID="3fa6f175-384f-4a41-8818-ef5ab80d6286" containerID="49426fb8af68b7283056c29443fc2aef7c8bb4cb2e6625a3763efda5323658f1" exitCode=0 Oct 08 07:48:24 crc kubenswrapper[4958]: I1008 07:48:24.793122 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl2jp" event={"ID":"3fa6f175-384f-4a41-8818-ef5ab80d6286","Type":"ContainerDied","Data":"49426fb8af68b7283056c29443fc2aef7c8bb4cb2e6625a3763efda5323658f1"} Oct 08 07:48:24 crc kubenswrapper[4958]: I1008 07:48:24.793418 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl2jp" event={"ID":"3fa6f175-384f-4a41-8818-ef5ab80d6286","Type":"ContainerStarted","Data":"0ea5a36db254c289aacb208a289ad235448e85b2e632148c64a3915f0ed0223f"} Oct 08 07:48:24 crc kubenswrapper[4958]: I1008 07:48:24.795152 4958 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Oct 08 07:48:26 crc kubenswrapper[4958]: I1008 07:48:26.819355 4958 generic.go:334] "Generic (PLEG): container finished" podID="3fa6f175-384f-4a41-8818-ef5ab80d6286" containerID="8e2621390c2a2d7069117ab88847a44d59d0a0d0b9a800d2fa049f11896da2cf" exitCode=0 Oct 08 07:48:26 crc kubenswrapper[4958]: I1008 07:48:26.819445 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl2jp" event={"ID":"3fa6f175-384f-4a41-8818-ef5ab80d6286","Type":"ContainerDied","Data":"8e2621390c2a2d7069117ab88847a44d59d0a0d0b9a800d2fa049f11896da2cf"} Oct 08 07:48:27 crc kubenswrapper[4958]: I1008 07:48:27.832432 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl2jp" event={"ID":"3fa6f175-384f-4a41-8818-ef5ab80d6286","Type":"ContainerStarted","Data":"fc6fd5189253d3483b2e548e079fadfa0d6321c074bad450fef677a70c1c5d4e"} Oct 08 07:48:27 crc kubenswrapper[4958]: I1008 07:48:27.862090 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bl2jp" podStartSLOduration=2.425552416 podStartE2EDuration="4.862073333s" podCreationTimestamp="2025-10-08 07:48:23 +0000 UTC" firstStartedPulling="2025-10-08 07:48:24.794734554 +0000 UTC m=+4447.924427195" lastFinishedPulling="2025-10-08 07:48:27.231255511 +0000 UTC m=+4450.360948112" observedRunningTime="2025-10-08 07:48:27.859490323 +0000 UTC m=+4450.989182934" watchObservedRunningTime="2025-10-08 07:48:27.862073333 +0000 UTC m=+4450.991765944" Oct 08 07:48:33 crc kubenswrapper[4958]: I1008 07:48:33.806513 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:33 crc kubenswrapper[4958]: I1008 07:48:33.807393 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:33 crc kubenswrapper[4958]: I1008 
07:48:33.885721 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:33 crc kubenswrapper[4958]: I1008 07:48:33.961820 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:34 crc kubenswrapper[4958]: I1008 07:48:34.129399 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bl2jp"] Oct 08 07:48:35 crc kubenswrapper[4958]: I1008 07:48:35.903432 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bl2jp" podUID="3fa6f175-384f-4a41-8818-ef5ab80d6286" containerName="registry-server" containerID="cri-o://fc6fd5189253d3483b2e548e079fadfa0d6321c074bad450fef677a70c1c5d4e" gracePeriod=2 Oct 08 07:48:36 crc kubenswrapper[4958]: I1008 07:48:36.437281 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:36 crc kubenswrapper[4958]: I1008 07:48:36.589363 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fa6f175-384f-4a41-8818-ef5ab80d6286-utilities\") pod \"3fa6f175-384f-4a41-8818-ef5ab80d6286\" (UID: \"3fa6f175-384f-4a41-8818-ef5ab80d6286\") " Oct 08 07:48:36 crc kubenswrapper[4958]: I1008 07:48:36.589462 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fa6f175-384f-4a41-8818-ef5ab80d6286-catalog-content\") pod \"3fa6f175-384f-4a41-8818-ef5ab80d6286\" (UID: \"3fa6f175-384f-4a41-8818-ef5ab80d6286\") " Oct 08 07:48:36 crc kubenswrapper[4958]: I1008 07:48:36.589752 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s796z\" (UniqueName: 
\"kubernetes.io/projected/3fa6f175-384f-4a41-8818-ef5ab80d6286-kube-api-access-s796z\") pod \"3fa6f175-384f-4a41-8818-ef5ab80d6286\" (UID: \"3fa6f175-384f-4a41-8818-ef5ab80d6286\") " Oct 08 07:48:36 crc kubenswrapper[4958]: I1008 07:48:36.592093 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fa6f175-384f-4a41-8818-ef5ab80d6286-utilities" (OuterVolumeSpecName: "utilities") pod "3fa6f175-384f-4a41-8818-ef5ab80d6286" (UID: "3fa6f175-384f-4a41-8818-ef5ab80d6286"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:48:36 crc kubenswrapper[4958]: I1008 07:48:36.599223 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fa6f175-384f-4a41-8818-ef5ab80d6286-kube-api-access-s796z" (OuterVolumeSpecName: "kube-api-access-s796z") pod "3fa6f175-384f-4a41-8818-ef5ab80d6286" (UID: "3fa6f175-384f-4a41-8818-ef5ab80d6286"). InnerVolumeSpecName "kube-api-access-s796z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:48:36 crc kubenswrapper[4958]: I1008 07:48:36.692213 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s796z\" (UniqueName: \"kubernetes.io/projected/3fa6f175-384f-4a41-8818-ef5ab80d6286-kube-api-access-s796z\") on node \"crc\" DevicePath \"\"" Oct 08 07:48:36 crc kubenswrapper[4958]: I1008 07:48:36.692279 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fa6f175-384f-4a41-8818-ef5ab80d6286-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:48:36 crc kubenswrapper[4958]: I1008 07:48:36.919359 4958 generic.go:334] "Generic (PLEG): container finished" podID="3fa6f175-384f-4a41-8818-ef5ab80d6286" containerID="fc6fd5189253d3483b2e548e079fadfa0d6321c074bad450fef677a70c1c5d4e" exitCode=0 Oct 08 07:48:36 crc kubenswrapper[4958]: I1008 07:48:36.919425 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl2jp" event={"ID":"3fa6f175-384f-4a41-8818-ef5ab80d6286","Type":"ContainerDied","Data":"fc6fd5189253d3483b2e548e079fadfa0d6321c074bad450fef677a70c1c5d4e"} Oct 08 07:48:36 crc kubenswrapper[4958]: I1008 07:48:36.919472 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bl2jp" Oct 08 07:48:36 crc kubenswrapper[4958]: I1008 07:48:36.919502 4958 scope.go:117] "RemoveContainer" containerID="fc6fd5189253d3483b2e548e079fadfa0d6321c074bad450fef677a70c1c5d4e" Oct 08 07:48:36 crc kubenswrapper[4958]: I1008 07:48:36.919482 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl2jp" event={"ID":"3fa6f175-384f-4a41-8818-ef5ab80d6286","Type":"ContainerDied","Data":"0ea5a36db254c289aacb208a289ad235448e85b2e632148c64a3915f0ed0223f"} Oct 08 07:48:36 crc kubenswrapper[4958]: I1008 07:48:36.950598 4958 scope.go:117] "RemoveContainer" containerID="8e2621390c2a2d7069117ab88847a44d59d0a0d0b9a800d2fa049f11896da2cf" Oct 08 07:48:36 crc kubenswrapper[4958]: I1008 07:48:36.979399 4958 scope.go:117] "RemoveContainer" containerID="49426fb8af68b7283056c29443fc2aef7c8bb4cb2e6625a3763efda5323658f1" Oct 08 07:48:37 crc kubenswrapper[4958]: I1008 07:48:37.022558 4958 scope.go:117] "RemoveContainer" containerID="fc6fd5189253d3483b2e548e079fadfa0d6321c074bad450fef677a70c1c5d4e" Oct 08 07:48:37 crc kubenswrapper[4958]: E1008 07:48:37.023282 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc6fd5189253d3483b2e548e079fadfa0d6321c074bad450fef677a70c1c5d4e\": container with ID starting with fc6fd5189253d3483b2e548e079fadfa0d6321c074bad450fef677a70c1c5d4e not found: ID does not exist" containerID="fc6fd5189253d3483b2e548e079fadfa0d6321c074bad450fef677a70c1c5d4e" Oct 08 07:48:37 crc kubenswrapper[4958]: I1008 07:48:37.023352 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc6fd5189253d3483b2e548e079fadfa0d6321c074bad450fef677a70c1c5d4e"} err="failed to get container status \"fc6fd5189253d3483b2e548e079fadfa0d6321c074bad450fef677a70c1c5d4e\": rpc error: code = NotFound desc = could not find container 
\"fc6fd5189253d3483b2e548e079fadfa0d6321c074bad450fef677a70c1c5d4e\": container with ID starting with fc6fd5189253d3483b2e548e079fadfa0d6321c074bad450fef677a70c1c5d4e not found: ID does not exist" Oct 08 07:48:37 crc kubenswrapper[4958]: I1008 07:48:37.023399 4958 scope.go:117] "RemoveContainer" containerID="8e2621390c2a2d7069117ab88847a44d59d0a0d0b9a800d2fa049f11896da2cf" Oct 08 07:48:37 crc kubenswrapper[4958]: E1008 07:48:37.024318 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e2621390c2a2d7069117ab88847a44d59d0a0d0b9a800d2fa049f11896da2cf\": container with ID starting with 8e2621390c2a2d7069117ab88847a44d59d0a0d0b9a800d2fa049f11896da2cf not found: ID does not exist" containerID="8e2621390c2a2d7069117ab88847a44d59d0a0d0b9a800d2fa049f11896da2cf" Oct 08 07:48:37 crc kubenswrapper[4958]: I1008 07:48:37.024413 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e2621390c2a2d7069117ab88847a44d59d0a0d0b9a800d2fa049f11896da2cf"} err="failed to get container status \"8e2621390c2a2d7069117ab88847a44d59d0a0d0b9a800d2fa049f11896da2cf\": rpc error: code = NotFound desc = could not find container \"8e2621390c2a2d7069117ab88847a44d59d0a0d0b9a800d2fa049f11896da2cf\": container with ID starting with 8e2621390c2a2d7069117ab88847a44d59d0a0d0b9a800d2fa049f11896da2cf not found: ID does not exist" Oct 08 07:48:37 crc kubenswrapper[4958]: I1008 07:48:37.024488 4958 scope.go:117] "RemoveContainer" containerID="49426fb8af68b7283056c29443fc2aef7c8bb4cb2e6625a3763efda5323658f1" Oct 08 07:48:37 crc kubenswrapper[4958]: E1008 07:48:37.025041 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49426fb8af68b7283056c29443fc2aef7c8bb4cb2e6625a3763efda5323658f1\": container with ID starting with 49426fb8af68b7283056c29443fc2aef7c8bb4cb2e6625a3763efda5323658f1 not found: ID does not exist" 
containerID="49426fb8af68b7283056c29443fc2aef7c8bb4cb2e6625a3763efda5323658f1" Oct 08 07:48:37 crc kubenswrapper[4958]: I1008 07:48:37.025111 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49426fb8af68b7283056c29443fc2aef7c8bb4cb2e6625a3763efda5323658f1"} err="failed to get container status \"49426fb8af68b7283056c29443fc2aef7c8bb4cb2e6625a3763efda5323658f1\": rpc error: code = NotFound desc = could not find container \"49426fb8af68b7283056c29443fc2aef7c8bb4cb2e6625a3763efda5323658f1\": container with ID starting with 49426fb8af68b7283056c29443fc2aef7c8bb4cb2e6625a3763efda5323658f1 not found: ID does not exist" Oct 08 07:48:38 crc kubenswrapper[4958]: I1008 07:48:38.027873 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fa6f175-384f-4a41-8818-ef5ab80d6286-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3fa6f175-384f-4a41-8818-ef5ab80d6286" (UID: "3fa6f175-384f-4a41-8818-ef5ab80d6286"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:48:38 crc kubenswrapper[4958]: I1008 07:48:38.114225 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fa6f175-384f-4a41-8818-ef5ab80d6286-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:48:38 crc kubenswrapper[4958]: I1008 07:48:38.160256 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bl2jp"] Oct 08 07:48:38 crc kubenswrapper[4958]: I1008 07:48:38.169606 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bl2jp"] Oct 08 07:48:39 crc kubenswrapper[4958]: I1008 07:48:39.588740 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fa6f175-384f-4a41-8818-ef5ab80d6286" path="/var/lib/kubelet/pods/3fa6f175-384f-4a41-8818-ef5ab80d6286/volumes" Oct 08 07:49:36 crc kubenswrapper[4958]: I1008 07:49:36.844833 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:49:36 crc kubenswrapper[4958]: I1008 07:49:36.845603 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:50:06 crc kubenswrapper[4958]: I1008 07:50:06.845444 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Oct 08 07:50:06 crc kubenswrapper[4958]: I1008 07:50:06.846585 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:50:36 crc kubenswrapper[4958]: I1008 07:50:36.845218 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:50:36 crc kubenswrapper[4958]: I1008 07:50:36.845890 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:50:36 crc kubenswrapper[4958]: I1008 07:50:36.845993 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 07:50:36 crc kubenswrapper[4958]: I1008 07:50:36.846783 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 07:50:36 crc kubenswrapper[4958]: I1008 07:50:36.846876 4958 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" gracePeriod=600 Oct 08 07:50:36 crc kubenswrapper[4958]: E1008 07:50:36.981088 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:50:37 crc kubenswrapper[4958]: I1008 07:50:37.100273 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" exitCode=0 Oct 08 07:50:37 crc kubenswrapper[4958]: I1008 07:50:37.100339 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d"} Oct 08 07:50:37 crc kubenswrapper[4958]: I1008 07:50:37.100397 4958 scope.go:117] "RemoveContainer" containerID="23c15f33914bd182ac7e9fb57f2b0d3cd41960e52d2608529d546bb75834d06c" Oct 08 07:50:37 crc kubenswrapper[4958]: I1008 07:50:37.101771 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:50:37 crc kubenswrapper[4958]: E1008 07:50:37.102214 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:50:50 crc kubenswrapper[4958]: I1008 07:50:50.576557 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:50:50 crc kubenswrapper[4958]: E1008 07:50:50.577586 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:51:05 crc kubenswrapper[4958]: I1008 07:51:05.576510 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:51:05 crc kubenswrapper[4958]: E1008 07:51:05.577858 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.513069 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k7q6g"] Oct 08 07:51:07 crc kubenswrapper[4958]: E1008 07:51:07.513612 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa6f175-384f-4a41-8818-ef5ab80d6286" containerName="extract-utilities" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 
07:51:07.513642 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa6f175-384f-4a41-8818-ef5ab80d6286" containerName="extract-utilities" Oct 08 07:51:07 crc kubenswrapper[4958]: E1008 07:51:07.513688 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa6f175-384f-4a41-8818-ef5ab80d6286" containerName="extract-content" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.513705 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa6f175-384f-4a41-8818-ef5ab80d6286" containerName="extract-content" Oct 08 07:51:07 crc kubenswrapper[4958]: E1008 07:51:07.513731 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa6f175-384f-4a41-8818-ef5ab80d6286" containerName="registry-server" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.513774 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa6f175-384f-4a41-8818-ef5ab80d6286" containerName="registry-server" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.515493 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fa6f175-384f-4a41-8818-ef5ab80d6286" containerName="registry-server" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.518172 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.525917 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7q6g"] Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.625135 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172ae105-f13c-4da5-aab4-bdd753e5c253-utilities\") pod \"redhat-marketplace-k7q6g\" (UID: \"172ae105-f13c-4da5-aab4-bdd753e5c253\") " pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.625291 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172ae105-f13c-4da5-aab4-bdd753e5c253-catalog-content\") pod \"redhat-marketplace-k7q6g\" (UID: \"172ae105-f13c-4da5-aab4-bdd753e5c253\") " pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.625373 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpx8z\" (UniqueName: \"kubernetes.io/projected/172ae105-f13c-4da5-aab4-bdd753e5c253-kube-api-access-mpx8z\") pod \"redhat-marketplace-k7q6g\" (UID: \"172ae105-f13c-4da5-aab4-bdd753e5c253\") " pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.727002 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172ae105-f13c-4da5-aab4-bdd753e5c253-catalog-content\") pod \"redhat-marketplace-k7q6g\" (UID: \"172ae105-f13c-4da5-aab4-bdd753e5c253\") " pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.727082 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mpx8z\" (UniqueName: \"kubernetes.io/projected/172ae105-f13c-4da5-aab4-bdd753e5c253-kube-api-access-mpx8z\") pod \"redhat-marketplace-k7q6g\" (UID: \"172ae105-f13c-4da5-aab4-bdd753e5c253\") " pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.727144 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172ae105-f13c-4da5-aab4-bdd753e5c253-utilities\") pod \"redhat-marketplace-k7q6g\" (UID: \"172ae105-f13c-4da5-aab4-bdd753e5c253\") " pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.727502 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172ae105-f13c-4da5-aab4-bdd753e5c253-catalog-content\") pod \"redhat-marketplace-k7q6g\" (UID: \"172ae105-f13c-4da5-aab4-bdd753e5c253\") " pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.727595 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172ae105-f13c-4da5-aab4-bdd753e5c253-utilities\") pod \"redhat-marketplace-k7q6g\" (UID: \"172ae105-f13c-4da5-aab4-bdd753e5c253\") " pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.751396 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpx8z\" (UniqueName: \"kubernetes.io/projected/172ae105-f13c-4da5-aab4-bdd753e5c253-kube-api-access-mpx8z\") pod \"redhat-marketplace-k7q6g\" (UID: \"172ae105-f13c-4da5-aab4-bdd753e5c253\") " pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:07 crc kubenswrapper[4958]: I1008 07:51:07.846526 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:08 crc kubenswrapper[4958]: I1008 07:51:08.076988 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7q6g"] Oct 08 07:51:08 crc kubenswrapper[4958]: I1008 07:51:08.446386 4958 generic.go:334] "Generic (PLEG): container finished" podID="172ae105-f13c-4da5-aab4-bdd753e5c253" containerID="e972105f3395b9617a7b760fb7ecfa7d257e458eea549293689da2c517250770" exitCode=0 Oct 08 07:51:08 crc kubenswrapper[4958]: I1008 07:51:08.446831 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7q6g" event={"ID":"172ae105-f13c-4da5-aab4-bdd753e5c253","Type":"ContainerDied","Data":"e972105f3395b9617a7b760fb7ecfa7d257e458eea549293689da2c517250770"} Oct 08 07:51:08 crc kubenswrapper[4958]: I1008 07:51:08.447133 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7q6g" event={"ID":"172ae105-f13c-4da5-aab4-bdd753e5c253","Type":"ContainerStarted","Data":"6a24940d5c67d06bda33a228cc8cd8f4b7752806bd427bc98ace675e380c2866"} Oct 08 07:51:10 crc kubenswrapper[4958]: I1008 07:51:10.478789 4958 generic.go:334] "Generic (PLEG): container finished" podID="172ae105-f13c-4da5-aab4-bdd753e5c253" containerID="c0503ee308620860c1efac6af5ec5eb29629385f3bbc431083c605ba718d7e0c" exitCode=0 Oct 08 07:51:10 crc kubenswrapper[4958]: I1008 07:51:10.478895 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7q6g" event={"ID":"172ae105-f13c-4da5-aab4-bdd753e5c253","Type":"ContainerDied","Data":"c0503ee308620860c1efac6af5ec5eb29629385f3bbc431083c605ba718d7e0c"} Oct 08 07:51:11 crc kubenswrapper[4958]: I1008 07:51:11.499306 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7q6g" 
event={"ID":"172ae105-f13c-4da5-aab4-bdd753e5c253","Type":"ContainerStarted","Data":"ce4ab0e42887737861ff66c47d0622a9adfd85a87a4675e6a49183936d3e2051"} Oct 08 07:51:11 crc kubenswrapper[4958]: I1008 07:51:11.523936 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k7q6g" podStartSLOduration=2.074124891 podStartE2EDuration="4.523907664s" podCreationTimestamp="2025-10-08 07:51:07 +0000 UTC" firstStartedPulling="2025-10-08 07:51:08.449403415 +0000 UTC m=+4611.579096056" lastFinishedPulling="2025-10-08 07:51:10.899186188 +0000 UTC m=+4614.028878829" observedRunningTime="2025-10-08 07:51:11.516404841 +0000 UTC m=+4614.646097482" watchObservedRunningTime="2025-10-08 07:51:11.523907664 +0000 UTC m=+4614.653600295" Oct 08 07:51:17 crc kubenswrapper[4958]: I1008 07:51:17.582345 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:51:17 crc kubenswrapper[4958]: E1008 07:51:17.583353 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:51:17 crc kubenswrapper[4958]: I1008 07:51:17.846902 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:17 crc kubenswrapper[4958]: I1008 07:51:17.847010 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:17 crc kubenswrapper[4958]: I1008 07:51:17.905832 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:18 crc kubenswrapper[4958]: I1008 07:51:18.630131 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:18 crc kubenswrapper[4958]: I1008 07:51:18.698672 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7q6g"] Oct 08 07:51:20 crc kubenswrapper[4958]: I1008 07:51:20.599769 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k7q6g" podUID="172ae105-f13c-4da5-aab4-bdd753e5c253" containerName="registry-server" containerID="cri-o://ce4ab0e42887737861ff66c47d0622a9adfd85a87a4675e6a49183936d3e2051" gracePeriod=2 Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.065096 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.142665 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172ae105-f13c-4da5-aab4-bdd753e5c253-catalog-content\") pod \"172ae105-f13c-4da5-aab4-bdd753e5c253\" (UID: \"172ae105-f13c-4da5-aab4-bdd753e5c253\") " Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.142705 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpx8z\" (UniqueName: \"kubernetes.io/projected/172ae105-f13c-4da5-aab4-bdd753e5c253-kube-api-access-mpx8z\") pod \"172ae105-f13c-4da5-aab4-bdd753e5c253\" (UID: \"172ae105-f13c-4da5-aab4-bdd753e5c253\") " Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.142769 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172ae105-f13c-4da5-aab4-bdd753e5c253-utilities\") pod 
\"172ae105-f13c-4da5-aab4-bdd753e5c253\" (UID: \"172ae105-f13c-4da5-aab4-bdd753e5c253\") " Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.143963 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/172ae105-f13c-4da5-aab4-bdd753e5c253-utilities" (OuterVolumeSpecName: "utilities") pod "172ae105-f13c-4da5-aab4-bdd753e5c253" (UID: "172ae105-f13c-4da5-aab4-bdd753e5c253"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.152231 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/172ae105-f13c-4da5-aab4-bdd753e5c253-kube-api-access-mpx8z" (OuterVolumeSpecName: "kube-api-access-mpx8z") pod "172ae105-f13c-4da5-aab4-bdd753e5c253" (UID: "172ae105-f13c-4da5-aab4-bdd753e5c253"). InnerVolumeSpecName "kube-api-access-mpx8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.161069 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/172ae105-f13c-4da5-aab4-bdd753e5c253-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "172ae105-f13c-4da5-aab4-bdd753e5c253" (UID: "172ae105-f13c-4da5-aab4-bdd753e5c253"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.244154 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172ae105-f13c-4da5-aab4-bdd753e5c253-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.244223 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpx8z\" (UniqueName: \"kubernetes.io/projected/172ae105-f13c-4da5-aab4-bdd753e5c253-kube-api-access-mpx8z\") on node \"crc\" DevicePath \"\"" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.244244 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172ae105-f13c-4da5-aab4-bdd753e5c253-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.608778 4958 generic.go:334] "Generic (PLEG): container finished" podID="172ae105-f13c-4da5-aab4-bdd753e5c253" containerID="ce4ab0e42887737861ff66c47d0622a9adfd85a87a4675e6a49183936d3e2051" exitCode=0 Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.608822 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7q6g" event={"ID":"172ae105-f13c-4da5-aab4-bdd753e5c253","Type":"ContainerDied","Data":"ce4ab0e42887737861ff66c47d0622a9adfd85a87a4675e6a49183936d3e2051"} Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.609868 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7q6g" event={"ID":"172ae105-f13c-4da5-aab4-bdd753e5c253","Type":"ContainerDied","Data":"6a24940d5c67d06bda33a228cc8cd8f4b7752806bd427bc98ace675e380c2866"} Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.608891 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7q6g" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.609895 4958 scope.go:117] "RemoveContainer" containerID="ce4ab0e42887737861ff66c47d0622a9adfd85a87a4675e6a49183936d3e2051" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.628098 4958 scope.go:117] "RemoveContainer" containerID="c0503ee308620860c1efac6af5ec5eb29629385f3bbc431083c605ba718d7e0c" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.646979 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7q6g"] Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.647231 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7q6g"] Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.650456 4958 scope.go:117] "RemoveContainer" containerID="e972105f3395b9617a7b760fb7ecfa7d257e458eea549293689da2c517250770" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.669165 4958 scope.go:117] "RemoveContainer" containerID="ce4ab0e42887737861ff66c47d0622a9adfd85a87a4675e6a49183936d3e2051" Oct 08 07:51:21 crc kubenswrapper[4958]: E1008 07:51:21.669596 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce4ab0e42887737861ff66c47d0622a9adfd85a87a4675e6a49183936d3e2051\": container with ID starting with ce4ab0e42887737861ff66c47d0622a9adfd85a87a4675e6a49183936d3e2051 not found: ID does not exist" containerID="ce4ab0e42887737861ff66c47d0622a9adfd85a87a4675e6a49183936d3e2051" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.669690 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce4ab0e42887737861ff66c47d0622a9adfd85a87a4675e6a49183936d3e2051"} err="failed to get container status \"ce4ab0e42887737861ff66c47d0622a9adfd85a87a4675e6a49183936d3e2051\": rpc error: code = NotFound desc = could not find container 
\"ce4ab0e42887737861ff66c47d0622a9adfd85a87a4675e6a49183936d3e2051\": container with ID starting with ce4ab0e42887737861ff66c47d0622a9adfd85a87a4675e6a49183936d3e2051 not found: ID does not exist" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.669810 4958 scope.go:117] "RemoveContainer" containerID="c0503ee308620860c1efac6af5ec5eb29629385f3bbc431083c605ba718d7e0c" Oct 08 07:51:21 crc kubenswrapper[4958]: E1008 07:51:21.670202 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0503ee308620860c1efac6af5ec5eb29629385f3bbc431083c605ba718d7e0c\": container with ID starting with c0503ee308620860c1efac6af5ec5eb29629385f3bbc431083c605ba718d7e0c not found: ID does not exist" containerID="c0503ee308620860c1efac6af5ec5eb29629385f3bbc431083c605ba718d7e0c" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.670243 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0503ee308620860c1efac6af5ec5eb29629385f3bbc431083c605ba718d7e0c"} err="failed to get container status \"c0503ee308620860c1efac6af5ec5eb29629385f3bbc431083c605ba718d7e0c\": rpc error: code = NotFound desc = could not find container \"c0503ee308620860c1efac6af5ec5eb29629385f3bbc431083c605ba718d7e0c\": container with ID starting with c0503ee308620860c1efac6af5ec5eb29629385f3bbc431083c605ba718d7e0c not found: ID does not exist" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.670257 4958 scope.go:117] "RemoveContainer" containerID="e972105f3395b9617a7b760fb7ecfa7d257e458eea549293689da2c517250770" Oct 08 07:51:21 crc kubenswrapper[4958]: E1008 07:51:21.670491 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e972105f3395b9617a7b760fb7ecfa7d257e458eea549293689da2c517250770\": container with ID starting with e972105f3395b9617a7b760fb7ecfa7d257e458eea549293689da2c517250770 not found: ID does not exist" 
containerID="e972105f3395b9617a7b760fb7ecfa7d257e458eea549293689da2c517250770" Oct 08 07:51:21 crc kubenswrapper[4958]: I1008 07:51:21.670533 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e972105f3395b9617a7b760fb7ecfa7d257e458eea549293689da2c517250770"} err="failed to get container status \"e972105f3395b9617a7b760fb7ecfa7d257e458eea549293689da2c517250770\": rpc error: code = NotFound desc = could not find container \"e972105f3395b9617a7b760fb7ecfa7d257e458eea549293689da2c517250770\": container with ID starting with e972105f3395b9617a7b760fb7ecfa7d257e458eea549293689da2c517250770 not found: ID does not exist" Oct 08 07:51:23 crc kubenswrapper[4958]: I1008 07:51:23.596273 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="172ae105-f13c-4da5-aab4-bdd753e5c253" path="/var/lib/kubelet/pods/172ae105-f13c-4da5-aab4-bdd753e5c253/volumes" Oct 08 07:51:29 crc kubenswrapper[4958]: I1008 07:51:29.577769 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:51:29 crc kubenswrapper[4958]: E1008 07:51:29.578728 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:51:42 crc kubenswrapper[4958]: I1008 07:51:42.577303 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:51:42 crc kubenswrapper[4958]: E1008 07:51:42.578414 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:51:56 crc kubenswrapper[4958]: I1008 07:51:56.578251 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:51:56 crc kubenswrapper[4958]: E1008 07:51:56.579347 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:52:11 crc kubenswrapper[4958]: I1008 07:52:11.576221 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:52:11 crc kubenswrapper[4958]: E1008 07:52:11.577107 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:52:22 crc kubenswrapper[4958]: I1008 07:52:22.577038 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:52:22 crc kubenswrapper[4958]: E1008 07:52:22.578002 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:52:35 crc kubenswrapper[4958]: I1008 07:52:35.576845 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:52:35 crc kubenswrapper[4958]: E1008 07:52:35.577831 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:52:38 crc kubenswrapper[4958]: I1008 07:52:38.844098 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ppwfd"] Oct 08 07:52:38 crc kubenswrapper[4958]: E1008 07:52:38.844855 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172ae105-f13c-4da5-aab4-bdd753e5c253" containerName="registry-server" Oct 08 07:52:38 crc kubenswrapper[4958]: I1008 07:52:38.844876 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="172ae105-f13c-4da5-aab4-bdd753e5c253" containerName="registry-server" Oct 08 07:52:38 crc kubenswrapper[4958]: E1008 07:52:38.844910 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172ae105-f13c-4da5-aab4-bdd753e5c253" containerName="extract-content" Oct 08 07:52:38 crc kubenswrapper[4958]: I1008 07:52:38.844922 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="172ae105-f13c-4da5-aab4-bdd753e5c253" containerName="extract-content" Oct 08 07:52:38 crc kubenswrapper[4958]: E1008 07:52:38.844979 4958 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172ae105-f13c-4da5-aab4-bdd753e5c253" containerName="extract-utilities" Oct 08 07:52:38 crc kubenswrapper[4958]: I1008 07:52:38.844993 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="172ae105-f13c-4da5-aab4-bdd753e5c253" containerName="extract-utilities" Oct 08 07:52:38 crc kubenswrapper[4958]: I1008 07:52:38.845305 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="172ae105-f13c-4da5-aab4-bdd753e5c253" containerName="registry-server" Oct 08 07:52:38 crc kubenswrapper[4958]: I1008 07:52:38.847066 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:38 crc kubenswrapper[4958]: I1008 07:52:38.863836 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ppwfd"] Oct 08 07:52:38 crc kubenswrapper[4958]: I1008 07:52:38.960529 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e115c496-bb50-42f6-900b-003401be41d6-utilities\") pod \"certified-operators-ppwfd\" (UID: \"e115c496-bb50-42f6-900b-003401be41d6\") " pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:38 crc kubenswrapper[4958]: I1008 07:52:38.960593 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxjhx\" (UniqueName: \"kubernetes.io/projected/e115c496-bb50-42f6-900b-003401be41d6-kube-api-access-rxjhx\") pod \"certified-operators-ppwfd\" (UID: \"e115c496-bb50-42f6-900b-003401be41d6\") " pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:38 crc kubenswrapper[4958]: I1008 07:52:38.960630 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e115c496-bb50-42f6-900b-003401be41d6-catalog-content\") pod \"certified-operators-ppwfd\" (UID: \"e115c496-bb50-42f6-900b-003401be41d6\") " pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:39 crc kubenswrapper[4958]: I1008 07:52:39.062807 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e115c496-bb50-42f6-900b-003401be41d6-utilities\") pod \"certified-operators-ppwfd\" (UID: \"e115c496-bb50-42f6-900b-003401be41d6\") " pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:39 crc kubenswrapper[4958]: I1008 07:52:39.062903 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxjhx\" (UniqueName: \"kubernetes.io/projected/e115c496-bb50-42f6-900b-003401be41d6-kube-api-access-rxjhx\") pod \"certified-operators-ppwfd\" (UID: \"e115c496-bb50-42f6-900b-003401be41d6\") " pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:39 crc kubenswrapper[4958]: I1008 07:52:39.062994 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e115c496-bb50-42f6-900b-003401be41d6-catalog-content\") pod \"certified-operators-ppwfd\" (UID: \"e115c496-bb50-42f6-900b-003401be41d6\") " pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:39 crc kubenswrapper[4958]: I1008 07:52:39.063938 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e115c496-bb50-42f6-900b-003401be41d6-utilities\") pod \"certified-operators-ppwfd\" (UID: \"e115c496-bb50-42f6-900b-003401be41d6\") " pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:39 crc kubenswrapper[4958]: I1008 07:52:39.064241 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e115c496-bb50-42f6-900b-003401be41d6-catalog-content\") pod \"certified-operators-ppwfd\" (UID: \"e115c496-bb50-42f6-900b-003401be41d6\") " pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:39 crc kubenswrapper[4958]: I1008 07:52:39.091159 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxjhx\" (UniqueName: \"kubernetes.io/projected/e115c496-bb50-42f6-900b-003401be41d6-kube-api-access-rxjhx\") pod \"certified-operators-ppwfd\" (UID: \"e115c496-bb50-42f6-900b-003401be41d6\") " pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:39 crc kubenswrapper[4958]: I1008 07:52:39.180184 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:39 crc kubenswrapper[4958]: I1008 07:52:39.650659 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ppwfd"] Oct 08 07:52:40 crc kubenswrapper[4958]: I1008 07:52:40.371764 4958 generic.go:334] "Generic (PLEG): container finished" podID="e115c496-bb50-42f6-900b-003401be41d6" containerID="d4be6c32afebc6a501ae630b07cb320c1e5b2d492b4bab3ec18505ba436723e0" exitCode=0 Oct 08 07:52:40 crc kubenswrapper[4958]: I1008 07:52:40.371861 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppwfd" event={"ID":"e115c496-bb50-42f6-900b-003401be41d6","Type":"ContainerDied","Data":"d4be6c32afebc6a501ae630b07cb320c1e5b2d492b4bab3ec18505ba436723e0"} Oct 08 07:52:40 crc kubenswrapper[4958]: I1008 07:52:40.372025 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppwfd" event={"ID":"e115c496-bb50-42f6-900b-003401be41d6","Type":"ContainerStarted","Data":"60f9864917e8c5b9ddad01e2584c869b5f9ca6350e100a9b028ea9c1c71ce197"} Oct 08 07:52:41 crc kubenswrapper[4958]: I1008 07:52:41.384200 4958 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-ppwfd" event={"ID":"e115c496-bb50-42f6-900b-003401be41d6","Type":"ContainerStarted","Data":"f26a6e50e77c0b68f94e94f829f2a73d3d741e1cfa357a3c531aa03f4c552183"} Oct 08 07:52:42 crc kubenswrapper[4958]: I1008 07:52:42.407370 4958 generic.go:334] "Generic (PLEG): container finished" podID="e115c496-bb50-42f6-900b-003401be41d6" containerID="f26a6e50e77c0b68f94e94f829f2a73d3d741e1cfa357a3c531aa03f4c552183" exitCode=0 Oct 08 07:52:42 crc kubenswrapper[4958]: I1008 07:52:42.407433 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppwfd" event={"ID":"e115c496-bb50-42f6-900b-003401be41d6","Type":"ContainerDied","Data":"f26a6e50e77c0b68f94e94f829f2a73d3d741e1cfa357a3c531aa03f4c552183"} Oct 08 07:52:43 crc kubenswrapper[4958]: I1008 07:52:43.418201 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppwfd" event={"ID":"e115c496-bb50-42f6-900b-003401be41d6","Type":"ContainerStarted","Data":"75fb0432209b8ae432f8e04c9434b581a91335b9e519fd41f9431e56791c8b89"} Oct 08 07:52:43 crc kubenswrapper[4958]: I1008 07:52:43.444577 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ppwfd" podStartSLOduration=2.983124451 podStartE2EDuration="5.444556609s" podCreationTimestamp="2025-10-08 07:52:38 +0000 UTC" firstStartedPulling="2025-10-08 07:52:40.373281408 +0000 UTC m=+4703.502974009" lastFinishedPulling="2025-10-08 07:52:42.834713536 +0000 UTC m=+4705.964406167" observedRunningTime="2025-10-08 07:52:43.44202079 +0000 UTC m=+4706.571713441" watchObservedRunningTime="2025-10-08 07:52:43.444556609 +0000 UTC m=+4706.574249230" Oct 08 07:52:49 crc kubenswrapper[4958]: I1008 07:52:49.181458 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:49 crc kubenswrapper[4958]: I1008 07:52:49.182302 
4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:49 crc kubenswrapper[4958]: I1008 07:52:49.258624 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:49 crc kubenswrapper[4958]: I1008 07:52:49.550144 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:49 crc kubenswrapper[4958]: I1008 07:52:49.616908 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ppwfd"] Oct 08 07:52:50 crc kubenswrapper[4958]: I1008 07:52:50.576762 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:52:50 crc kubenswrapper[4958]: E1008 07:52:50.577060 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:52:51 crc kubenswrapper[4958]: I1008 07:52:51.491719 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ppwfd" podUID="e115c496-bb50-42f6-900b-003401be41d6" containerName="registry-server" containerID="cri-o://75fb0432209b8ae432f8e04c9434b581a91335b9e519fd41f9431e56791c8b89" gracePeriod=2 Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.411029 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.500799 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e115c496-bb50-42f6-900b-003401be41d6-catalog-content\") pod \"e115c496-bb50-42f6-900b-003401be41d6\" (UID: \"e115c496-bb50-42f6-900b-003401be41d6\") " Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.500871 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxjhx\" (UniqueName: \"kubernetes.io/projected/e115c496-bb50-42f6-900b-003401be41d6-kube-api-access-rxjhx\") pod \"e115c496-bb50-42f6-900b-003401be41d6\" (UID: \"e115c496-bb50-42f6-900b-003401be41d6\") " Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.501000 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e115c496-bb50-42f6-900b-003401be41d6-utilities\") pod \"e115c496-bb50-42f6-900b-003401be41d6\" (UID: \"e115c496-bb50-42f6-900b-003401be41d6\") " Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.502673 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e115c496-bb50-42f6-900b-003401be41d6-utilities" (OuterVolumeSpecName: "utilities") pod "e115c496-bb50-42f6-900b-003401be41d6" (UID: "e115c496-bb50-42f6-900b-003401be41d6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.504547 4958 generic.go:334] "Generic (PLEG): container finished" podID="e115c496-bb50-42f6-900b-003401be41d6" containerID="75fb0432209b8ae432f8e04c9434b581a91335b9e519fd41f9431e56791c8b89" exitCode=0 Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.504600 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppwfd" event={"ID":"e115c496-bb50-42f6-900b-003401be41d6","Type":"ContainerDied","Data":"75fb0432209b8ae432f8e04c9434b581a91335b9e519fd41f9431e56791c8b89"} Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.504639 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppwfd" event={"ID":"e115c496-bb50-42f6-900b-003401be41d6","Type":"ContainerDied","Data":"60f9864917e8c5b9ddad01e2584c869b5f9ca6350e100a9b028ea9c1c71ce197"} Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.504654 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ppwfd" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.504668 4958 scope.go:117] "RemoveContainer" containerID="75fb0432209b8ae432f8e04c9434b581a91335b9e519fd41f9431e56791c8b89" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.514262 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e115c496-bb50-42f6-900b-003401be41d6-kube-api-access-rxjhx" (OuterVolumeSpecName: "kube-api-access-rxjhx") pod "e115c496-bb50-42f6-900b-003401be41d6" (UID: "e115c496-bb50-42f6-900b-003401be41d6"). InnerVolumeSpecName "kube-api-access-rxjhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.571168 4958 scope.go:117] "RemoveContainer" containerID="f26a6e50e77c0b68f94e94f829f2a73d3d741e1cfa357a3c531aa03f4c552183" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.584792 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e115c496-bb50-42f6-900b-003401be41d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e115c496-bb50-42f6-900b-003401be41d6" (UID: "e115c496-bb50-42f6-900b-003401be41d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.597019 4958 scope.go:117] "RemoveContainer" containerID="d4be6c32afebc6a501ae630b07cb320c1e5b2d492b4bab3ec18505ba436723e0" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.603424 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e115c496-bb50-42f6-900b-003401be41d6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.603470 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxjhx\" (UniqueName: \"kubernetes.io/projected/e115c496-bb50-42f6-900b-003401be41d6-kube-api-access-rxjhx\") on node \"crc\" DevicePath \"\"" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.603485 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e115c496-bb50-42f6-900b-003401be41d6-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.631429 4958 scope.go:117] "RemoveContainer" containerID="75fb0432209b8ae432f8e04c9434b581a91335b9e519fd41f9431e56791c8b89" Oct 08 07:52:52 crc kubenswrapper[4958]: E1008 07:52:52.632405 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"75fb0432209b8ae432f8e04c9434b581a91335b9e519fd41f9431e56791c8b89\": container with ID starting with 75fb0432209b8ae432f8e04c9434b581a91335b9e519fd41f9431e56791c8b89 not found: ID does not exist" containerID="75fb0432209b8ae432f8e04c9434b581a91335b9e519fd41f9431e56791c8b89" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.632439 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75fb0432209b8ae432f8e04c9434b581a91335b9e519fd41f9431e56791c8b89"} err="failed to get container status \"75fb0432209b8ae432f8e04c9434b581a91335b9e519fd41f9431e56791c8b89\": rpc error: code = NotFound desc = could not find container \"75fb0432209b8ae432f8e04c9434b581a91335b9e519fd41f9431e56791c8b89\": container with ID starting with 75fb0432209b8ae432f8e04c9434b581a91335b9e519fd41f9431e56791c8b89 not found: ID does not exist" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.632485 4958 scope.go:117] "RemoveContainer" containerID="f26a6e50e77c0b68f94e94f829f2a73d3d741e1cfa357a3c531aa03f4c552183" Oct 08 07:52:52 crc kubenswrapper[4958]: E1008 07:52:52.633133 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f26a6e50e77c0b68f94e94f829f2a73d3d741e1cfa357a3c531aa03f4c552183\": container with ID starting with f26a6e50e77c0b68f94e94f829f2a73d3d741e1cfa357a3c531aa03f4c552183 not found: ID does not exist" containerID="f26a6e50e77c0b68f94e94f829f2a73d3d741e1cfa357a3c531aa03f4c552183" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.633213 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f26a6e50e77c0b68f94e94f829f2a73d3d741e1cfa357a3c531aa03f4c552183"} err="failed to get container status \"f26a6e50e77c0b68f94e94f829f2a73d3d741e1cfa357a3c531aa03f4c552183\": rpc error: code = NotFound desc = could not find container 
\"f26a6e50e77c0b68f94e94f829f2a73d3d741e1cfa357a3c531aa03f4c552183\": container with ID starting with f26a6e50e77c0b68f94e94f829f2a73d3d741e1cfa357a3c531aa03f4c552183 not found: ID does not exist" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.633265 4958 scope.go:117] "RemoveContainer" containerID="d4be6c32afebc6a501ae630b07cb320c1e5b2d492b4bab3ec18505ba436723e0" Oct 08 07:52:52 crc kubenswrapper[4958]: E1008 07:52:52.633728 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4be6c32afebc6a501ae630b07cb320c1e5b2d492b4bab3ec18505ba436723e0\": container with ID starting with d4be6c32afebc6a501ae630b07cb320c1e5b2d492b4bab3ec18505ba436723e0 not found: ID does not exist" containerID="d4be6c32afebc6a501ae630b07cb320c1e5b2d492b4bab3ec18505ba436723e0" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.633779 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4be6c32afebc6a501ae630b07cb320c1e5b2d492b4bab3ec18505ba436723e0"} err="failed to get container status \"d4be6c32afebc6a501ae630b07cb320c1e5b2d492b4bab3ec18505ba436723e0\": rpc error: code = NotFound desc = could not find container \"d4be6c32afebc6a501ae630b07cb320c1e5b2d492b4bab3ec18505ba436723e0\": container with ID starting with d4be6c32afebc6a501ae630b07cb320c1e5b2d492b4bab3ec18505ba436723e0 not found: ID does not exist" Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.863214 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ppwfd"] Oct 08 07:52:52 crc kubenswrapper[4958]: I1008 07:52:52.873902 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ppwfd"] Oct 08 07:52:53 crc kubenswrapper[4958]: I1008 07:52:53.594237 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e115c496-bb50-42f6-900b-003401be41d6" 
path="/var/lib/kubelet/pods/e115c496-bb50-42f6-900b-003401be41d6/volumes" Oct 08 07:53:01 crc kubenswrapper[4958]: I1008 07:53:01.577085 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:53:01 crc kubenswrapper[4958]: E1008 07:53:01.577699 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:53:15 crc kubenswrapper[4958]: I1008 07:53:15.578004 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:53:15 crc kubenswrapper[4958]: E1008 07:53:15.579256 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:53:29 crc kubenswrapper[4958]: I1008 07:53:29.576503 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:53:29 crc kubenswrapper[4958]: E1008 07:53:29.577603 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:53:41 crc kubenswrapper[4958]: I1008 07:53:41.577441 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:53:41 crc kubenswrapper[4958]: E1008 07:53:41.578440 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:53:56 crc kubenswrapper[4958]: I1008 07:53:56.577592 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:53:56 crc kubenswrapper[4958]: E1008 07:53:56.578580 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:54:09 crc kubenswrapper[4958]: I1008 07:54:09.577452 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:54:09 crc kubenswrapper[4958]: E1008 07:54:09.578394 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:54:21 crc kubenswrapper[4958]: I1008 07:54:21.576136 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:54:21 crc kubenswrapper[4958]: E1008 07:54:21.576780 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:54:35 crc kubenswrapper[4958]: I1008 07:54:35.591774 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:54:35 crc kubenswrapper[4958]: E1008 07:54:35.595565 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:54:49 crc kubenswrapper[4958]: I1008 07:54:49.577113 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:54:49 crc kubenswrapper[4958]: E1008 07:54:49.578133 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.546334 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-hctrv"] Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.558635 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-hctrv"] Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.593676 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd93696c-c727-4b22-8426-cc909f7f64e5" path="/var/lib/kubelet/pods/bd93696c-c727-4b22-8426-cc909f7f64e5/volumes" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.683831 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-vltn8"] Oct 08 07:54:55 crc kubenswrapper[4958]: E1008 07:54:55.684325 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e115c496-bb50-42f6-900b-003401be41d6" containerName="registry-server" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.684358 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e115c496-bb50-42f6-900b-003401be41d6" containerName="registry-server" Oct 08 07:54:55 crc kubenswrapper[4958]: E1008 07:54:55.684386 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e115c496-bb50-42f6-900b-003401be41d6" containerName="extract-utilities" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.684399 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e115c496-bb50-42f6-900b-003401be41d6" containerName="extract-utilities" Oct 08 07:54:55 crc kubenswrapper[4958]: E1008 07:54:55.684439 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e115c496-bb50-42f6-900b-003401be41d6" containerName="extract-content" Oct 08 07:54:55 crc 
kubenswrapper[4958]: I1008 07:54:55.684451 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e115c496-bb50-42f6-900b-003401be41d6" containerName="extract-content" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.684717 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e115c496-bb50-42f6-900b-003401be41d6" containerName="registry-server" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.685538 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-vltn8" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.688250 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.688736 4958 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-vlvfh" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.690244 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.690583 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.710019 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-vltn8"] Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.862302 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/83eacc79-d3e2-410c-b430-edde049c13a2-node-mnt\") pod \"crc-storage-crc-vltn8\" (UID: \"83eacc79-d3e2-410c-b430-edde049c13a2\") " pod="crc-storage/crc-storage-crc-vltn8" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.862365 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmpb8\" (UniqueName: 
\"kubernetes.io/projected/83eacc79-d3e2-410c-b430-edde049c13a2-kube-api-access-fmpb8\") pod \"crc-storage-crc-vltn8\" (UID: \"83eacc79-d3e2-410c-b430-edde049c13a2\") " pod="crc-storage/crc-storage-crc-vltn8" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.862599 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/83eacc79-d3e2-410c-b430-edde049c13a2-crc-storage\") pod \"crc-storage-crc-vltn8\" (UID: \"83eacc79-d3e2-410c-b430-edde049c13a2\") " pod="crc-storage/crc-storage-crc-vltn8" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.964166 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/83eacc79-d3e2-410c-b430-edde049c13a2-crc-storage\") pod \"crc-storage-crc-vltn8\" (UID: \"83eacc79-d3e2-410c-b430-edde049c13a2\") " pod="crc-storage/crc-storage-crc-vltn8" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.964248 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/83eacc79-d3e2-410c-b430-edde049c13a2-node-mnt\") pod \"crc-storage-crc-vltn8\" (UID: \"83eacc79-d3e2-410c-b430-edde049c13a2\") " pod="crc-storage/crc-storage-crc-vltn8" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.964284 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmpb8\" (UniqueName: \"kubernetes.io/projected/83eacc79-d3e2-410c-b430-edde049c13a2-kube-api-access-fmpb8\") pod \"crc-storage-crc-vltn8\" (UID: \"83eacc79-d3e2-410c-b430-edde049c13a2\") " pod="crc-storage/crc-storage-crc-vltn8" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.964674 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/83eacc79-d3e2-410c-b430-edde049c13a2-node-mnt\") pod \"crc-storage-crc-vltn8\" (UID: 
\"83eacc79-d3e2-410c-b430-edde049c13a2\") " pod="crc-storage/crc-storage-crc-vltn8" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.965520 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/83eacc79-d3e2-410c-b430-edde049c13a2-crc-storage\") pod \"crc-storage-crc-vltn8\" (UID: \"83eacc79-d3e2-410c-b430-edde049c13a2\") " pod="crc-storage/crc-storage-crc-vltn8" Oct 08 07:54:55 crc kubenswrapper[4958]: I1008 07:54:55.997705 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmpb8\" (UniqueName: \"kubernetes.io/projected/83eacc79-d3e2-410c-b430-edde049c13a2-kube-api-access-fmpb8\") pod \"crc-storage-crc-vltn8\" (UID: \"83eacc79-d3e2-410c-b430-edde049c13a2\") " pod="crc-storage/crc-storage-crc-vltn8" Oct 08 07:54:56 crc kubenswrapper[4958]: I1008 07:54:56.021641 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-vltn8" Oct 08 07:54:56 crc kubenswrapper[4958]: I1008 07:54:56.576339 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-vltn8"] Oct 08 07:54:56 crc kubenswrapper[4958]: I1008 07:54:56.586438 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 07:54:56 crc kubenswrapper[4958]: I1008 07:54:56.696092 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-vltn8" event={"ID":"83eacc79-d3e2-410c-b430-edde049c13a2","Type":"ContainerStarted","Data":"a6827db9bf4c1195355dd271ff08158f106b1eab62c59ae8906104d4a5f539e4"} Oct 08 07:54:57 crc kubenswrapper[4958]: I1008 07:54:57.707359 4958 generic.go:334] "Generic (PLEG): container finished" podID="83eacc79-d3e2-410c-b430-edde049c13a2" containerID="e42552095dcd292396d4e886243b22f8e7328f6699b9f9d30ac9c6dfa69e1b90" exitCode=0 Oct 08 07:54:57 crc kubenswrapper[4958]: I1008 07:54:57.707471 4958 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="crc-storage/crc-storage-crc-vltn8" event={"ID":"83eacc79-d3e2-410c-b430-edde049c13a2","Type":"ContainerDied","Data":"e42552095dcd292396d4e886243b22f8e7328f6699b9f9d30ac9c6dfa69e1b90"} Oct 08 07:54:59 crc kubenswrapper[4958]: I1008 07:54:59.104709 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-vltn8" Oct 08 07:54:59 crc kubenswrapper[4958]: I1008 07:54:59.231120 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmpb8\" (UniqueName: \"kubernetes.io/projected/83eacc79-d3e2-410c-b430-edde049c13a2-kube-api-access-fmpb8\") pod \"83eacc79-d3e2-410c-b430-edde049c13a2\" (UID: \"83eacc79-d3e2-410c-b430-edde049c13a2\") " Oct 08 07:54:59 crc kubenswrapper[4958]: I1008 07:54:59.231323 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/83eacc79-d3e2-410c-b430-edde049c13a2-crc-storage\") pod \"83eacc79-d3e2-410c-b430-edde049c13a2\" (UID: \"83eacc79-d3e2-410c-b430-edde049c13a2\") " Oct 08 07:54:59 crc kubenswrapper[4958]: I1008 07:54:59.231413 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/83eacc79-d3e2-410c-b430-edde049c13a2-node-mnt\") pod \"83eacc79-d3e2-410c-b430-edde049c13a2\" (UID: \"83eacc79-d3e2-410c-b430-edde049c13a2\") " Oct 08 07:54:59 crc kubenswrapper[4958]: I1008 07:54:59.231673 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83eacc79-d3e2-410c-b430-edde049c13a2-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "83eacc79-d3e2-410c-b430-edde049c13a2" (UID: "83eacc79-d3e2-410c-b430-edde049c13a2"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 07:54:59 crc kubenswrapper[4958]: I1008 07:54:59.231931 4958 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/83eacc79-d3e2-410c-b430-edde049c13a2-node-mnt\") on node \"crc\" DevicePath \"\"" Oct 08 07:54:59 crc kubenswrapper[4958]: I1008 07:54:59.240621 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83eacc79-d3e2-410c-b430-edde049c13a2-kube-api-access-fmpb8" (OuterVolumeSpecName: "kube-api-access-fmpb8") pod "83eacc79-d3e2-410c-b430-edde049c13a2" (UID: "83eacc79-d3e2-410c-b430-edde049c13a2"). InnerVolumeSpecName "kube-api-access-fmpb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:54:59 crc kubenswrapper[4958]: I1008 07:54:59.265079 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83eacc79-d3e2-410c-b430-edde049c13a2-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "83eacc79-d3e2-410c-b430-edde049c13a2" (UID: "83eacc79-d3e2-410c-b430-edde049c13a2"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:54:59 crc kubenswrapper[4958]: I1008 07:54:59.333682 4958 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/83eacc79-d3e2-410c-b430-edde049c13a2-crc-storage\") on node \"crc\" DevicePath \"\"" Oct 08 07:54:59 crc kubenswrapper[4958]: I1008 07:54:59.333767 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmpb8\" (UniqueName: \"kubernetes.io/projected/83eacc79-d3e2-410c-b430-edde049c13a2-kube-api-access-fmpb8\") on node \"crc\" DevicePath \"\"" Oct 08 07:54:59 crc kubenswrapper[4958]: I1008 07:54:59.730091 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-vltn8" event={"ID":"83eacc79-d3e2-410c-b430-edde049c13a2","Type":"ContainerDied","Data":"a6827db9bf4c1195355dd271ff08158f106b1eab62c59ae8906104d4a5f539e4"} Oct 08 07:54:59 crc kubenswrapper[4958]: I1008 07:54:59.730129 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-vltn8" Oct 08 07:54:59 crc kubenswrapper[4958]: I1008 07:54:59.730141 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6827db9bf4c1195355dd271ff08158f106b1eab62c59ae8906104d4a5f539e4" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.479448 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-vltn8"] Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.489857 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-vltn8"] Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.592684 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83eacc79-d3e2-410c-b430-edde049c13a2" path="/var/lib/kubelet/pods/83eacc79-d3e2-410c-b430-edde049c13a2/volumes" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.704718 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-d68gh"] Oct 08 07:55:01 crc kubenswrapper[4958]: E1008 07:55:01.705159 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83eacc79-d3e2-410c-b430-edde049c13a2" containerName="storage" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.705175 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="83eacc79-d3e2-410c-b430-edde049c13a2" containerName="storage" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.705514 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="83eacc79-d3e2-410c-b430-edde049c13a2" containerName="storage" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.706251 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-d68gh" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.713046 4958 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-vlvfh" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.713338 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.713754 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.714274 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.717745 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-d68gh"] Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.874818 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ea073f46-f176-4bf4-b425-0e1397a7f302-crc-storage\") pod \"crc-storage-crc-d68gh\" (UID: \"ea073f46-f176-4bf4-b425-0e1397a7f302\") " pod="crc-storage/crc-storage-crc-d68gh" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.874925 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgqd9\" (UniqueName: \"kubernetes.io/projected/ea073f46-f176-4bf4-b425-0e1397a7f302-kube-api-access-lgqd9\") pod \"crc-storage-crc-d68gh\" (UID: \"ea073f46-f176-4bf4-b425-0e1397a7f302\") " pod="crc-storage/crc-storage-crc-d68gh" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.875344 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ea073f46-f176-4bf4-b425-0e1397a7f302-node-mnt\") pod \"crc-storage-crc-d68gh\" (UID: 
\"ea073f46-f176-4bf4-b425-0e1397a7f302\") " pod="crc-storage/crc-storage-crc-d68gh" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.977314 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ea073f46-f176-4bf4-b425-0e1397a7f302-node-mnt\") pod \"crc-storage-crc-d68gh\" (UID: \"ea073f46-f176-4bf4-b425-0e1397a7f302\") " pod="crc-storage/crc-storage-crc-d68gh" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.977883 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ea073f46-f176-4bf4-b425-0e1397a7f302-node-mnt\") pod \"crc-storage-crc-d68gh\" (UID: \"ea073f46-f176-4bf4-b425-0e1397a7f302\") " pod="crc-storage/crc-storage-crc-d68gh" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.978215 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ea073f46-f176-4bf4-b425-0e1397a7f302-crc-storage\") pod \"crc-storage-crc-d68gh\" (UID: \"ea073f46-f176-4bf4-b425-0e1397a7f302\") " pod="crc-storage/crc-storage-crc-d68gh" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.978660 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgqd9\" (UniqueName: \"kubernetes.io/projected/ea073f46-f176-4bf4-b425-0e1397a7f302-kube-api-access-lgqd9\") pod \"crc-storage-crc-d68gh\" (UID: \"ea073f46-f176-4bf4-b425-0e1397a7f302\") " pod="crc-storage/crc-storage-crc-d68gh" Oct 08 07:55:01 crc kubenswrapper[4958]: I1008 07:55:01.979860 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ea073f46-f176-4bf4-b425-0e1397a7f302-crc-storage\") pod \"crc-storage-crc-d68gh\" (UID: \"ea073f46-f176-4bf4-b425-0e1397a7f302\") " pod="crc-storage/crc-storage-crc-d68gh" Oct 08 07:55:02 crc kubenswrapper[4958]: I1008 07:55:02.015211 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgqd9\" (UniqueName: \"kubernetes.io/projected/ea073f46-f176-4bf4-b425-0e1397a7f302-kube-api-access-lgqd9\") pod \"crc-storage-crc-d68gh\" (UID: \"ea073f46-f176-4bf4-b425-0e1397a7f302\") " pod="crc-storage/crc-storage-crc-d68gh" Oct 08 07:55:02 crc kubenswrapper[4958]: I1008 07:55:02.037612 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-d68gh" Oct 08 07:55:02 crc kubenswrapper[4958]: I1008 07:55:02.327104 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-d68gh"] Oct 08 07:55:02 crc kubenswrapper[4958]: W1008 07:55:02.337728 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea073f46_f176_4bf4_b425_0e1397a7f302.slice/crio-f84db862e49301e9660cc87a7cfc0f03e365d1c694127c796170efbdbd90ebc0 WatchSource:0}: Error finding container f84db862e49301e9660cc87a7cfc0f03e365d1c694127c796170efbdbd90ebc0: Status 404 returned error can't find the container with id f84db862e49301e9660cc87a7cfc0f03e365d1c694127c796170efbdbd90ebc0 Oct 08 07:55:02 crc kubenswrapper[4958]: I1008 07:55:02.758102 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-d68gh" event={"ID":"ea073f46-f176-4bf4-b425-0e1397a7f302","Type":"ContainerStarted","Data":"f84db862e49301e9660cc87a7cfc0f03e365d1c694127c796170efbdbd90ebc0"} Oct 08 07:55:03 crc kubenswrapper[4958]: I1008 07:55:03.576723 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:55:03 crc kubenswrapper[4958]: E1008 07:55:03.577738 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:55:03 crc kubenswrapper[4958]: I1008 07:55:03.770713 4958 generic.go:334] "Generic (PLEG): container finished" podID="ea073f46-f176-4bf4-b425-0e1397a7f302" containerID="f8da352c30ad5b7e5545c52ab094da7120a758c6887eb0668fdad9d80a430cd5" exitCode=0 Oct 08 07:55:03 crc kubenswrapper[4958]: I1008 07:55:03.770781 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-d68gh" event={"ID":"ea073f46-f176-4bf4-b425-0e1397a7f302","Type":"ContainerDied","Data":"f8da352c30ad5b7e5545c52ab094da7120a758c6887eb0668fdad9d80a430cd5"} Oct 08 07:55:05 crc kubenswrapper[4958]: I1008 07:55:05.245645 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-d68gh" Oct 08 07:55:05 crc kubenswrapper[4958]: I1008 07:55:05.436074 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgqd9\" (UniqueName: \"kubernetes.io/projected/ea073f46-f176-4bf4-b425-0e1397a7f302-kube-api-access-lgqd9\") pod \"ea073f46-f176-4bf4-b425-0e1397a7f302\" (UID: \"ea073f46-f176-4bf4-b425-0e1397a7f302\") " Oct 08 07:55:05 crc kubenswrapper[4958]: I1008 07:55:05.436151 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ea073f46-f176-4bf4-b425-0e1397a7f302-node-mnt\") pod \"ea073f46-f176-4bf4-b425-0e1397a7f302\" (UID: \"ea073f46-f176-4bf4-b425-0e1397a7f302\") " Oct 08 07:55:05 crc kubenswrapper[4958]: I1008 07:55:05.436334 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ea073f46-f176-4bf4-b425-0e1397a7f302-crc-storage\") pod \"ea073f46-f176-4bf4-b425-0e1397a7f302\" (UID: 
\"ea073f46-f176-4bf4-b425-0e1397a7f302\") " Oct 08 07:55:05 crc kubenswrapper[4958]: I1008 07:55:05.436485 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea073f46-f176-4bf4-b425-0e1397a7f302-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "ea073f46-f176-4bf4-b425-0e1397a7f302" (UID: "ea073f46-f176-4bf4-b425-0e1397a7f302"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 07:55:05 crc kubenswrapper[4958]: I1008 07:55:05.437375 4958 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ea073f46-f176-4bf4-b425-0e1397a7f302-node-mnt\") on node \"crc\" DevicePath \"\"" Oct 08 07:55:05 crc kubenswrapper[4958]: I1008 07:55:05.447182 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea073f46-f176-4bf4-b425-0e1397a7f302-kube-api-access-lgqd9" (OuterVolumeSpecName: "kube-api-access-lgqd9") pod "ea073f46-f176-4bf4-b425-0e1397a7f302" (UID: "ea073f46-f176-4bf4-b425-0e1397a7f302"). InnerVolumeSpecName "kube-api-access-lgqd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:55:05 crc kubenswrapper[4958]: I1008 07:55:05.469536 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea073f46-f176-4bf4-b425-0e1397a7f302-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "ea073f46-f176-4bf4-b425-0e1397a7f302" (UID: "ea073f46-f176-4bf4-b425-0e1397a7f302"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:55:05 crc kubenswrapper[4958]: I1008 07:55:05.537930 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgqd9\" (UniqueName: \"kubernetes.io/projected/ea073f46-f176-4bf4-b425-0e1397a7f302-kube-api-access-lgqd9\") on node \"crc\" DevicePath \"\"" Oct 08 07:55:05 crc kubenswrapper[4958]: I1008 07:55:05.537984 4958 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ea073f46-f176-4bf4-b425-0e1397a7f302-crc-storage\") on node \"crc\" DevicePath \"\"" Oct 08 07:55:05 crc kubenswrapper[4958]: I1008 07:55:05.790531 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-d68gh" event={"ID":"ea073f46-f176-4bf4-b425-0e1397a7f302","Type":"ContainerDied","Data":"f84db862e49301e9660cc87a7cfc0f03e365d1c694127c796170efbdbd90ebc0"} Oct 08 07:55:05 crc kubenswrapper[4958]: I1008 07:55:05.790594 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f84db862e49301e9660cc87a7cfc0f03e365d1c694127c796170efbdbd90ebc0" Oct 08 07:55:05 crc kubenswrapper[4958]: I1008 07:55:05.790650 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-d68gh" Oct 08 07:55:18 crc kubenswrapper[4958]: I1008 07:55:18.578002 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:55:18 crc kubenswrapper[4958]: E1008 07:55:18.578876 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:55:30 crc kubenswrapper[4958]: I1008 07:55:30.576994 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:55:30 crc kubenswrapper[4958]: E1008 07:55:30.577851 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 07:55:37 crc kubenswrapper[4958]: I1008 07:55:37.824887 4958 scope.go:117] "RemoveContainer" containerID="c7667f49a6afbad24024a04e97f9b5e09300f61798ae1191ec05d430b7a84e5c" Oct 08 07:55:44 crc kubenswrapper[4958]: I1008 07:55:44.577340 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:55:45 crc kubenswrapper[4958]: I1008 07:55:45.171287 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" 
event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"986a687436ce24cd14a49af6d4c159b01d6d8dafeb34a2b2beac3221184d187e"} Oct 08 07:56:57 crc kubenswrapper[4958]: I1008 07:56:57.413457 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9qddw"] Oct 08 07:56:57 crc kubenswrapper[4958]: E1008 07:56:57.414917 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea073f46-f176-4bf4-b425-0e1397a7f302" containerName="storage" Oct 08 07:56:57 crc kubenswrapper[4958]: I1008 07:56:57.414984 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea073f46-f176-4bf4-b425-0e1397a7f302" containerName="storage" Oct 08 07:56:57 crc kubenswrapper[4958]: I1008 07:56:57.415366 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea073f46-f176-4bf4-b425-0e1397a7f302" containerName="storage" Oct 08 07:56:57 crc kubenswrapper[4958]: I1008 07:56:57.417692 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:56:57 crc kubenswrapper[4958]: I1008 07:56:57.442764 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9qddw"] Oct 08 07:56:57 crc kubenswrapper[4958]: I1008 07:56:57.494501 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7354b45e-e337-400b-bfb1-86e543771171-catalog-content\") pod \"community-operators-9qddw\" (UID: \"7354b45e-e337-400b-bfb1-86e543771171\") " pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:56:57 crc kubenswrapper[4958]: I1008 07:56:57.494550 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7354b45e-e337-400b-bfb1-86e543771171-utilities\") pod \"community-operators-9qddw\" (UID: \"7354b45e-e337-400b-bfb1-86e543771171\") " pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:56:57 crc kubenswrapper[4958]: I1008 07:56:57.494657 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bblfj\" (UniqueName: \"kubernetes.io/projected/7354b45e-e337-400b-bfb1-86e543771171-kube-api-access-bblfj\") pod \"community-operators-9qddw\" (UID: \"7354b45e-e337-400b-bfb1-86e543771171\") " pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:56:57 crc kubenswrapper[4958]: I1008 07:56:57.596242 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7354b45e-e337-400b-bfb1-86e543771171-catalog-content\") pod \"community-operators-9qddw\" (UID: \"7354b45e-e337-400b-bfb1-86e543771171\") " pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:56:57 crc kubenswrapper[4958]: I1008 07:56:57.596322 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7354b45e-e337-400b-bfb1-86e543771171-utilities\") pod \"community-operators-9qddw\" (UID: \"7354b45e-e337-400b-bfb1-86e543771171\") " pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:56:57 crc kubenswrapper[4958]: I1008 07:56:57.596518 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bblfj\" (UniqueName: \"kubernetes.io/projected/7354b45e-e337-400b-bfb1-86e543771171-kube-api-access-bblfj\") pod \"community-operators-9qddw\" (UID: \"7354b45e-e337-400b-bfb1-86e543771171\") " pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:56:57 crc kubenswrapper[4958]: I1008 07:56:57.596689 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7354b45e-e337-400b-bfb1-86e543771171-catalog-content\") pod \"community-operators-9qddw\" (UID: \"7354b45e-e337-400b-bfb1-86e543771171\") " pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:56:57 crc kubenswrapper[4958]: I1008 07:56:57.597444 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7354b45e-e337-400b-bfb1-86e543771171-utilities\") pod \"community-operators-9qddw\" (UID: \"7354b45e-e337-400b-bfb1-86e543771171\") " pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:56:57 crc kubenswrapper[4958]: I1008 07:56:57.617408 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bblfj\" (UniqueName: \"kubernetes.io/projected/7354b45e-e337-400b-bfb1-86e543771171-kube-api-access-bblfj\") pod \"community-operators-9qddw\" (UID: \"7354b45e-e337-400b-bfb1-86e543771171\") " pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:56:57 crc kubenswrapper[4958]: I1008 07:56:57.756908 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:56:58 crc kubenswrapper[4958]: I1008 07:56:58.283359 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9qddw"] Oct 08 07:56:58 crc kubenswrapper[4958]: W1008 07:56:58.285533 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7354b45e_e337_400b_bfb1_86e543771171.slice/crio-7d95d14a4a38bf28e8a4262d621225ea957c1d73e308aa5da6a3dec8cad86757 WatchSource:0}: Error finding container 7d95d14a4a38bf28e8a4262d621225ea957c1d73e308aa5da6a3dec8cad86757: Status 404 returned error can't find the container with id 7d95d14a4a38bf28e8a4262d621225ea957c1d73e308aa5da6a3dec8cad86757 Oct 08 07:56:58 crc kubenswrapper[4958]: I1008 07:56:58.924493 4958 generic.go:334] "Generic (PLEG): container finished" podID="7354b45e-e337-400b-bfb1-86e543771171" containerID="a8aba42cd24ff6c56ccddda7fd5806e5e841c5e2d909119d029c3b8ac17d73c3" exitCode=0 Oct 08 07:56:58 crc kubenswrapper[4958]: I1008 07:56:58.924573 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qddw" event={"ID":"7354b45e-e337-400b-bfb1-86e543771171","Type":"ContainerDied","Data":"a8aba42cd24ff6c56ccddda7fd5806e5e841c5e2d909119d029c3b8ac17d73c3"} Oct 08 07:56:58 crc kubenswrapper[4958]: I1008 07:56:58.924832 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qddw" event={"ID":"7354b45e-e337-400b-bfb1-86e543771171","Type":"ContainerStarted","Data":"7d95d14a4a38bf28e8a4262d621225ea957c1d73e308aa5da6a3dec8cad86757"} Oct 08 07:57:00 crc kubenswrapper[4958]: I1008 07:57:00.948492 4958 generic.go:334] "Generic (PLEG): container finished" podID="7354b45e-e337-400b-bfb1-86e543771171" containerID="47ba1f7574b1e9652cc69c26adbc3d659d9b95189f630fee165b841111f7a93d" exitCode=0 Oct 08 07:57:00 crc kubenswrapper[4958]: I1008 
07:57:00.948586 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qddw" event={"ID":"7354b45e-e337-400b-bfb1-86e543771171","Type":"ContainerDied","Data":"47ba1f7574b1e9652cc69c26adbc3d659d9b95189f630fee165b841111f7a93d"} Oct 08 07:57:02 crc kubenswrapper[4958]: I1008 07:57:02.974661 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qddw" event={"ID":"7354b45e-e337-400b-bfb1-86e543771171","Type":"ContainerStarted","Data":"79dce25b996fd88c0cb4cce7018fd4d1192622c3d809cecd05f13ae919cc3072"} Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.564197 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9qddw" podStartSLOduration=8.113477919 podStartE2EDuration="10.564180317s" podCreationTimestamp="2025-10-08 07:56:57 +0000 UTC" firstStartedPulling="2025-10-08 07:56:58.926838023 +0000 UTC m=+4962.056530624" lastFinishedPulling="2025-10-08 07:57:01.377540391 +0000 UTC m=+4964.507233022" observedRunningTime="2025-10-08 07:57:03.003759744 +0000 UTC m=+4966.133452385" watchObservedRunningTime="2025-10-08 07:57:07.564180317 +0000 UTC m=+4970.693872918" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.568114 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-vqdhs"] Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.569338 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.571234 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.571431 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.572363 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.572840 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-rbqcq" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.573079 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.573085 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-678578b8df-ltnjh"] Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.574793 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-678578b8df-ltnjh" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.583262 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-678578b8df-ltnjh"] Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.597099 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-vqdhs"] Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.677442 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c993dc-331e-4b59-a366-721f33dde694-config\") pod \"dnsmasq-dns-678578b8df-ltnjh\" (UID: \"47c993dc-331e-4b59-a366-721f33dde694\") " pod="openstack/dnsmasq-dns-678578b8df-ltnjh" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.677493 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-config\") pod \"dnsmasq-dns-6b8f87f5c5-vqdhs\" (UID: \"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.677521 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-dns-svc\") pod \"dnsmasq-dns-6b8f87f5c5-vqdhs\" (UID: \"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.677573 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hppcg\" (UniqueName: \"kubernetes.io/projected/47c993dc-331e-4b59-a366-721f33dde694-kube-api-access-hppcg\") pod \"dnsmasq-dns-678578b8df-ltnjh\" (UID: \"47c993dc-331e-4b59-a366-721f33dde694\") " pod="openstack/dnsmasq-dns-678578b8df-ltnjh" 
Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.677630 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tf2j\" (UniqueName: \"kubernetes.io/projected/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-kube-api-access-5tf2j\") pod \"dnsmasq-dns-6b8f87f5c5-vqdhs\" (UID: \"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.758184 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.758229 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.779706 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-config\") pod \"dnsmasq-dns-6b8f87f5c5-vqdhs\" (UID: \"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.779760 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-dns-svc\") pod \"dnsmasq-dns-6b8f87f5c5-vqdhs\" (UID: \"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.779793 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hppcg\" (UniqueName: \"kubernetes.io/projected/47c993dc-331e-4b59-a366-721f33dde694-kube-api-access-hppcg\") pod \"dnsmasq-dns-678578b8df-ltnjh\" (UID: \"47c993dc-331e-4b59-a366-721f33dde694\") " pod="openstack/dnsmasq-dns-678578b8df-ltnjh" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 
07:57:07.779842 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tf2j\" (UniqueName: \"kubernetes.io/projected/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-kube-api-access-5tf2j\") pod \"dnsmasq-dns-6b8f87f5c5-vqdhs\" (UID: \"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.779906 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c993dc-331e-4b59-a366-721f33dde694-config\") pod \"dnsmasq-dns-678578b8df-ltnjh\" (UID: \"47c993dc-331e-4b59-a366-721f33dde694\") " pod="openstack/dnsmasq-dns-678578b8df-ltnjh" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.780702 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c993dc-331e-4b59-a366-721f33dde694-config\") pod \"dnsmasq-dns-678578b8df-ltnjh\" (UID: \"47c993dc-331e-4b59-a366-721f33dde694\") " pod="openstack/dnsmasq-dns-678578b8df-ltnjh" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.781430 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-config\") pod \"dnsmasq-dns-6b8f87f5c5-vqdhs\" (UID: \"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.782045 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-dns-svc\") pod \"dnsmasq-dns-6b8f87f5c5-vqdhs\" (UID: \"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.801885 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tf2j\" 
(UniqueName: \"kubernetes.io/projected/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-kube-api-access-5tf2j\") pod \"dnsmasq-dns-6b8f87f5c5-vqdhs\" (UID: \"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb\") " pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.817860 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hppcg\" (UniqueName: \"kubernetes.io/projected/47c993dc-331e-4b59-a366-721f33dde694-kube-api-access-hppcg\") pod \"dnsmasq-dns-678578b8df-ltnjh\" (UID: \"47c993dc-331e-4b59-a366-721f33dde694\") " pod="openstack/dnsmasq-dns-678578b8df-ltnjh" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.835033 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.895415 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.905662 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-678578b8df-ltnjh" Oct 08 07:57:07 crc kubenswrapper[4958]: I1008 07:57:07.978754 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-678578b8df-ltnjh"] Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.000086 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85f98b87f9-hdvkb"] Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.003496 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.008735 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f98b87f9-hdvkb"] Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.079843 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.092860 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8q7x\" (UniqueName: \"kubernetes.io/projected/b171b064-9ebd-41ad-b423-2a46649fee8a-kube-api-access-g8q7x\") pod \"dnsmasq-dns-85f98b87f9-hdvkb\" (UID: \"b171b064-9ebd-41ad-b423-2a46649fee8a\") " pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.092984 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b171b064-9ebd-41ad-b423-2a46649fee8a-dns-svc\") pod \"dnsmasq-dns-85f98b87f9-hdvkb\" (UID: \"b171b064-9ebd-41ad-b423-2a46649fee8a\") " pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.093064 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b171b064-9ebd-41ad-b423-2a46649fee8a-config\") pod \"dnsmasq-dns-85f98b87f9-hdvkb\" (UID: \"b171b064-9ebd-41ad-b423-2a46649fee8a\") " pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.190447 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9qddw"] Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.193894 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8q7x\" (UniqueName: 
\"kubernetes.io/projected/b171b064-9ebd-41ad-b423-2a46649fee8a-kube-api-access-g8q7x\") pod \"dnsmasq-dns-85f98b87f9-hdvkb\" (UID: \"b171b064-9ebd-41ad-b423-2a46649fee8a\") " pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.194028 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b171b064-9ebd-41ad-b423-2a46649fee8a-dns-svc\") pod \"dnsmasq-dns-85f98b87f9-hdvkb\" (UID: \"b171b064-9ebd-41ad-b423-2a46649fee8a\") " pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.194119 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b171b064-9ebd-41ad-b423-2a46649fee8a-config\") pod \"dnsmasq-dns-85f98b87f9-hdvkb\" (UID: \"b171b064-9ebd-41ad-b423-2a46649fee8a\") " pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.195026 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b171b064-9ebd-41ad-b423-2a46649fee8a-config\") pod \"dnsmasq-dns-85f98b87f9-hdvkb\" (UID: \"b171b064-9ebd-41ad-b423-2a46649fee8a\") " pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.195028 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b171b064-9ebd-41ad-b423-2a46649fee8a-dns-svc\") pod \"dnsmasq-dns-85f98b87f9-hdvkb\" (UID: \"b171b064-9ebd-41ad-b423-2a46649fee8a\") " pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.218038 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8q7x\" (UniqueName: \"kubernetes.io/projected/b171b064-9ebd-41ad-b423-2a46649fee8a-kube-api-access-g8q7x\") pod \"dnsmasq-dns-85f98b87f9-hdvkb\" 
(UID: \"b171b064-9ebd-41ad-b423-2a46649fee8a\") " pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.264613 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-vqdhs"] Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.285341 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-c4zq5"] Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.286452 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.299859 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-c4zq5"] Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.327667 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.396728 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-config\") pod \"dnsmasq-dns-67d9f7fb89-c4zq5\" (UID: \"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0\") " pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.396776 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-dns-svc\") pod \"dnsmasq-dns-67d9f7fb89-c4zq5\" (UID: \"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0\") " pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.396813 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hwbl\" (UniqueName: 
\"kubernetes.io/projected/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-kube-api-access-4hwbl\") pod \"dnsmasq-dns-67d9f7fb89-c4zq5\" (UID: \"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0\") " pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.465348 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-vqdhs"] Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.497723 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-config\") pod \"dnsmasq-dns-67d9f7fb89-c4zq5\" (UID: \"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0\") " pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.498071 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-dns-svc\") pod \"dnsmasq-dns-67d9f7fb89-c4zq5\" (UID: \"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0\") " pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.498134 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hwbl\" (UniqueName: \"kubernetes.io/projected/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-kube-api-access-4hwbl\") pod \"dnsmasq-dns-67d9f7fb89-c4zq5\" (UID: \"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0\") " pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.498514 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-config\") pod \"dnsmasq-dns-67d9f7fb89-c4zq5\" (UID: \"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0\") " pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.505324 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-dns-svc\") pod \"dnsmasq-dns-67d9f7fb89-c4zq5\" (UID: \"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0\") " pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.517887 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hwbl\" (UniqueName: \"kubernetes.io/projected/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-kube-api-access-4hwbl\") pod \"dnsmasq-dns-67d9f7fb89-c4zq5\" (UID: \"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0\") " pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.549265 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-678578b8df-ltnjh"] Oct 08 07:57:08 crc kubenswrapper[4958]: W1008 07:57:08.561131 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47c993dc_331e_4b59_a366_721f33dde694.slice/crio-0493435bc7b0705fff9d93ba5f356060c6e94bda3482ff47e4150ed956c3a3bf WatchSource:0}: Error finding container 0493435bc7b0705fff9d93ba5f356060c6e94bda3482ff47e4150ed956c3a3bf: Status 404 returned error can't find the container with id 0493435bc7b0705fff9d93ba5f356060c6e94bda3482ff47e4150ed956c3a3bf Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.617278 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" Oct 08 07:57:08 crc kubenswrapper[4958]: I1008 07:57:08.768133 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f98b87f9-hdvkb"] Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.081104 4958 generic.go:334] "Generic (PLEG): container finished" podID="f4a866f0-4b27-4aa1-9e34-2a0d45b25acb" containerID="5b38dc7ad86e39b3111bf33945547f9f5e78eabdbb502d5310b5abaca9c5b543" exitCode=0 Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.081159 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" event={"ID":"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb","Type":"ContainerDied","Data":"5b38dc7ad86e39b3111bf33945547f9f5e78eabdbb502d5310b5abaca9c5b543"} Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.081183 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" event={"ID":"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb","Type":"ContainerStarted","Data":"dd7f8b05d14662455f1f44b0a3e579f6afad0bddf3a30621ec11965ad07e1448"} Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.084896 4958 generic.go:334] "Generic (PLEG): container finished" podID="47c993dc-331e-4b59-a366-721f33dde694" containerID="30efc03e86ba26313caa4724fc223a0844f2f127d5f1bd6af56da4e9c5fe25c7" exitCode=0 Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.084940 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678578b8df-ltnjh" event={"ID":"47c993dc-331e-4b59-a366-721f33dde694","Type":"ContainerDied","Data":"30efc03e86ba26313caa4724fc223a0844f2f127d5f1bd6af56da4e9c5fe25c7"} Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.084974 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678578b8df-ltnjh" event={"ID":"47c993dc-331e-4b59-a366-721f33dde694","Type":"ContainerStarted","Data":"0493435bc7b0705fff9d93ba5f356060c6e94bda3482ff47e4150ed956c3a3bf"} 
Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.091178 4958 generic.go:334] "Generic (PLEG): container finished" podID="b171b064-9ebd-41ad-b423-2a46649fee8a" containerID="37b55c4fe1853e677d51d79d0ebee0d871566363a86d0acf5896039213f03868" exitCode=0 Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.092089 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" event={"ID":"b171b064-9ebd-41ad-b423-2a46649fee8a","Type":"ContainerDied","Data":"37b55c4fe1853e677d51d79d0ebee0d871566363a86d0acf5896039213f03868"} Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.092115 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" event={"ID":"b171b064-9ebd-41ad-b423-2a46649fee8a","Type":"ContainerStarted","Data":"53bf56844e51e2937cb160f83c4e1aa1df9f4f88ddea0a41232d86ec29105aef"} Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.112047 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-c4zq5"] Oct 08 07:57:09 crc kubenswrapper[4958]: W1008 07:57:09.121866 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ef104f2_a6b1_403e_83ad_fadf6c5eb0b0.slice/crio-51b128f9bc488cae4d72e4a550a3338cbf8696f5f5253a739a7f4249594c246f WatchSource:0}: Error finding container 51b128f9bc488cae4d72e4a550a3338cbf8696f5f5253a739a7f4249594c246f: Status 404 returned error can't find the container with id 51b128f9bc488cae4d72e4a550a3338cbf8696f5f5253a739a7f4249594c246f Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.189246 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.192907 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.195020 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.195222 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.196356 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.196552 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.199011 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4rnpb" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.199057 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.199227 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.207938 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.322432 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cwtm\" (UniqueName: \"kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-kube-api-access-8cwtm\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.322479 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f820f217-c21a-4181-948d-39ed69fa35f5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.322513 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.322529 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f820f217-c21a-4181-948d-39ed69fa35f5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.322553 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-config-data\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.322604 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.322645 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-tls\") 
pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.322676 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.322702 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.322763 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.322791 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: E1008 07:57:09.334359 4958 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Oct 08 07:57:09 crc kubenswrapper[4958]: rpc error: code = Unknown desc = container create failed: mount 
`/var/lib/kubelet/pods/b171b064-9ebd-41ad-b423-2a46649fee8a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 08 07:57:09 crc kubenswrapper[4958]: > podSandboxID="53bf56844e51e2937cb160f83c4e1aa1df9f4f88ddea0a41232d86ec29105aef" Oct 08 07:57:09 crc kubenswrapper[4958]: E1008 07:57:09.334514 4958 kuberuntime_manager.go:1274] "Unhandled Error" err=< Oct 08 07:57:09 crc kubenswrapper[4958]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g8q7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Por
t:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-85f98b87f9-hdvkb_openstack(b171b064-9ebd-41ad-b423-2a46649fee8a): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/b171b064-9ebd-41ad-b423-2a46649fee8a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 08 07:57:09 crc kubenswrapper[4958]: > logger="UnhandledError" Oct 08 07:57:09 crc kubenswrapper[4958]: E1008 07:57:09.335870 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/b171b064-9ebd-41ad-b423-2a46649fee8a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" podUID="b171b064-9ebd-41ad-b423-2a46649fee8a" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.367772 4958 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.409563 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 07:57:09 crc kubenswrapper[4958]: E1008 07:57:09.409905 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a866f0-4b27-4aa1-9e34-2a0d45b25acb" containerName="init" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.409917 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a866f0-4b27-4aa1-9e34-2a0d45b25acb" containerName="init" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.410092 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a866f0-4b27-4aa1-9e34-2a0d45b25acb" containerName="init" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.410970 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.414380 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.414598 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.414715 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.414823 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.416699 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qmq7h" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.416990 4958 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.417181 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.424281 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-config\") pod \"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb\" (UID: \"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb\") " Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.424387 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tf2j\" (UniqueName: \"kubernetes.io/projected/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-kube-api-access-5tf2j\") pod \"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb\" (UID: \"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb\") " Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.424498 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-dns-svc\") pod \"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb\" (UID: \"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb\") " Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.424742 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.424784 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\") pod \"rabbitmq-server-0\" (UID: 
\"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.424816 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.424871 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.424893 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.424964 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cwtm\" (UniqueName: \"kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-kube-api-access-8cwtm\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.424998 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f820f217-c21a-4181-948d-39ed69fa35f5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 
07:57:09.425036 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.425066 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f820f217-c21a-4181-948d-39ed69fa35f5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.425097 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-config-data\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.425130 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.425875 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.430787 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.431686 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.432442 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.436878 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.436976 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/838f9b06f8cd5713fa538049742b405736cab240358b8419d305ed933d790a4f/globalmount\"" pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.439083 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.439161 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-config-data\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.439837 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f820f217-c21a-4181-948d-39ed69fa35f5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.443653 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-kube-api-access-5tf2j" (OuterVolumeSpecName: "kube-api-access-5tf2j") pod "f4a866f0-4b27-4aa1-9e34-2a0d45b25acb" (UID: "f4a866f0-4b27-4aa1-9e34-2a0d45b25acb"). InnerVolumeSpecName "kube-api-access-5tf2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.444274 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.444515 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.451907 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f820f217-c21a-4181-948d-39ed69fa35f5-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.454622 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cwtm\" (UniqueName: \"kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-kube-api-access-8cwtm\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.458703 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4a866f0-4b27-4aa1-9e34-2a0d45b25acb" (UID: "f4a866f0-4b27-4aa1-9e34-2a0d45b25acb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.461656 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-678578b8df-ltnjh" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.490487 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-config" (OuterVolumeSpecName: "config") pod "f4a866f0-4b27-4aa1-9e34-2a0d45b25acb" (UID: "f4a866f0-4b27-4aa1-9e34-2a0d45b25acb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.495670 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\") pod \"rabbitmq-server-0\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.517498 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.527851 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hppcg\" (UniqueName: \"kubernetes.io/projected/47c993dc-331e-4b59-a366-721f33dde694-kube-api-access-hppcg\") pod \"47c993dc-331e-4b59-a366-721f33dde694\" (UID: \"47c993dc-331e-4b59-a366-721f33dde694\") " Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.528010 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c993dc-331e-4b59-a366-721f33dde694-config\") pod \"47c993dc-331e-4b59-a366-721f33dde694\" (UID: \"47c993dc-331e-4b59-a366-721f33dde694\") " Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.528140 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2902652a-2f4f-4747-a5e8-7a665519ac85-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.528167 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.528187 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2902652a-2f4f-4747-a5e8-7a665519ac85-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.528224 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.528240 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.528258 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.528276 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqgtf\" 
(UniqueName: \"kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-kube-api-access-xqgtf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.528298 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.528314 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.528330 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.528357 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.528405 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tf2j\" 
(UniqueName: \"kubernetes.io/projected/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-kube-api-access-5tf2j\") on node \"crc\" DevicePath \"\"" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.528417 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.528425 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb-config\") on node \"crc\" DevicePath \"\"" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.535097 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c993dc-331e-4b59-a366-721f33dde694-kube-api-access-hppcg" (OuterVolumeSpecName: "kube-api-access-hppcg") pod "47c993dc-331e-4b59-a366-721f33dde694" (UID: "47c993dc-331e-4b59-a366-721f33dde694"). InnerVolumeSpecName "kube-api-access-hppcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.544058 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47c993dc-331e-4b59-a366-721f33dde694-config" (OuterVolumeSpecName: "config") pod "47c993dc-331e-4b59-a366-721f33dde694" (UID: "47c993dc-331e-4b59-a366-721f33dde694"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.629413 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.631220 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.631280 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.631349 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2902652a-2f4f-4747-a5e8-7a665519ac85-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.631406 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 
07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.631435 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2902652a-2f4f-4747-a5e8-7a665519ac85-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.631504 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.631532 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.631557 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.631594 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqgtf\" (UniqueName: \"kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-kube-api-access-xqgtf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.631629 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.631709 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hppcg\" (UniqueName: \"kubernetes.io/projected/47c993dc-331e-4b59-a366-721f33dde694-kube-api-access-hppcg\") on node \"crc\" DevicePath \"\"" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.631725 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47c993dc-331e-4b59-a366-721f33dde694-config\") on node \"crc\" DevicePath \"\"" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.630740 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.633657 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.633686 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a60e785c8d95b4dad54350c00907fea31a95f5b52fcb2fa89a57b6dd2b51921c/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.633740 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.635737 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.635904 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2902652a-2f4f-4747-a5e8-7a665519ac85-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.636133 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.636751 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.637055 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2902652a-2f4f-4747-a5e8-7a665519ac85-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.637254 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.642681 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.646975 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqgtf\" (UniqueName: \"kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-kube-api-access-xqgtf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 
07:57:09.686176 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\") pod \"rabbitmq-cell1-server-0\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.740602 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 07:57:09 crc kubenswrapper[4958]: W1008 07:57:09.747533 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf820f217_c21a_4181_948d_39ed69fa35f5.slice/crio-e7c93c63ef92d7e06ceb45745f30a0a24aa10f367fb16b2ae3713b5a259a77ab WatchSource:0}: Error finding container e7c93c63ef92d7e06ceb45745f30a0a24aa10f367fb16b2ae3713b5a259a77ab: Status 404 returned error can't find the container with id e7c93c63ef92d7e06ceb45745f30a0a24aa10f367fb16b2ae3713b5a259a77ab Oct 08 07:57:09 crc kubenswrapper[4958]: I1008 07:57:09.763241 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.110100 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f820f217-c21a-4181-948d-39ed69fa35f5","Type":"ContainerStarted","Data":"e7c93c63ef92d7e06ceb45745f30a0a24aa10f367fb16b2ae3713b5a259a77ab"} Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.114308 4958 generic.go:334] "Generic (PLEG): container finished" podID="3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0" containerID="fed26a1798a2b75c10e314048e3273f386961b8eb7a9b0c32cc727ff676f01d4" exitCode=0 Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.114373 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" event={"ID":"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0","Type":"ContainerDied","Data":"fed26a1798a2b75c10e314048e3273f386961b8eb7a9b0c32cc727ff676f01d4"} Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.114390 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" event={"ID":"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0","Type":"ContainerStarted","Data":"51b128f9bc488cae4d72e4a550a3338cbf8696f5f5253a739a7f4249594c246f"} Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.117132 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.117174 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b8f87f5c5-vqdhs" event={"ID":"f4a866f0-4b27-4aa1-9e34-2a0d45b25acb","Type":"ContainerDied","Data":"dd7f8b05d14662455f1f44b0a3e579f6afad0bddf3a30621ec11965ad07e1448"} Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.117227 4958 scope.go:117] "RemoveContainer" containerID="5b38dc7ad86e39b3111bf33945547f9f5e78eabdbb502d5310b5abaca9c5b543" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.122399 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678578b8df-ltnjh" event={"ID":"47c993dc-331e-4b59-a366-721f33dde694","Type":"ContainerDied","Data":"0493435bc7b0705fff9d93ba5f356060c6e94bda3482ff47e4150ed956c3a3bf"} Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.122563 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-678578b8df-ltnjh" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.123272 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9qddw" podUID="7354b45e-e337-400b-bfb1-86e543771171" containerName="registry-server" containerID="cri-o://79dce25b996fd88c0cb4cce7018fd4d1192622c3d809cecd05f13ae919cc3072" gracePeriod=2 Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.254847 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-vqdhs"] Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.266228 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b8f87f5c5-vqdhs"] Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.287849 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.325819 4958 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-678578b8df-ltnjh"] Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.330487 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-678578b8df-ltnjh"] Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.332357 4958 scope.go:117] "RemoveContainer" containerID="30efc03e86ba26313caa4724fc223a0844f2f127d5f1bd6af56da4e9c5fe25c7" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.550184 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 08 07:57:10 crc kubenswrapper[4958]: E1008 07:57:10.550822 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c993dc-331e-4b59-a366-721f33dde694" containerName="init" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.550843 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c993dc-331e-4b59-a366-721f33dde694" containerName="init" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.551069 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="47c993dc-331e-4b59-a366-721f33dde694" containerName="init" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.552088 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.555518 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.558202 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.558450 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.558461 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.558721 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.558940 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-hntsh" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.562043 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.653916 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-kolla-config\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.653979 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7c406cbf-ea95-40a9-b540-bfe2c048a8d9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c406cbf-ea95-40a9-b540-bfe2c048a8d9\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " 
pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.654007 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.654288 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.654345 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-secrets\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.654455 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.654588 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dqcj\" (UniqueName: \"kubernetes.io/projected/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-kube-api-access-7dqcj\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 
07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.654660 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.654718 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-config-data-default\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.756235 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7c406cbf-ea95-40a9-b540-bfe2c048a8d9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c406cbf-ea95-40a9-b540-bfe2c048a8d9\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.756309 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.756334 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 
07:57:10.756358 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-secrets\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.756398 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.756451 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dqcj\" (UniqueName: \"kubernetes.io/projected/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-kube-api-access-7dqcj\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.756474 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.756498 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-config-data-default\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.756532 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-kolla-config\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.758686 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.758941 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-kolla-config\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.759808 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-config-data-default\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.762440 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.774598 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.774632 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7c406cbf-ea95-40a9-b540-bfe2c048a8d9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c406cbf-ea95-40a9-b540-bfe2c048a8d9\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/07e2f74747243d54c5cfce1989e2c89b735e6e9f257b70fe92ab90ae72305a72/globalmount\"" pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.832576 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-secrets\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.832865 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.833197 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:10 crc kubenswrapper[4958]: I1008 07:57:10.834403 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dqcj\" (UniqueName: \"kubernetes.io/projected/bf3d8da5-1056-41dd-b7d3-2c17a02dde4b-kube-api-access-7dqcj\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " 
pod="openstack/openstack-galera-0" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.068367 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7c406cbf-ea95-40a9-b540-bfe2c048a8d9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c406cbf-ea95-40a9-b540-bfe2c048a8d9\") pod \"openstack-galera-0\" (UID: \"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b\") " pod="openstack/openstack-galera-0" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.129306 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.131450 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" event={"ID":"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0","Type":"ContainerStarted","Data":"e78215025da7a1e24ba2990fb2377631c754f2046a820c517986d44693070882"} Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.131591 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.133587 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" event={"ID":"b171b064-9ebd-41ad-b423-2a46649fee8a","Type":"ContainerStarted","Data":"34b1e859672f499a879cb3c81fb27ce19a0458fc583a9980756cd837e7131f82"} Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.134258 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.144595 4958 generic.go:334] "Generic (PLEG): container finished" podID="7354b45e-e337-400b-bfb1-86e543771171" containerID="79dce25b996fd88c0cb4cce7018fd4d1192622c3d809cecd05f13ae919cc3072" exitCode=0 Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.144639 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-9qddw" event={"ID":"7354b45e-e337-400b-bfb1-86e543771171","Type":"ContainerDied","Data":"79dce25b996fd88c0cb4cce7018fd4d1192622c3d809cecd05f13ae919cc3072"} Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.144712 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qddw" event={"ID":"7354b45e-e337-400b-bfb1-86e543771171","Type":"ContainerDied","Data":"7d95d14a4a38bf28e8a4262d621225ea957c1d73e308aa5da6a3dec8cad86757"} Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.144745 4958 scope.go:117] "RemoveContainer" containerID="79dce25b996fd88c0cb4cce7018fd4d1192622c3d809cecd05f13ae919cc3072" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.144768 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9qddw" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.145848 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2902652a-2f4f-4747-a5e8-7a665519ac85","Type":"ContainerStarted","Data":"aef967874997d59da9ca3735a906882714f89851321fd866f9cb81b99287262e"} Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.147412 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f820f217-c21a-4181-948d-39ed69fa35f5","Type":"ContainerStarted","Data":"7bcbd668e62960a4b49a3da76016d53b382a502970f0f40990668b6d69c4548a"} Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.169929 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.173888 4958 scope.go:117] "RemoveContainer" containerID="47ba1f7574b1e9652cc69c26adbc3d659d9b95189f630fee165b841111f7a93d" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.194654 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" podStartSLOduration=4.194637278 podStartE2EDuration="4.194637278s" podCreationTimestamp="2025-10-08 07:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 07:57:11.18733191 +0000 UTC m=+4974.317024551" watchObservedRunningTime="2025-10-08 07:57:11.194637278 +0000 UTC m=+4974.324329879" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.215906 4958 scope.go:117] "RemoveContainer" containerID="a8aba42cd24ff6c56ccddda7fd5806e5e841c5e2d909119d029c3b8ac17d73c3" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.265267 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7354b45e-e337-400b-bfb1-86e543771171-catalog-content\") pod \"7354b45e-e337-400b-bfb1-86e543771171\" (UID: \"7354b45e-e337-400b-bfb1-86e543771171\") " Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.265348 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7354b45e-e337-400b-bfb1-86e543771171-utilities\") pod \"7354b45e-e337-400b-bfb1-86e543771171\" (UID: \"7354b45e-e337-400b-bfb1-86e543771171\") " Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.265419 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bblfj\" (UniqueName: \"kubernetes.io/projected/7354b45e-e337-400b-bfb1-86e543771171-kube-api-access-bblfj\") pod 
\"7354b45e-e337-400b-bfb1-86e543771171\" (UID: \"7354b45e-e337-400b-bfb1-86e543771171\") " Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.267459 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7354b45e-e337-400b-bfb1-86e543771171-utilities" (OuterVolumeSpecName: "utilities") pod "7354b45e-e337-400b-bfb1-86e543771171" (UID: "7354b45e-e337-400b-bfb1-86e543771171"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.308439 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7354b45e-e337-400b-bfb1-86e543771171-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7354b45e-e337-400b-bfb1-86e543771171" (UID: "7354b45e-e337-400b-bfb1-86e543771171"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.367934 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7354b45e-e337-400b-bfb1-86e543771171-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.368266 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7354b45e-e337-400b-bfb1-86e543771171-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.589568 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c993dc-331e-4b59-a366-721f33dde694" path="/var/lib/kubelet/pods/47c993dc-331e-4b59-a366-721f33dde694/volumes" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.590064 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a866f0-4b27-4aa1-9e34-2a0d45b25acb" path="/var/lib/kubelet/pods/f4a866f0-4b27-4aa1-9e34-2a0d45b25acb/volumes" Oct 08 
07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.682127 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" podStartSLOduration=3.682106257 podStartE2EDuration="3.682106257s" podCreationTimestamp="2025-10-08 07:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 07:57:11.235556946 +0000 UTC m=+4974.365249547" watchObservedRunningTime="2025-10-08 07:57:11.682106257 +0000 UTC m=+4974.811798868" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.733591 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7354b45e-e337-400b-bfb1-86e543771171-kube-api-access-bblfj" (OuterVolumeSpecName: "kube-api-access-bblfj") pod "7354b45e-e337-400b-bfb1-86e543771171" (UID: "7354b45e-e337-400b-bfb1-86e543771171"). InnerVolumeSpecName "kube-api-access-bblfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:57:11 crc kubenswrapper[4958]: I1008 07:57:11.777520 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bblfj\" (UniqueName: \"kubernetes.io/projected/7354b45e-e337-400b-bfb1-86e543771171-kube-api-access-bblfj\") on node \"crc\" DevicePath \"\"" Oct 08 07:57:11 crc kubenswrapper[4958]: W1008 07:57:11.838922 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf3d8da5_1056_41dd_b7d3_2c17a02dde4b.slice/crio-a786ec2c9b8691e77b21a8af18cfd318faaaef90b95d53922559f80e2c72e3f3 WatchSource:0}: Error finding container a786ec2c9b8691e77b21a8af18cfd318faaaef90b95d53922559f80e2c72e3f3: Status 404 returned error can't find the container with id a786ec2c9b8691e77b21a8af18cfd318faaaef90b95d53922559f80e2c72e3f3 Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.005901 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/openstack-galera-0"] Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.006059 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 08 07:57:12 crc kubenswrapper[4958]: E1008 07:57:12.006707 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7354b45e-e337-400b-bfb1-86e543771171" containerName="registry-server" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.006776 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7354b45e-e337-400b-bfb1-86e543771171" containerName="registry-server" Oct 08 07:57:12 crc kubenswrapper[4958]: E1008 07:57:12.006819 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7354b45e-e337-400b-bfb1-86e543771171" containerName="extract-utilities" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.006868 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7354b45e-e337-400b-bfb1-86e543771171" containerName="extract-utilities" Oct 08 07:57:12 crc kubenswrapper[4958]: E1008 07:57:12.006902 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7354b45e-e337-400b-bfb1-86e543771171" containerName="extract-content" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.006964 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7354b45e-e337-400b-bfb1-86e543771171" containerName="extract-content" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.007490 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7354b45e-e337-400b-bfb1-86e543771171" containerName="registry-server" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.009407 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.009537 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.013082 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-xm7dz" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.013930 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.014151 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.014274 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.022898 4958 scope.go:117] "RemoveContainer" containerID="79dce25b996fd88c0cb4cce7018fd4d1192622c3d809cecd05f13ae919cc3072" Oct 08 07:57:12 crc kubenswrapper[4958]: E1008 07:57:12.023526 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79dce25b996fd88c0cb4cce7018fd4d1192622c3d809cecd05f13ae919cc3072\": container with ID starting with 79dce25b996fd88c0cb4cce7018fd4d1192622c3d809cecd05f13ae919cc3072 not found: ID does not exist" containerID="79dce25b996fd88c0cb4cce7018fd4d1192622c3d809cecd05f13ae919cc3072" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.023580 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79dce25b996fd88c0cb4cce7018fd4d1192622c3d809cecd05f13ae919cc3072"} err="failed to get container status \"79dce25b996fd88c0cb4cce7018fd4d1192622c3d809cecd05f13ae919cc3072\": rpc error: code = NotFound desc = could not find container \"79dce25b996fd88c0cb4cce7018fd4d1192622c3d809cecd05f13ae919cc3072\": container with ID starting with 79dce25b996fd88c0cb4cce7018fd4d1192622c3d809cecd05f13ae919cc3072 not found: ID does 
not exist" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.023614 4958 scope.go:117] "RemoveContainer" containerID="47ba1f7574b1e9652cc69c26adbc3d659d9b95189f630fee165b841111f7a93d" Oct 08 07:57:12 crc kubenswrapper[4958]: E1008 07:57:12.024150 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47ba1f7574b1e9652cc69c26adbc3d659d9b95189f630fee165b841111f7a93d\": container with ID starting with 47ba1f7574b1e9652cc69c26adbc3d659d9b95189f630fee165b841111f7a93d not found: ID does not exist" containerID="47ba1f7574b1e9652cc69c26adbc3d659d9b95189f630fee165b841111f7a93d" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.024191 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47ba1f7574b1e9652cc69c26adbc3d659d9b95189f630fee165b841111f7a93d"} err="failed to get container status \"47ba1f7574b1e9652cc69c26adbc3d659d9b95189f630fee165b841111f7a93d\": rpc error: code = NotFound desc = could not find container \"47ba1f7574b1e9652cc69c26adbc3d659d9b95189f630fee165b841111f7a93d\": container with ID starting with 47ba1f7574b1e9652cc69c26adbc3d659d9b95189f630fee165b841111f7a93d not found: ID does not exist" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.024223 4958 scope.go:117] "RemoveContainer" containerID="a8aba42cd24ff6c56ccddda7fd5806e5e841c5e2d909119d029c3b8ac17d73c3" Oct 08 07:57:12 crc kubenswrapper[4958]: E1008 07:57:12.024675 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8aba42cd24ff6c56ccddda7fd5806e5e841c5e2d909119d029c3b8ac17d73c3\": container with ID starting with a8aba42cd24ff6c56ccddda7fd5806e5e841c5e2d909119d029c3b8ac17d73c3 not found: ID does not exist" containerID="a8aba42cd24ff6c56ccddda7fd5806e5e841c5e2d909119d029c3b8ac17d73c3" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.024714 4958 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8aba42cd24ff6c56ccddda7fd5806e5e841c5e2d909119d029c3b8ac17d73c3"} err="failed to get container status \"a8aba42cd24ff6c56ccddda7fd5806e5e841c5e2d909119d029c3b8ac17d73c3\": rpc error: code = NotFound desc = could not find container \"a8aba42cd24ff6c56ccddda7fd5806e5e841c5e2d909119d029c3b8ac17d73c3\": container with ID starting with a8aba42cd24ff6c56ccddda7fd5806e5e841c5e2d909119d029c3b8ac17d73c3 not found: ID does not exist" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.124852 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9qddw"] Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.133890 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9qddw"] Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.158887 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2902652a-2f4f-4747-a5e8-7a665519ac85","Type":"ContainerStarted","Data":"4733e0849369aa36125846f17c0b10c139327708de0851ac65044fcc72531e83"} Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.165551 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b","Type":"ContainerStarted","Data":"a786ec2c9b8691e77b21a8af18cfd318faaaef90b95d53922559f80e2c72e3f3"} Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.185903 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/eadd9987-ad00-49d2-be85-da766dc46f50-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.185991 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eadd9987-ad00-49d2-be85-da766dc46f50-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.186026 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eadd9987-ad00-49d2-be85-da766dc46f50-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.186057 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eadd9987-ad00-49d2-be85-da766dc46f50-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.186106 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eadd9987-ad00-49d2-be85-da766dc46f50-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.186135 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7fec5758-137d-4fb3-a4ad-348738649900\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fec5758-137d-4fb3-a4ad-348738649900\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.186163 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eadd9987-ad00-49d2-be85-da766dc46f50-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.186207 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8cs7\" (UniqueName: \"kubernetes.io/projected/eadd9987-ad00-49d2-be85-da766dc46f50-kube-api-access-d8cs7\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.186262 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eadd9987-ad00-49d2-be85-da766dc46f50-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.246339 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.247281 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.249751 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.250011 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-pdwww" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.254618 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.257796 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.287834 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eadd9987-ad00-49d2-be85-da766dc46f50-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.287940 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8cs7\" (UniqueName: \"kubernetes.io/projected/eadd9987-ad00-49d2-be85-da766dc46f50-kube-api-access-d8cs7\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.288080 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eadd9987-ad00-49d2-be85-da766dc46f50-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.288113 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/eadd9987-ad00-49d2-be85-da766dc46f50-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.288169 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eadd9987-ad00-49d2-be85-da766dc46f50-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.288201 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eadd9987-ad00-49d2-be85-da766dc46f50-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.288239 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eadd9987-ad00-49d2-be85-da766dc46f50-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.288287 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eadd9987-ad00-49d2-be85-da766dc46f50-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.288318 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7fec5758-137d-4fb3-a4ad-348738649900\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fec5758-137d-4fb3-a4ad-348738649900\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.289765 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eadd9987-ad00-49d2-be85-da766dc46f50-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.290677 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eadd9987-ad00-49d2-be85-da766dc46f50-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.290673 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eadd9987-ad00-49d2-be85-da766dc46f50-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.291714 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eadd9987-ad00-49d2-be85-da766dc46f50-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.293315 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.293353 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7fec5758-137d-4fb3-a4ad-348738649900\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fec5758-137d-4fb3-a4ad-348738649900\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a7c6934a0004b10738320ba3b342d037fd83c41d6b18c82d0e6749190e13a3f0/globalmount\"" pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.298634 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eadd9987-ad00-49d2-be85-da766dc46f50-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.298665 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eadd9987-ad00-49d2-be85-da766dc46f50-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.298810 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/eadd9987-ad00-49d2-be85-da766dc46f50-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.305552 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8cs7\" (UniqueName: \"kubernetes.io/projected/eadd9987-ad00-49d2-be85-da766dc46f50-kube-api-access-d8cs7\") pod \"openstack-cell1-galera-0\" 
(UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.329203 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7fec5758-137d-4fb3-a4ad-348738649900\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fec5758-137d-4fb3-a4ad-348738649900\") pod \"openstack-cell1-galera-0\" (UID: \"eadd9987-ad00-49d2-be85-da766dc46f50\") " pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.389796 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acdf3e24-be4c-4a93-b803-a80604b2b4c2-kolla-config\") pod \"memcached-0\" (UID: \"acdf3e24-be4c-4a93-b803-a80604b2b4c2\") " pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.389878 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-956mh\" (UniqueName: \"kubernetes.io/projected/acdf3e24-be4c-4a93-b803-a80604b2b4c2-kube-api-access-956mh\") pod \"memcached-0\" (UID: \"acdf3e24-be4c-4a93-b803-a80604b2b4c2\") " pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.389910 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdf3e24-be4c-4a93-b803-a80604b2b4c2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"acdf3e24-be4c-4a93-b803-a80604b2b4c2\") " pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.389984 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdf3e24-be4c-4a93-b803-a80604b2b4c2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"acdf3e24-be4c-4a93-b803-a80604b2b4c2\") " 
pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.390014 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acdf3e24-be4c-4a93-b803-a80604b2b4c2-config-data\") pod \"memcached-0\" (UID: \"acdf3e24-be4c-4a93-b803-a80604b2b4c2\") " pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.427283 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.491716 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-956mh\" (UniqueName: \"kubernetes.io/projected/acdf3e24-be4c-4a93-b803-a80604b2b4c2-kube-api-access-956mh\") pod \"memcached-0\" (UID: \"acdf3e24-be4c-4a93-b803-a80604b2b4c2\") " pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.492324 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdf3e24-be4c-4a93-b803-a80604b2b4c2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"acdf3e24-be4c-4a93-b803-a80604b2b4c2\") " pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.492969 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdf3e24-be4c-4a93-b803-a80604b2b4c2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"acdf3e24-be4c-4a93-b803-a80604b2b4c2\") " pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.493147 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acdf3e24-be4c-4a93-b803-a80604b2b4c2-config-data\") pod \"memcached-0\" (UID: \"acdf3e24-be4c-4a93-b803-a80604b2b4c2\") " 
pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.493254 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acdf3e24-be4c-4a93-b803-a80604b2b4c2-kolla-config\") pod \"memcached-0\" (UID: \"acdf3e24-be4c-4a93-b803-a80604b2b4c2\") " pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.494534 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acdf3e24-be4c-4a93-b803-a80604b2b4c2-kolla-config\") pod \"memcached-0\" (UID: \"acdf3e24-be4c-4a93-b803-a80604b2b4c2\") " pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.494735 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acdf3e24-be4c-4a93-b803-a80604b2b4c2-config-data\") pod \"memcached-0\" (UID: \"acdf3e24-be4c-4a93-b803-a80604b2b4c2\") " pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.495643 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdf3e24-be4c-4a93-b803-a80604b2b4c2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"acdf3e24-be4c-4a93-b803-a80604b2b4c2\") " pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.497406 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdf3e24-be4c-4a93-b803-a80604b2b4c2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"acdf3e24-be4c-4a93-b803-a80604b2b4c2\") " pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.512183 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-956mh\" (UniqueName: 
\"kubernetes.io/projected/acdf3e24-be4c-4a93-b803-a80604b2b4c2-kube-api-access-956mh\") pod \"memcached-0\" (UID: \"acdf3e24-be4c-4a93-b803-a80604b2b4c2\") " pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.563280 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 08 07:57:12 crc kubenswrapper[4958]: I1008 07:57:12.877809 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 08 07:57:13 crc kubenswrapper[4958]: I1008 07:57:13.007441 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 08 07:57:13 crc kubenswrapper[4958]: W1008 07:57:13.014569 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacdf3e24_be4c_4a93_b803_a80604b2b4c2.slice/crio-3af977c5895416cc116d8e03621b5fadab583665098bd659b2b5c1e1efbd303d WatchSource:0}: Error finding container 3af977c5895416cc116d8e03621b5fadab583665098bd659b2b5c1e1efbd303d: Status 404 returned error can't find the container with id 3af977c5895416cc116d8e03621b5fadab583665098bd659b2b5c1e1efbd303d Oct 08 07:57:13 crc kubenswrapper[4958]: I1008 07:57:13.184907 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"acdf3e24-be4c-4a93-b803-a80604b2b4c2","Type":"ContainerStarted","Data":"3af977c5895416cc116d8e03621b5fadab583665098bd659b2b5c1e1efbd303d"} Oct 08 07:57:13 crc kubenswrapper[4958]: I1008 07:57:13.186373 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"eadd9987-ad00-49d2-be85-da766dc46f50","Type":"ContainerStarted","Data":"d1cb4be512b7f3121f4ee0908ba693a031bfbd4aa8cb3361670d7719c6dfeb8c"} Oct 08 07:57:13 crc kubenswrapper[4958]: I1008 07:57:13.187899 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b","Type":"ContainerStarted","Data":"e71a25f08c3aef993f14e10cfec5d27d5211d2b79daae6b3b3405e4aac920b39"} Oct 08 07:57:13 crc kubenswrapper[4958]: I1008 07:57:13.593689 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7354b45e-e337-400b-bfb1-86e543771171" path="/var/lib/kubelet/pods/7354b45e-e337-400b-bfb1-86e543771171/volumes" Oct 08 07:57:14 crc kubenswrapper[4958]: I1008 07:57:14.199655 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"acdf3e24-be4c-4a93-b803-a80604b2b4c2","Type":"ContainerStarted","Data":"5f0872995ef62bd40f0a85620ec632f72a039645ea08534e519923a2d74cff10"} Oct 08 07:57:14 crc kubenswrapper[4958]: I1008 07:57:14.199817 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 08 07:57:14 crc kubenswrapper[4958]: I1008 07:57:14.201744 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"eadd9987-ad00-49d2-be85-da766dc46f50","Type":"ContainerStarted","Data":"75a34a5d6e7fd34822974b879dbc0dbcab5bffd4e2f960382a91431ee0db0dec"} Oct 08 07:57:14 crc kubenswrapper[4958]: I1008 07:57:14.227346 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.227313195 podStartE2EDuration="2.227313195s" podCreationTimestamp="2025-10-08 07:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 07:57:14.222794673 +0000 UTC m=+4977.352487284" watchObservedRunningTime="2025-10-08 07:57:14.227313195 +0000 UTC m=+4977.357005836" Oct 08 07:57:16 crc kubenswrapper[4958]: I1008 07:57:16.225778 4958 generic.go:334] "Generic (PLEG): container finished" podID="bf3d8da5-1056-41dd-b7d3-2c17a02dde4b" containerID="e71a25f08c3aef993f14e10cfec5d27d5211d2b79daae6b3b3405e4aac920b39" exitCode=0 Oct 08 07:57:16 crc 
kubenswrapper[4958]: I1008 07:57:16.225878 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b","Type":"ContainerDied","Data":"e71a25f08c3aef993f14e10cfec5d27d5211d2b79daae6b3b3405e4aac920b39"} Oct 08 07:57:17 crc kubenswrapper[4958]: I1008 07:57:17.240157 4958 generic.go:334] "Generic (PLEG): container finished" podID="eadd9987-ad00-49d2-be85-da766dc46f50" containerID="75a34a5d6e7fd34822974b879dbc0dbcab5bffd4e2f960382a91431ee0db0dec" exitCode=0 Oct 08 07:57:17 crc kubenswrapper[4958]: I1008 07:57:17.240256 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"eadd9987-ad00-49d2-be85-da766dc46f50","Type":"ContainerDied","Data":"75a34a5d6e7fd34822974b879dbc0dbcab5bffd4e2f960382a91431ee0db0dec"} Oct 08 07:57:17 crc kubenswrapper[4958]: I1008 07:57:17.243790 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf3d8da5-1056-41dd-b7d3-2c17a02dde4b","Type":"ContainerStarted","Data":"6708bf062f7604914697a41d76e1a6e1a4319fdca206b067a11a150335460692"} Oct 08 07:57:17 crc kubenswrapper[4958]: I1008 07:57:17.319565 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.319535963 podStartE2EDuration="8.319535963s" podCreationTimestamp="2025-10-08 07:57:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 07:57:17.31533816 +0000 UTC m=+4980.445030791" watchObservedRunningTime="2025-10-08 07:57:17.319535963 +0000 UTC m=+4980.449228604" Oct 08 07:57:18 crc kubenswrapper[4958]: I1008 07:57:18.256179 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"eadd9987-ad00-49d2-be85-da766dc46f50","Type":"ContainerStarted","Data":"1d05300ccbb5a166c56cf1d8bf31697297b0a556b975868768c2fd2cfd530648"} Oct 08 07:57:18 crc kubenswrapper[4958]: I1008 07:57:18.289805 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.289773814 podStartE2EDuration="8.289773814s" podCreationTimestamp="2025-10-08 07:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 07:57:18.285636962 +0000 UTC m=+4981.415329653" watchObservedRunningTime="2025-10-08 07:57:18.289773814 +0000 UTC m=+4981.419466455" Oct 08 07:57:18 crc kubenswrapper[4958]: I1008 07:57:18.329250 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" Oct 08 07:57:18 crc kubenswrapper[4958]: I1008 07:57:18.618241 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" Oct 08 07:57:18 crc kubenswrapper[4958]: I1008 07:57:18.690931 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85f98b87f9-hdvkb"] Oct 08 07:57:19 crc kubenswrapper[4958]: I1008 07:57:19.263984 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" podUID="b171b064-9ebd-41ad-b423-2a46649fee8a" containerName="dnsmasq-dns" containerID="cri-o://34b1e859672f499a879cb3c81fb27ce19a0458fc583a9980756cd837e7131f82" gracePeriod=10 Oct 08 07:57:19 crc kubenswrapper[4958]: I1008 07:57:19.766017 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" Oct 08 07:57:19 crc kubenswrapper[4958]: I1008 07:57:19.935671 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b171b064-9ebd-41ad-b423-2a46649fee8a-dns-svc\") pod \"b171b064-9ebd-41ad-b423-2a46649fee8a\" (UID: \"b171b064-9ebd-41ad-b423-2a46649fee8a\") " Oct 08 07:57:19 crc kubenswrapper[4958]: I1008 07:57:19.935769 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8q7x\" (UniqueName: \"kubernetes.io/projected/b171b064-9ebd-41ad-b423-2a46649fee8a-kube-api-access-g8q7x\") pod \"b171b064-9ebd-41ad-b423-2a46649fee8a\" (UID: \"b171b064-9ebd-41ad-b423-2a46649fee8a\") " Oct 08 07:57:19 crc kubenswrapper[4958]: I1008 07:57:19.935936 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b171b064-9ebd-41ad-b423-2a46649fee8a-config\") pod \"b171b064-9ebd-41ad-b423-2a46649fee8a\" (UID: \"b171b064-9ebd-41ad-b423-2a46649fee8a\") " Oct 08 07:57:19 crc kubenswrapper[4958]: I1008 07:57:19.945491 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b171b064-9ebd-41ad-b423-2a46649fee8a-kube-api-access-g8q7x" (OuterVolumeSpecName: "kube-api-access-g8q7x") pod "b171b064-9ebd-41ad-b423-2a46649fee8a" (UID: "b171b064-9ebd-41ad-b423-2a46649fee8a"). InnerVolumeSpecName "kube-api-access-g8q7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.000219 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b171b064-9ebd-41ad-b423-2a46649fee8a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b171b064-9ebd-41ad-b423-2a46649fee8a" (UID: "b171b064-9ebd-41ad-b423-2a46649fee8a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.004686 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b171b064-9ebd-41ad-b423-2a46649fee8a-config" (OuterVolumeSpecName: "config") pod "b171b064-9ebd-41ad-b423-2a46649fee8a" (UID: "b171b064-9ebd-41ad-b423-2a46649fee8a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.038065 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b171b064-9ebd-41ad-b423-2a46649fee8a-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.038118 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8q7x\" (UniqueName: \"kubernetes.io/projected/b171b064-9ebd-41ad-b423-2a46649fee8a-kube-api-access-g8q7x\") on node \"crc\" DevicePath \"\"" Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.038142 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b171b064-9ebd-41ad-b423-2a46649fee8a-config\") on node \"crc\" DevicePath \"\"" Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.277770 4958 generic.go:334] "Generic (PLEG): container finished" podID="b171b064-9ebd-41ad-b423-2a46649fee8a" containerID="34b1e859672f499a879cb3c81fb27ce19a0458fc583a9980756cd837e7131f82" exitCode=0 Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.277851 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.277844 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" event={"ID":"b171b064-9ebd-41ad-b423-2a46649fee8a","Type":"ContainerDied","Data":"34b1e859672f499a879cb3c81fb27ce19a0458fc583a9980756cd837e7131f82"} Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.278062 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f98b87f9-hdvkb" event={"ID":"b171b064-9ebd-41ad-b423-2a46649fee8a","Type":"ContainerDied","Data":"53bf56844e51e2937cb160f83c4e1aa1df9f4f88ddea0a41232d86ec29105aef"} Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.278095 4958 scope.go:117] "RemoveContainer" containerID="34b1e859672f499a879cb3c81fb27ce19a0458fc583a9980756cd837e7131f82" Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.334042 4958 scope.go:117] "RemoveContainer" containerID="37b55c4fe1853e677d51d79d0ebee0d871566363a86d0acf5896039213f03868" Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.338316 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85f98b87f9-hdvkb"] Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.357260 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85f98b87f9-hdvkb"] Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.379555 4958 scope.go:117] "RemoveContainer" containerID="34b1e859672f499a879cb3c81fb27ce19a0458fc583a9980756cd837e7131f82" Oct 08 07:57:20 crc kubenswrapper[4958]: E1008 07:57:20.379852 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34b1e859672f499a879cb3c81fb27ce19a0458fc583a9980756cd837e7131f82\": container with ID starting with 34b1e859672f499a879cb3c81fb27ce19a0458fc583a9980756cd837e7131f82 not found: ID does not exist" 
containerID="34b1e859672f499a879cb3c81fb27ce19a0458fc583a9980756cd837e7131f82" Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.379904 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b1e859672f499a879cb3c81fb27ce19a0458fc583a9980756cd837e7131f82"} err="failed to get container status \"34b1e859672f499a879cb3c81fb27ce19a0458fc583a9980756cd837e7131f82\": rpc error: code = NotFound desc = could not find container \"34b1e859672f499a879cb3c81fb27ce19a0458fc583a9980756cd837e7131f82\": container with ID starting with 34b1e859672f499a879cb3c81fb27ce19a0458fc583a9980756cd837e7131f82 not found: ID does not exist" Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.379939 4958 scope.go:117] "RemoveContainer" containerID="37b55c4fe1853e677d51d79d0ebee0d871566363a86d0acf5896039213f03868" Oct 08 07:57:20 crc kubenswrapper[4958]: E1008 07:57:20.380279 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37b55c4fe1853e677d51d79d0ebee0d871566363a86d0acf5896039213f03868\": container with ID starting with 37b55c4fe1853e677d51d79d0ebee0d871566363a86d0acf5896039213f03868 not found: ID does not exist" containerID="37b55c4fe1853e677d51d79d0ebee0d871566363a86d0acf5896039213f03868" Oct 08 07:57:20 crc kubenswrapper[4958]: I1008 07:57:20.380316 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37b55c4fe1853e677d51d79d0ebee0d871566363a86d0acf5896039213f03868"} err="failed to get container status \"37b55c4fe1853e677d51d79d0ebee0d871566363a86d0acf5896039213f03868\": rpc error: code = NotFound desc = could not find container \"37b55c4fe1853e677d51d79d0ebee0d871566363a86d0acf5896039213f03868\": container with ID starting with 37b55c4fe1853e677d51d79d0ebee0d871566363a86d0acf5896039213f03868 not found: ID does not exist" Oct 08 07:57:21 crc kubenswrapper[4958]: I1008 07:57:21.170906 4958 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 08 07:57:21 crc kubenswrapper[4958]: I1008 07:57:21.171031 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 08 07:57:21 crc kubenswrapper[4958]: I1008 07:57:21.243016 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 08 07:57:21 crc kubenswrapper[4958]: I1008 07:57:21.359423 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 08 07:57:21 crc kubenswrapper[4958]: I1008 07:57:21.593348 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b171b064-9ebd-41ad-b423-2a46649fee8a" path="/var/lib/kubelet/pods/b171b064-9ebd-41ad-b423-2a46649fee8a/volumes" Oct 08 07:57:22 crc kubenswrapper[4958]: I1008 07:57:22.428318 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:22 crc kubenswrapper[4958]: I1008 07:57:22.428379 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:22 crc kubenswrapper[4958]: I1008 07:57:22.565991 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 08 07:57:24 crc kubenswrapper[4958]: I1008 07:57:24.509058 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:24 crc kubenswrapper[4958]: I1008 07:57:24.589561 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 08 07:57:44 crc kubenswrapper[4958]: I1008 07:57:44.527416 4958 generic.go:334] "Generic (PLEG): container finished" podID="f820f217-c21a-4181-948d-39ed69fa35f5" containerID="7bcbd668e62960a4b49a3da76016d53b382a502970f0f40990668b6d69c4548a" exitCode=0 Oct 08 07:57:44 crc 
kubenswrapper[4958]: I1008 07:57:44.527557 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f820f217-c21a-4181-948d-39ed69fa35f5","Type":"ContainerDied","Data":"7bcbd668e62960a4b49a3da76016d53b382a502970f0f40990668b6d69c4548a"} Oct 08 07:57:45 crc kubenswrapper[4958]: I1008 07:57:45.540457 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f820f217-c21a-4181-948d-39ed69fa35f5","Type":"ContainerStarted","Data":"32fb05a7b570a2e8babfe61f77ad0bad9578f889353795b8226f18f306571adb"} Oct 08 07:57:45 crc kubenswrapper[4958]: I1008 07:57:45.541519 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 08 07:57:45 crc kubenswrapper[4958]: I1008 07:57:45.544624 4958 generic.go:334] "Generic (PLEG): container finished" podID="2902652a-2f4f-4747-a5e8-7a665519ac85" containerID="4733e0849369aa36125846f17c0b10c139327708de0851ac65044fcc72531e83" exitCode=0 Oct 08 07:57:45 crc kubenswrapper[4958]: I1008 07:57:45.544692 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2902652a-2f4f-4747-a5e8-7a665519ac85","Type":"ContainerDied","Data":"4733e0849369aa36125846f17c0b10c139327708de0851ac65044fcc72531e83"} Oct 08 07:57:45 crc kubenswrapper[4958]: I1008 07:57:45.583937 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.583880239 podStartE2EDuration="37.583880239s" podCreationTimestamp="2025-10-08 07:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 07:57:45.582195493 +0000 UTC m=+5008.711888174" watchObservedRunningTime="2025-10-08 07:57:45.583880239 +0000 UTC m=+5008.713572880" Oct 08 07:57:46 crc kubenswrapper[4958]: I1008 07:57:46.557198 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2902652a-2f4f-4747-a5e8-7a665519ac85","Type":"ContainerStarted","Data":"0cb43f29388b064893a15d69931bd0a043736b3d35bf501eb5ae0f5cefe38d6d"} Oct 08 07:57:46 crc kubenswrapper[4958]: I1008 07:57:46.558372 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:57:46 crc kubenswrapper[4958]: I1008 07:57:46.580764 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.580750051 podStartE2EDuration="38.580750051s" podCreationTimestamp="2025-10-08 07:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 07:57:46.579255531 +0000 UTC m=+5009.708948162" watchObservedRunningTime="2025-10-08 07:57:46.580750051 +0000 UTC m=+5009.710442652" Oct 08 07:57:59 crc kubenswrapper[4958]: I1008 07:57:59.523287 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 08 07:57:59 crc kubenswrapper[4958]: I1008 07:57:59.766100 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.432023 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fdc957c47-z75mj"] Oct 08 07:58:05 crc kubenswrapper[4958]: E1008 07:58:05.433193 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b171b064-9ebd-41ad-b423-2a46649fee8a" containerName="dnsmasq-dns" Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.433235 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b171b064-9ebd-41ad-b423-2a46649fee8a" containerName="dnsmasq-dns" Oct 08 07:58:05 crc kubenswrapper[4958]: E1008 07:58:05.433288 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b171b064-9ebd-41ad-b423-2a46649fee8a" containerName="init" Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.433303 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b171b064-9ebd-41ad-b423-2a46649fee8a" containerName="init" Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.433617 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b171b064-9ebd-41ad-b423-2a46649fee8a" containerName="dnsmasq-dns" Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.435092 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.444359 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdc957c47-z75mj"] Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.565393 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdwn7\" (UniqueName: \"kubernetes.io/projected/de169cef-7e09-4cb9-b565-2f5291ad55f8-kube-api-access-qdwn7\") pod \"dnsmasq-dns-5fdc957c47-z75mj\" (UID: \"de169cef-7e09-4cb9-b565-2f5291ad55f8\") " pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.565653 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de169cef-7e09-4cb9-b565-2f5291ad55f8-config\") pod \"dnsmasq-dns-5fdc957c47-z75mj\" (UID: \"de169cef-7e09-4cb9-b565-2f5291ad55f8\") " pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.565722 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de169cef-7e09-4cb9-b565-2f5291ad55f8-dns-svc\") pod \"dnsmasq-dns-5fdc957c47-z75mj\" (UID: \"de169cef-7e09-4cb9-b565-2f5291ad55f8\") " 
pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.667424 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de169cef-7e09-4cb9-b565-2f5291ad55f8-dns-svc\") pod \"dnsmasq-dns-5fdc957c47-z75mj\" (UID: \"de169cef-7e09-4cb9-b565-2f5291ad55f8\") " pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.667624 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdwn7\" (UniqueName: \"kubernetes.io/projected/de169cef-7e09-4cb9-b565-2f5291ad55f8-kube-api-access-qdwn7\") pod \"dnsmasq-dns-5fdc957c47-z75mj\" (UID: \"de169cef-7e09-4cb9-b565-2f5291ad55f8\") " pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.667712 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de169cef-7e09-4cb9-b565-2f5291ad55f8-config\") pod \"dnsmasq-dns-5fdc957c47-z75mj\" (UID: \"de169cef-7e09-4cb9-b565-2f5291ad55f8\") " pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.669369 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de169cef-7e09-4cb9-b565-2f5291ad55f8-dns-svc\") pod \"dnsmasq-dns-5fdc957c47-z75mj\" (UID: \"de169cef-7e09-4cb9-b565-2f5291ad55f8\") " pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.669684 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de169cef-7e09-4cb9-b565-2f5291ad55f8-config\") pod \"dnsmasq-dns-5fdc957c47-z75mj\" (UID: \"de169cef-7e09-4cb9-b565-2f5291ad55f8\") " pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.693993 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdwn7\" (UniqueName: \"kubernetes.io/projected/de169cef-7e09-4cb9-b565-2f5291ad55f8-kube-api-access-qdwn7\") pod \"dnsmasq-dns-5fdc957c47-z75mj\" (UID: \"de169cef-7e09-4cb9-b565-2f5291ad55f8\") " pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" Oct 08 07:58:05 crc kubenswrapper[4958]: I1008 07:58:05.759638 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" Oct 08 07:58:06 crc kubenswrapper[4958]: I1008 07:58:06.121095 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 07:58:06 crc kubenswrapper[4958]: W1008 07:58:06.238252 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde169cef_7e09_4cb9_b565_2f5291ad55f8.slice/crio-9027121a7b16144a6e4ec4c7f7ded33d1a9ffa4935d4ce2438e2e52cb1d90105 WatchSource:0}: Error finding container 9027121a7b16144a6e4ec4c7f7ded33d1a9ffa4935d4ce2438e2e52cb1d90105: Status 404 returned error can't find the container with id 9027121a7b16144a6e4ec4c7f7ded33d1a9ffa4935d4ce2438e2e52cb1d90105 Oct 08 07:58:06 crc kubenswrapper[4958]: I1008 07:58:06.238264 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdc957c47-z75mj"] Oct 08 07:58:06 crc kubenswrapper[4958]: I1008 07:58:06.735602 4958 generic.go:334] "Generic (PLEG): container finished" podID="de169cef-7e09-4cb9-b565-2f5291ad55f8" containerID="2e96dc1ea4f8c70ed2944c76915c899b2108bf03109d0f594f8ef5fa624027a9" exitCode=0 Oct 08 07:58:06 crc kubenswrapper[4958]: I1008 07:58:06.735653 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" event={"ID":"de169cef-7e09-4cb9-b565-2f5291ad55f8","Type":"ContainerDied","Data":"2e96dc1ea4f8c70ed2944c76915c899b2108bf03109d0f594f8ef5fa624027a9"} Oct 08 07:58:06 crc kubenswrapper[4958]: I1008 07:58:06.735913 
4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" event={"ID":"de169cef-7e09-4cb9-b565-2f5291ad55f8","Type":"ContainerStarted","Data":"9027121a7b16144a6e4ec4c7f7ded33d1a9ffa4935d4ce2438e2e52cb1d90105"} Oct 08 07:58:06 crc kubenswrapper[4958]: I1008 07:58:06.845281 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:58:06 crc kubenswrapper[4958]: I1008 07:58:06.845346 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:58:06 crc kubenswrapper[4958]: I1008 07:58:06.869659 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 07:58:07 crc kubenswrapper[4958]: I1008 07:58:07.744369 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" event={"ID":"de169cef-7e09-4cb9-b565-2f5291ad55f8","Type":"ContainerStarted","Data":"4301cca445a645afbc63dcd41edd11606362b15faab94927d1aa2ca6c68de1d6"} Oct 08 07:58:07 crc kubenswrapper[4958]: I1008 07:58:07.744714 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" Oct 08 07:58:07 crc kubenswrapper[4958]: I1008 07:58:07.767793 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" podStartSLOduration=2.767778673 podStartE2EDuration="2.767778673s" podCreationTimestamp="2025-10-08 07:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 07:58:07.762936232 +0000 UTC m=+5030.892628833" watchObservedRunningTime="2025-10-08 07:58:07.767778673 +0000 UTC m=+5030.897471274" Oct 08 07:58:10 crc kubenswrapper[4958]: I1008 07:58:10.827056 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f820f217-c21a-4181-948d-39ed69fa35f5" containerName="rabbitmq" containerID="cri-o://32fb05a7b570a2e8babfe61f77ad0bad9578f889353795b8226f18f306571adb" gracePeriod=604796 Oct 08 07:58:11 crc kubenswrapper[4958]: I1008 07:58:11.236550 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2902652a-2f4f-4747-a5e8-7a665519ac85" containerName="rabbitmq" containerID="cri-o://0cb43f29388b064893a15d69931bd0a043736b3d35bf501eb5ae0f5cefe38d6d" gracePeriod=604796 Oct 08 07:58:15 crc kubenswrapper[4958]: I1008 07:58:15.762496 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" Oct 08 07:58:15 crc kubenswrapper[4958]: I1008 07:58:15.842184 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-c4zq5"] Oct 08 07:58:15 crc kubenswrapper[4958]: I1008 07:58:15.842537 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" podUID="3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0" containerName="dnsmasq-dns" containerID="cri-o://e78215025da7a1e24ba2990fb2377631c754f2046a820c517986d44693070882" gracePeriod=10 Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.289447 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.353246 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-dns-svc\") pod \"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0\" (UID: \"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0\") " Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.353326 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-config\") pod \"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0\" (UID: \"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0\") " Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.353357 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hwbl\" (UniqueName: \"kubernetes.io/projected/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-kube-api-access-4hwbl\") pod \"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0\" (UID: \"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0\") " Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.362650 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-kube-api-access-4hwbl" (OuterVolumeSpecName: "kube-api-access-4hwbl") pod "3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0" (UID: "3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0"). InnerVolumeSpecName "kube-api-access-4hwbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.411487 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-config" (OuterVolumeSpecName: "config") pod "3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0" (UID: "3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.411590 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0" (UID: "3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.455457 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.455482 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-config\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.455494 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hwbl\" (UniqueName: \"kubernetes.io/projected/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0-kube-api-access-4hwbl\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.846396 4958 generic.go:334] "Generic (PLEG): container finished" podID="3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0" containerID="e78215025da7a1e24ba2990fb2377631c754f2046a820c517986d44693070882" exitCode=0 Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.846457 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" event={"ID":"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0","Type":"ContainerDied","Data":"e78215025da7a1e24ba2990fb2377631c754f2046a820c517986d44693070882"} Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.846495 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" 
event={"ID":"3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0","Type":"ContainerDied","Data":"51b128f9bc488cae4d72e4a550a3338cbf8696f5f5253a739a7f4249594c246f"} Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.846524 4958 scope.go:117] "RemoveContainer" containerID="e78215025da7a1e24ba2990fb2377631c754f2046a820c517986d44693070882" Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.846698 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d9f7fb89-c4zq5" Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.879010 4958 scope.go:117] "RemoveContainer" containerID="fed26a1798a2b75c10e314048e3273f386961b8eb7a9b0c32cc727ff676f01d4" Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.899035 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-c4zq5"] Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.908840 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67d9f7fb89-c4zq5"] Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.929746 4958 scope.go:117] "RemoveContainer" containerID="e78215025da7a1e24ba2990fb2377631c754f2046a820c517986d44693070882" Oct 08 07:58:16 crc kubenswrapper[4958]: E1008 07:58:16.930458 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e78215025da7a1e24ba2990fb2377631c754f2046a820c517986d44693070882\": container with ID starting with e78215025da7a1e24ba2990fb2377631c754f2046a820c517986d44693070882 not found: ID does not exist" containerID="e78215025da7a1e24ba2990fb2377631c754f2046a820c517986d44693070882" Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.931072 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e78215025da7a1e24ba2990fb2377631c754f2046a820c517986d44693070882"} err="failed to get container status 
\"e78215025da7a1e24ba2990fb2377631c754f2046a820c517986d44693070882\": rpc error: code = NotFound desc = could not find container \"e78215025da7a1e24ba2990fb2377631c754f2046a820c517986d44693070882\": container with ID starting with e78215025da7a1e24ba2990fb2377631c754f2046a820c517986d44693070882 not found: ID does not exist" Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.931114 4958 scope.go:117] "RemoveContainer" containerID="fed26a1798a2b75c10e314048e3273f386961b8eb7a9b0c32cc727ff676f01d4" Oct 08 07:58:16 crc kubenswrapper[4958]: E1008 07:58:16.931697 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fed26a1798a2b75c10e314048e3273f386961b8eb7a9b0c32cc727ff676f01d4\": container with ID starting with fed26a1798a2b75c10e314048e3273f386961b8eb7a9b0c32cc727ff676f01d4 not found: ID does not exist" containerID="fed26a1798a2b75c10e314048e3273f386961b8eb7a9b0c32cc727ff676f01d4" Oct 08 07:58:16 crc kubenswrapper[4958]: I1008 07:58:16.931746 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fed26a1798a2b75c10e314048e3273f386961b8eb7a9b0c32cc727ff676f01d4"} err="failed to get container status \"fed26a1798a2b75c10e314048e3273f386961b8eb7a9b0c32cc727ff676f01d4\": rpc error: code = NotFound desc = could not find container \"fed26a1798a2b75c10e314048e3273f386961b8eb7a9b0c32cc727ff676f01d4\": container with ID starting with fed26a1798a2b75c10e314048e3273f386961b8eb7a9b0c32cc727ff676f01d4 not found: ID does not exist" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.592579 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0" path="/var/lib/kubelet/pods/3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0/volumes" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.612028 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.678050 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-server-conf\") pod \"f820f217-c21a-4181-948d-39ed69fa35f5\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.678108 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-erlang-cookie\") pod \"f820f217-c21a-4181-948d-39ed69fa35f5\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.678171 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-tls\") pod \"f820f217-c21a-4181-948d-39ed69fa35f5\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.678238 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-plugins-conf\") pod \"f820f217-c21a-4181-948d-39ed69fa35f5\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.678326 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f820f217-c21a-4181-948d-39ed69fa35f5-erlang-cookie-secret\") pod \"f820f217-c21a-4181-948d-39ed69fa35f5\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.678386 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-confd\") pod \"f820f217-c21a-4181-948d-39ed69fa35f5\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.678451 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-plugins\") pod \"f820f217-c21a-4181-948d-39ed69fa35f5\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.678487 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-config-data\") pod \"f820f217-c21a-4181-948d-39ed69fa35f5\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.678545 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f820f217-c21a-4181-948d-39ed69fa35f5" (UID: "f820f217-c21a-4181-948d-39ed69fa35f5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.678567 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cwtm\" (UniqueName: \"kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-kube-api-access-8cwtm\") pod \"f820f217-c21a-4181-948d-39ed69fa35f5\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.678607 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f820f217-c21a-4181-948d-39ed69fa35f5-pod-info\") pod \"f820f217-c21a-4181-948d-39ed69fa35f5\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.678800 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\") pod \"f820f217-c21a-4181-948d-39ed69fa35f5\" (UID: \"f820f217-c21a-4181-948d-39ed69fa35f5\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.679577 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.681511 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f820f217-c21a-4181-948d-39ed69fa35f5" (UID: "f820f217-c21a-4181-948d-39ed69fa35f5"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.681711 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f820f217-c21a-4181-948d-39ed69fa35f5" (UID: "f820f217-c21a-4181-948d-39ed69fa35f5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.688350 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f820f217-c21a-4181-948d-39ed69fa35f5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f820f217-c21a-4181-948d-39ed69fa35f5" (UID: "f820f217-c21a-4181-948d-39ed69fa35f5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.704479 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-kube-api-access-8cwtm" (OuterVolumeSpecName: "kube-api-access-8cwtm") pod "f820f217-c21a-4181-948d-39ed69fa35f5" (UID: "f820f217-c21a-4181-948d-39ed69fa35f5"). InnerVolumeSpecName "kube-api-access-8cwtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.705338 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f820f217-c21a-4181-948d-39ed69fa35f5-pod-info" (OuterVolumeSpecName: "pod-info") pod "f820f217-c21a-4181-948d-39ed69fa35f5" (UID: "f820f217-c21a-4181-948d-39ed69fa35f5"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.708243 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f820f217-c21a-4181-948d-39ed69fa35f5" (UID: "f820f217-c21a-4181-948d-39ed69fa35f5"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.718291 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96f50537-ee1c-44ca-a1a4-e9704662572a" (OuterVolumeSpecName: "persistence") pod "f820f217-c21a-4181-948d-39ed69fa35f5" (UID: "f820f217-c21a-4181-948d-39ed69fa35f5"). InnerVolumeSpecName "pvc-96f50537-ee1c-44ca-a1a4-e9704662572a". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.726908 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-config-data" (OuterVolumeSpecName: "config-data") pod "f820f217-c21a-4181-948d-39ed69fa35f5" (UID: "f820f217-c21a-4181-948d-39ed69fa35f5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.781103 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.781144 4958 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.781163 4958 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f820f217-c21a-4181-948d-39ed69fa35f5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.781181 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.781201 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.781218 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cwtm\" (UniqueName: \"kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-kube-api-access-8cwtm\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.781236 4958 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f820f217-c21a-4181-948d-39ed69fa35f5-pod-info\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.781292 
4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\") on node \"crc\" " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.786284 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-server-conf" (OuterVolumeSpecName: "server-conf") pod "f820f217-c21a-4181-948d-39ed69fa35f5" (UID: "f820f217-c21a-4181-948d-39ed69fa35f5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.800104 4958 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.800430 4958 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-96f50537-ee1c-44ca-a1a4-e9704662572a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96f50537-ee1c-44ca-a1a4-e9704662572a") on node "crc" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.823504 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.837772 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f820f217-c21a-4181-948d-39ed69fa35f5" (UID: "f820f217-c21a-4181-948d-39ed69fa35f5"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.863484 4958 generic.go:334] "Generic (PLEG): container finished" podID="2902652a-2f4f-4747-a5e8-7a665519ac85" containerID="0cb43f29388b064893a15d69931bd0a043736b3d35bf501eb5ae0f5cefe38d6d" exitCode=0 Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.863553 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2902652a-2f4f-4747-a5e8-7a665519ac85","Type":"ContainerDied","Data":"0cb43f29388b064893a15d69931bd0a043736b3d35bf501eb5ae0f5cefe38d6d"} Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.863585 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2902652a-2f4f-4747-a5e8-7a665519ac85","Type":"ContainerDied","Data":"aef967874997d59da9ca3735a906882714f89851321fd866f9cb81b99287262e"} Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.863607 4958 scope.go:117] "RemoveContainer" containerID="0cb43f29388b064893a15d69931bd0a043736b3d35bf501eb5ae0f5cefe38d6d" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.863759 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.867898 4958 generic.go:334] "Generic (PLEG): container finished" podID="f820f217-c21a-4181-948d-39ed69fa35f5" containerID="32fb05a7b570a2e8babfe61f77ad0bad9578f889353795b8226f18f306571adb" exitCode=0 Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.868151 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f820f217-c21a-4181-948d-39ed69fa35f5","Type":"ContainerDied","Data":"32fb05a7b570a2e8babfe61f77ad0bad9578f889353795b8226f18f306571adb"} Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.868489 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f820f217-c21a-4181-948d-39ed69fa35f5","Type":"ContainerDied","Data":"e7c93c63ef92d7e06ceb45745f30a0a24aa10f367fb16b2ae3713b5a259a77ab"} Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.868303 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.882410 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2902652a-2f4f-4747-a5e8-7a665519ac85-pod-info\") pod \"2902652a-2f4f-4747-a5e8-7a665519ac85\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.882450 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2902652a-2f4f-4747-a5e8-7a665519ac85-erlang-cookie-secret\") pod \"2902652a-2f4f-4747-a5e8-7a665519ac85\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.882477 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-tls\") pod \"2902652a-2f4f-4747-a5e8-7a665519ac85\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.882518 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqgtf\" (UniqueName: \"kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-kube-api-access-xqgtf\") pod \"2902652a-2f4f-4747-a5e8-7a665519ac85\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.882534 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-confd\") pod \"2902652a-2f4f-4747-a5e8-7a665519ac85\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.882573 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-plugins-conf\") pod \"2902652a-2f4f-4747-a5e8-7a665519ac85\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.882602 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-plugins\") pod \"2902652a-2f4f-4747-a5e8-7a665519ac85\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.882626 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-server-conf\") pod \"2902652a-2f4f-4747-a5e8-7a665519ac85\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.882710 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\") pod \"2902652a-2f4f-4747-a5e8-7a665519ac85\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.882736 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-config-data\") pod \"2902652a-2f4f-4747-a5e8-7a665519ac85\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.882754 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-erlang-cookie\") pod \"2902652a-2f4f-4747-a5e8-7a665519ac85\" (UID: \"2902652a-2f4f-4747-a5e8-7a665519ac85\") " Oct 
08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.883035 4958 reconciler_common.go:293] "Volume detached for volume \"pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.883046 4958 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f820f217-c21a-4181-948d-39ed69fa35f5-server-conf\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.883057 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f820f217-c21a-4181-948d-39ed69fa35f5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.883259 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2902652a-2f4f-4747-a5e8-7a665519ac85" (UID: "2902652a-2f4f-4747-a5e8-7a665519ac85"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.883625 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2902652a-2f4f-4747-a5e8-7a665519ac85" (UID: "2902652a-2f4f-4747-a5e8-7a665519ac85"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.884055 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2902652a-2f4f-4747-a5e8-7a665519ac85" (UID: "2902652a-2f4f-4747-a5e8-7a665519ac85"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.885878 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2902652a-2f4f-4747-a5e8-7a665519ac85-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2902652a-2f4f-4747-a5e8-7a665519ac85" (UID: "2902652a-2f4f-4747-a5e8-7a665519ac85"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.886325 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2902652a-2f4f-4747-a5e8-7a665519ac85" (UID: "2902652a-2f4f-4747-a5e8-7a665519ac85"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.889141 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2902652a-2f4f-4747-a5e8-7a665519ac85-pod-info" (OuterVolumeSpecName: "pod-info") pod "2902652a-2f4f-4747-a5e8-7a665519ac85" (UID: "2902652a-2f4f-4747-a5e8-7a665519ac85"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.892527 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-kube-api-access-xqgtf" (OuterVolumeSpecName: "kube-api-access-xqgtf") pod "2902652a-2f4f-4747-a5e8-7a665519ac85" (UID: "2902652a-2f4f-4747-a5e8-7a665519ac85"). InnerVolumeSpecName "kube-api-access-xqgtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.892917 4958 scope.go:117] "RemoveContainer" containerID="4733e0849369aa36125846f17c0b10c139327708de0851ac65044fcc72531e83" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.897678 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d" (OuterVolumeSpecName: "persistence") pod "2902652a-2f4f-4747-a5e8-7a665519ac85" (UID: "2902652a-2f4f-4747-a5e8-7a665519ac85"). InnerVolumeSpecName "pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.921148 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-config-data" (OuterVolumeSpecName: "config-data") pod "2902652a-2f4f-4747-a5e8-7a665519ac85" (UID: "2902652a-2f4f-4747-a5e8-7a665519ac85"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.928431 4958 scope.go:117] "RemoveContainer" containerID="0cb43f29388b064893a15d69931bd0a043736b3d35bf501eb5ae0f5cefe38d6d" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.928565 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 07:58:17 crc kubenswrapper[4958]: E1008 07:58:17.928783 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cb43f29388b064893a15d69931bd0a043736b3d35bf501eb5ae0f5cefe38d6d\": container with ID starting with 0cb43f29388b064893a15d69931bd0a043736b3d35bf501eb5ae0f5cefe38d6d not found: ID does not exist" containerID="0cb43f29388b064893a15d69931bd0a043736b3d35bf501eb5ae0f5cefe38d6d" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.928819 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cb43f29388b064893a15d69931bd0a043736b3d35bf501eb5ae0f5cefe38d6d"} err="failed to get container status \"0cb43f29388b064893a15d69931bd0a043736b3d35bf501eb5ae0f5cefe38d6d\": rpc error: code = NotFound desc = could not find container \"0cb43f29388b064893a15d69931bd0a043736b3d35bf501eb5ae0f5cefe38d6d\": container with ID starting with 0cb43f29388b064893a15d69931bd0a043736b3d35bf501eb5ae0f5cefe38d6d not found: ID does not exist" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.928847 4958 scope.go:117] "RemoveContainer" containerID="4733e0849369aa36125846f17c0b10c139327708de0851ac65044fcc72531e83" Oct 08 07:58:17 crc kubenswrapper[4958]: E1008 07:58:17.929179 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4733e0849369aa36125846f17c0b10c139327708de0851ac65044fcc72531e83\": container with ID starting with 4733e0849369aa36125846f17c0b10c139327708de0851ac65044fcc72531e83 not found: ID does 
not exist" containerID="4733e0849369aa36125846f17c0b10c139327708de0851ac65044fcc72531e83" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.929205 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4733e0849369aa36125846f17c0b10c139327708de0851ac65044fcc72531e83"} err="failed to get container status \"4733e0849369aa36125846f17c0b10c139327708de0851ac65044fcc72531e83\": rpc error: code = NotFound desc = could not find container \"4733e0849369aa36125846f17c0b10c139327708de0851ac65044fcc72531e83\": container with ID starting with 4733e0849369aa36125846f17c0b10c139327708de0851ac65044fcc72531e83 not found: ID does not exist" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.929223 4958 scope.go:117] "RemoveContainer" containerID="32fb05a7b570a2e8babfe61f77ad0bad9578f889353795b8226f18f306571adb" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.936752 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.939654 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-server-conf" (OuterVolumeSpecName: "server-conf") pod "2902652a-2f4f-4747-a5e8-7a665519ac85" (UID: "2902652a-2f4f-4747-a5e8-7a665519ac85"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.939744 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 07:58:17 crc kubenswrapper[4958]: E1008 07:58:17.940230 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2902652a-2f4f-4747-a5e8-7a665519ac85" containerName="rabbitmq" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.940250 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2902652a-2f4f-4747-a5e8-7a665519ac85" containerName="rabbitmq" Oct 08 07:58:17 crc kubenswrapper[4958]: E1008 07:58:17.940260 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f820f217-c21a-4181-948d-39ed69fa35f5" containerName="setup-container" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.940268 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f820f217-c21a-4181-948d-39ed69fa35f5" containerName="setup-container" Oct 08 07:58:17 crc kubenswrapper[4958]: E1008 07:58:17.940293 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0" containerName="dnsmasq-dns" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.940299 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0" containerName="dnsmasq-dns" Oct 08 07:58:17 crc kubenswrapper[4958]: E1008 07:58:17.940310 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2902652a-2f4f-4747-a5e8-7a665519ac85" containerName="setup-container" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.940315 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2902652a-2f4f-4747-a5e8-7a665519ac85" containerName="setup-container" Oct 08 07:58:17 crc kubenswrapper[4958]: E1008 07:58:17.940324 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f820f217-c21a-4181-948d-39ed69fa35f5" containerName="rabbitmq" Oct 08 07:58:17 crc 
kubenswrapper[4958]: I1008 07:58:17.940331 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f820f217-c21a-4181-948d-39ed69fa35f5" containerName="rabbitmq" Oct 08 07:58:17 crc kubenswrapper[4958]: E1008 07:58:17.940345 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0" containerName="init" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.940350 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0" containerName="init" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.940481 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2902652a-2f4f-4747-a5e8-7a665519ac85" containerName="rabbitmq" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.940490 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f820f217-c21a-4181-948d-39ed69fa35f5" containerName="rabbitmq" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.940506 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef104f2-a6b1-403e-83ad-fadf6c5eb0b0" containerName="dnsmasq-dns" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.945200 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.950349 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.950608 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.950746 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.950869 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.951065 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4rnpb" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.951198 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.951341 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.954208 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.974052 4958 scope.go:117] "RemoveContainer" containerID="7bcbd668e62960a4b49a3da76016d53b382a502970f0f40990668b6d69c4548a" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.986426 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3d0e354-e762-4a61-b317-93aad016cc26-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 
07:58:17.986490 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3d0e354-e762-4a61-b317-93aad016cc26-config-data\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.986555 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3d0e354-e762-4a61-b317-93aad016cc26-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.986581 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3d0e354-e762-4a61-b317-93aad016cc26-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.986629 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3d0e354-e762-4a61-b317-93aad016cc26-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.986654 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.986699 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3d0e354-e762-4a61-b317-93aad016cc26-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.986722 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3d0e354-e762-4a61-b317-93aad016cc26-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.986773 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3d0e354-e762-4a61-b317-93aad016cc26-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.986803 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvlw2\" (UniqueName: \"kubernetes.io/projected/c3d0e354-e762-4a61-b317-93aad016cc26-kube-api-access-rvlw2\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.986823 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3d0e354-e762-4a61-b317-93aad016cc26-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.986907 4958 reconciler_common.go:293] "Volume 
detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2902652a-2f4f-4747-a5e8-7a665519ac85-pod-info\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.986972 4958 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2902652a-2f4f-4747-a5e8-7a665519ac85-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.986982 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.986991 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqgtf\" (UniqueName: \"kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-kube-api-access-xqgtf\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.987003 4958 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.987452 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.987489 4958 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-server-conf\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.987516 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\") on node \"crc\" " Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.987526 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2902652a-2f4f-4747-a5e8-7a665519ac85-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:17 crc kubenswrapper[4958]: I1008 07:58:17.987535 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.005875 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2902652a-2f4f-4747-a5e8-7a665519ac85" (UID: "2902652a-2f4f-4747-a5e8-7a665519ac85"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.006407 4958 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.006547 4958 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d") on node "crc" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.062807 4958 scope.go:117] "RemoveContainer" containerID="32fb05a7b570a2e8babfe61f77ad0bad9578f889353795b8226f18f306571adb" Oct 08 07:58:18 crc kubenswrapper[4958]: E1008 07:58:18.063289 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32fb05a7b570a2e8babfe61f77ad0bad9578f889353795b8226f18f306571adb\": container with ID starting with 32fb05a7b570a2e8babfe61f77ad0bad9578f889353795b8226f18f306571adb not found: ID does not exist" containerID="32fb05a7b570a2e8babfe61f77ad0bad9578f889353795b8226f18f306571adb" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.063363 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32fb05a7b570a2e8babfe61f77ad0bad9578f889353795b8226f18f306571adb"} err="failed to get container status \"32fb05a7b570a2e8babfe61f77ad0bad9578f889353795b8226f18f306571adb\": rpc error: code = NotFound desc = could not find container \"32fb05a7b570a2e8babfe61f77ad0bad9578f889353795b8226f18f306571adb\": container with ID starting with 32fb05a7b570a2e8babfe61f77ad0bad9578f889353795b8226f18f306571adb not found: ID does not exist" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.063393 4958 scope.go:117] "RemoveContainer" containerID="7bcbd668e62960a4b49a3da76016d53b382a502970f0f40990668b6d69c4548a" Oct 08 07:58:18 crc kubenswrapper[4958]: E1008 07:58:18.063991 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bcbd668e62960a4b49a3da76016d53b382a502970f0f40990668b6d69c4548a\": container with ID starting 
with 7bcbd668e62960a4b49a3da76016d53b382a502970f0f40990668b6d69c4548a not found: ID does not exist" containerID="7bcbd668e62960a4b49a3da76016d53b382a502970f0f40990668b6d69c4548a" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.064025 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bcbd668e62960a4b49a3da76016d53b382a502970f0f40990668b6d69c4548a"} err="failed to get container status \"7bcbd668e62960a4b49a3da76016d53b382a502970f0f40990668b6d69c4548a\": rpc error: code = NotFound desc = could not find container \"7bcbd668e62960a4b49a3da76016d53b382a502970f0f40990668b6d69c4548a\": container with ID starting with 7bcbd668e62960a4b49a3da76016d53b382a502970f0f40990668b6d69c4548a not found: ID does not exist" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.089111 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3d0e354-e762-4a61-b317-93aad016cc26-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.089156 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3d0e354-e762-4a61-b317-93aad016cc26-config-data\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.089185 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3d0e354-e762-4a61-b317-93aad016cc26-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.089205 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3d0e354-e762-4a61-b317-93aad016cc26-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.089226 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3d0e354-e762-4a61-b317-93aad016cc26-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.089252 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.089275 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3d0e354-e762-4a61-b317-93aad016cc26-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.089293 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3d0e354-e762-4a61-b317-93aad016cc26-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.089326 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/c3d0e354-e762-4a61-b317-93aad016cc26-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.089350 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvlw2\" (UniqueName: \"kubernetes.io/projected/c3d0e354-e762-4a61-b317-93aad016cc26-kube-api-access-rvlw2\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.089370 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3d0e354-e762-4a61-b317-93aad016cc26-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.089414 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2902652a-2f4f-4747-a5e8-7a665519ac85-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.089428 4958 reconciler_common.go:293] "Volume detached for volume \"pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.089830 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3d0e354-e762-4a61-b317-93aad016cc26-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.090075 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3d0e354-e762-4a61-b317-93aad016cc26-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.090390 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3d0e354-e762-4a61-b317-93aad016cc26-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.090624 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3d0e354-e762-4a61-b317-93aad016cc26-config-data\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.090862 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3d0e354-e762-4a61-b317-93aad016cc26-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.091853 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.091884 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/838f9b06f8cd5713fa538049742b405736cab240358b8419d305ed933d790a4f/globalmount\"" pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.093030 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3d0e354-e762-4a61-b317-93aad016cc26-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.093110 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3d0e354-e762-4a61-b317-93aad016cc26-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.093119 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3d0e354-e762-4a61-b317-93aad016cc26-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.094634 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3d0e354-e762-4a61-b317-93aad016cc26-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " 
pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.109878 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvlw2\" (UniqueName: \"kubernetes.io/projected/c3d0e354-e762-4a61-b317-93aad016cc26-kube-api-access-rvlw2\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.128026 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-96f50537-ee1c-44ca-a1a4-e9704662572a\") pod \"rabbitmq-server-0\" (UID: \"c3d0e354-e762-4a61-b317-93aad016cc26\") " pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.199895 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.204268 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.229283 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.230939 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.233662 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.233925 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.234607 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.235269 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qmq7h" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.236013 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.236300 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.237942 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.278536 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.382231 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.392540 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62ba1db3-cb42-437b-bc67-08521afcd2d2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.392582 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62ba1db3-cb42-437b-bc67-08521afcd2d2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.392603 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62ba1db3-cb42-437b-bc67-08521afcd2d2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.392626 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62ba1db3-cb42-437b-bc67-08521afcd2d2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.392654 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2j2f\" (UniqueName: \"kubernetes.io/projected/62ba1db3-cb42-437b-bc67-08521afcd2d2-kube-api-access-h2j2f\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.392672 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.392712 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62ba1db3-cb42-437b-bc67-08521afcd2d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.392731 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62ba1db3-cb42-437b-bc67-08521afcd2d2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.392749 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62ba1db3-cb42-437b-bc67-08521afcd2d2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.392774 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62ba1db3-cb42-437b-bc67-08521afcd2d2-plugins-conf\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.392791 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62ba1db3-cb42-437b-bc67-08521afcd2d2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.494244 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62ba1db3-cb42-437b-bc67-08521afcd2d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.494316 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62ba1db3-cb42-437b-bc67-08521afcd2d2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.494360 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62ba1db3-cb42-437b-bc67-08521afcd2d2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.494419 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62ba1db3-cb42-437b-bc67-08521afcd2d2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.494458 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62ba1db3-cb42-437b-bc67-08521afcd2d2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.494500 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62ba1db3-cb42-437b-bc67-08521afcd2d2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.494554 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62ba1db3-cb42-437b-bc67-08521afcd2d2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.494596 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62ba1db3-cb42-437b-bc67-08521afcd2d2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.494639 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62ba1db3-cb42-437b-bc67-08521afcd2d2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc 
kubenswrapper[4958]: I1008 07:58:18.494690 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2j2f\" (UniqueName: \"kubernetes.io/projected/62ba1db3-cb42-437b-bc67-08521afcd2d2-kube-api-access-h2j2f\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.494724 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.495106 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62ba1db3-cb42-437b-bc67-08521afcd2d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.496897 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62ba1db3-cb42-437b-bc67-08521afcd2d2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.497298 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62ba1db3-cb42-437b-bc67-08521afcd2d2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.498868 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62ba1db3-cb42-437b-bc67-08521afcd2d2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.498849 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62ba1db3-cb42-437b-bc67-08521afcd2d2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.503831 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.503893 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a60e785c8d95b4dad54350c00907fea31a95f5b52fcb2fa89a57b6dd2b51921c/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.504145 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62ba1db3-cb42-437b-bc67-08521afcd2d2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.504571 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/62ba1db3-cb42-437b-bc67-08521afcd2d2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.507308 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62ba1db3-cb42-437b-bc67-08521afcd2d2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.507708 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62ba1db3-cb42-437b-bc67-08521afcd2d2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.528911 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2j2f\" (UniqueName: \"kubernetes.io/projected/62ba1db3-cb42-437b-bc67-08521afcd2d2-kube-api-access-h2j2f\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.566299 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3a97cd9d-bc9c-46c4-a800-82e2fb555d8d\") pod \"rabbitmq-cell1-server-0\" (UID: \"62ba1db3-cb42-437b-bc67-08521afcd2d2\") " pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.597980 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:18 crc kubenswrapper[4958]: I1008 07:58:18.903708 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 08 07:58:18 crc kubenswrapper[4958]: W1008 07:58:18.914007 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3d0e354_e762_4a61_b317_93aad016cc26.slice/crio-2f57794f779a7dc339e3b6ba8d9123a45640d3211cda8d39f4b7367ec8cd68da WatchSource:0}: Error finding container 2f57794f779a7dc339e3b6ba8d9123a45640d3211cda8d39f4b7367ec8cd68da: Status 404 returned error can't find the container with id 2f57794f779a7dc339e3b6ba8d9123a45640d3211cda8d39f4b7367ec8cd68da Oct 08 07:58:19 crc kubenswrapper[4958]: I1008 07:58:19.072475 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 08 07:58:19 crc kubenswrapper[4958]: W1008 07:58:19.091991 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62ba1db3_cb42_437b_bc67_08521afcd2d2.slice/crio-1d6a62412eda5b87e61f20ea3f95bcc4060bd3de8d9c99dd238f702ac64f092c WatchSource:0}: Error finding container 1d6a62412eda5b87e61f20ea3f95bcc4060bd3de8d9c99dd238f702ac64f092c: Status 404 returned error can't find the container with id 1d6a62412eda5b87e61f20ea3f95bcc4060bd3de8d9c99dd238f702ac64f092c Oct 08 07:58:19 crc kubenswrapper[4958]: I1008 07:58:19.593741 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2902652a-2f4f-4747-a5e8-7a665519ac85" path="/var/lib/kubelet/pods/2902652a-2f4f-4747-a5e8-7a665519ac85/volumes" Oct 08 07:58:19 crc kubenswrapper[4958]: I1008 07:58:19.595237 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f820f217-c21a-4181-948d-39ed69fa35f5" path="/var/lib/kubelet/pods/f820f217-c21a-4181-948d-39ed69fa35f5/volumes" Oct 08 07:58:19 crc kubenswrapper[4958]: 
I1008 07:58:19.898196 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"62ba1db3-cb42-437b-bc67-08521afcd2d2","Type":"ContainerStarted","Data":"1d6a62412eda5b87e61f20ea3f95bcc4060bd3de8d9c99dd238f702ac64f092c"} Oct 08 07:58:19 crc kubenswrapper[4958]: I1008 07:58:19.903447 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3d0e354-e762-4a61-b317-93aad016cc26","Type":"ContainerStarted","Data":"2f57794f779a7dc339e3b6ba8d9123a45640d3211cda8d39f4b7367ec8cd68da"} Oct 08 07:58:21 crc kubenswrapper[4958]: I1008 07:58:21.929189 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"62ba1db3-cb42-437b-bc67-08521afcd2d2","Type":"ContainerStarted","Data":"391bfa2ba1fbb244d6cae15bb831ef0898b5b0a5443d99db6db47a1450ce4bc9"} Oct 08 07:58:21 crc kubenswrapper[4958]: I1008 07:58:21.931716 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3d0e354-e762-4a61-b317-93aad016cc26","Type":"ContainerStarted","Data":"cbf6a6698c1b2a85e9c952503f06d3aadae792669e6bd62a2a007304dfeb6834"} Oct 08 07:58:31 crc kubenswrapper[4958]: I1008 07:58:31.907179 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pg7h5"] Oct 08 07:58:31 crc kubenswrapper[4958]: I1008 07:58:31.910294 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:31 crc kubenswrapper[4958]: I1008 07:58:31.931069 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pg7h5"] Oct 08 07:58:31 crc kubenswrapper[4958]: I1008 07:58:31.943183 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-catalog-content\") pod \"redhat-operators-pg7h5\" (UID: \"3c6154fc-e09c-47c7-8050-cf2e0f6a9380\") " pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:31 crc kubenswrapper[4958]: I1008 07:58:31.943304 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gg9s\" (UniqueName: \"kubernetes.io/projected/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-kube-api-access-4gg9s\") pod \"redhat-operators-pg7h5\" (UID: \"3c6154fc-e09c-47c7-8050-cf2e0f6a9380\") " pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:31 crc kubenswrapper[4958]: I1008 07:58:31.943371 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-utilities\") pod \"redhat-operators-pg7h5\" (UID: \"3c6154fc-e09c-47c7-8050-cf2e0f6a9380\") " pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:32 crc kubenswrapper[4958]: I1008 07:58:32.044914 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-catalog-content\") pod \"redhat-operators-pg7h5\" (UID: \"3c6154fc-e09c-47c7-8050-cf2e0f6a9380\") " pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:32 crc kubenswrapper[4958]: I1008 07:58:32.045053 4958 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4gg9s\" (UniqueName: \"kubernetes.io/projected/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-kube-api-access-4gg9s\") pod \"redhat-operators-pg7h5\" (UID: \"3c6154fc-e09c-47c7-8050-cf2e0f6a9380\") " pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:32 crc kubenswrapper[4958]: I1008 07:58:32.045115 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-utilities\") pod \"redhat-operators-pg7h5\" (UID: \"3c6154fc-e09c-47c7-8050-cf2e0f6a9380\") " pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:32 crc kubenswrapper[4958]: I1008 07:58:32.045669 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-catalog-content\") pod \"redhat-operators-pg7h5\" (UID: \"3c6154fc-e09c-47c7-8050-cf2e0f6a9380\") " pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:32 crc kubenswrapper[4958]: I1008 07:58:32.045897 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-utilities\") pod \"redhat-operators-pg7h5\" (UID: \"3c6154fc-e09c-47c7-8050-cf2e0f6a9380\") " pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:32 crc kubenswrapper[4958]: I1008 07:58:32.077250 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gg9s\" (UniqueName: \"kubernetes.io/projected/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-kube-api-access-4gg9s\") pod \"redhat-operators-pg7h5\" (UID: \"3c6154fc-e09c-47c7-8050-cf2e0f6a9380\") " pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:32 crc kubenswrapper[4958]: I1008 07:58:32.245427 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:32 crc kubenswrapper[4958]: I1008 07:58:32.766874 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pg7h5"] Oct 08 07:58:33 crc kubenswrapper[4958]: I1008 07:58:33.041691 4958 generic.go:334] "Generic (PLEG): container finished" podID="3c6154fc-e09c-47c7-8050-cf2e0f6a9380" containerID="b53dfffa9d99394c659a8b4436db09e7a86ef82d0c6459b325c1892a0f48be66" exitCode=0 Oct 08 07:58:33 crc kubenswrapper[4958]: I1008 07:58:33.041912 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg7h5" event={"ID":"3c6154fc-e09c-47c7-8050-cf2e0f6a9380","Type":"ContainerDied","Data":"b53dfffa9d99394c659a8b4436db09e7a86ef82d0c6459b325c1892a0f48be66"} Oct 08 07:58:33 crc kubenswrapper[4958]: I1008 07:58:33.041978 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg7h5" event={"ID":"3c6154fc-e09c-47c7-8050-cf2e0f6a9380","Type":"ContainerStarted","Data":"86a6245684199234b47fefa046c28e0d414022970ec4795a50c5fc439ccfcc91"} Oct 08 07:58:34 crc kubenswrapper[4958]: I1008 07:58:34.061300 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg7h5" event={"ID":"3c6154fc-e09c-47c7-8050-cf2e0f6a9380","Type":"ContainerStarted","Data":"7c952731e6f3acd7a0498ff96f2f14653e801aa30e96b72d12337066c225ac5b"} Oct 08 07:58:34 crc kubenswrapper[4958]: E1008 07:58:34.452577 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c6154fc_e09c_47c7_8050_cf2e0f6a9380.slice/crio-7c952731e6f3acd7a0498ff96f2f14653e801aa30e96b72d12337066c225ac5b.scope\": RecentStats: unable to find data in memory cache]" Oct 08 07:58:35 crc kubenswrapper[4958]: I1008 07:58:35.073281 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="3c6154fc-e09c-47c7-8050-cf2e0f6a9380" containerID="7c952731e6f3acd7a0498ff96f2f14653e801aa30e96b72d12337066c225ac5b" exitCode=0 Oct 08 07:58:35 crc kubenswrapper[4958]: I1008 07:58:35.073397 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg7h5" event={"ID":"3c6154fc-e09c-47c7-8050-cf2e0f6a9380","Type":"ContainerDied","Data":"7c952731e6f3acd7a0498ff96f2f14653e801aa30e96b72d12337066c225ac5b"} Oct 08 07:58:36 crc kubenswrapper[4958]: I1008 07:58:36.088753 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg7h5" event={"ID":"3c6154fc-e09c-47c7-8050-cf2e0f6a9380","Type":"ContainerStarted","Data":"744cce88eb0d2597986ebbc718a36d0ccb903565c5e4149ce7f76c8d5240fdb1"} Oct 08 07:58:36 crc kubenswrapper[4958]: I1008 07:58:36.124772 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pg7h5" podStartSLOduration=2.672387526 podStartE2EDuration="5.124746439s" podCreationTimestamp="2025-10-08 07:58:31 +0000 UTC" firstStartedPulling="2025-10-08 07:58:33.043870108 +0000 UTC m=+5056.173562709" lastFinishedPulling="2025-10-08 07:58:35.496228981 +0000 UTC m=+5058.625921622" observedRunningTime="2025-10-08 07:58:36.116931968 +0000 UTC m=+5059.246624629" watchObservedRunningTime="2025-10-08 07:58:36.124746439 +0000 UTC m=+5059.254439070" Oct 08 07:58:36 crc kubenswrapper[4958]: I1008 07:58:36.845125 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:58:36 crc kubenswrapper[4958]: I1008 07:58:36.845226 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:58:42 crc kubenswrapper[4958]: I1008 07:58:42.246669 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:42 crc kubenswrapper[4958]: I1008 07:58:42.247386 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:42 crc kubenswrapper[4958]: I1008 07:58:42.327867 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:43 crc kubenswrapper[4958]: I1008 07:58:43.227117 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:43 crc kubenswrapper[4958]: I1008 07:58:43.300638 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pg7h5"] Oct 08 07:58:45 crc kubenswrapper[4958]: I1008 07:58:45.173668 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pg7h5" podUID="3c6154fc-e09c-47c7-8050-cf2e0f6a9380" containerName="registry-server" containerID="cri-o://744cce88eb0d2597986ebbc718a36d0ccb903565c5e4149ce7f76c8d5240fdb1" gracePeriod=2 Oct 08 07:58:46 crc kubenswrapper[4958]: I1008 07:58:46.199528 4958 generic.go:334] "Generic (PLEG): container finished" podID="3c6154fc-e09c-47c7-8050-cf2e0f6a9380" containerID="744cce88eb0d2597986ebbc718a36d0ccb903565c5e4149ce7f76c8d5240fdb1" exitCode=0 Oct 08 07:58:46 crc kubenswrapper[4958]: I1008 07:58:46.199922 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg7h5" 
event={"ID":"3c6154fc-e09c-47c7-8050-cf2e0f6a9380","Type":"ContainerDied","Data":"744cce88eb0d2597986ebbc718a36d0ccb903565c5e4149ce7f76c8d5240fdb1"} Oct 08 07:58:46 crc kubenswrapper[4958]: I1008 07:58:46.294325 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:46 crc kubenswrapper[4958]: I1008 07:58:46.401237 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gg9s\" (UniqueName: \"kubernetes.io/projected/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-kube-api-access-4gg9s\") pod \"3c6154fc-e09c-47c7-8050-cf2e0f6a9380\" (UID: \"3c6154fc-e09c-47c7-8050-cf2e0f6a9380\") " Oct 08 07:58:46 crc kubenswrapper[4958]: I1008 07:58:46.401322 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-utilities\") pod \"3c6154fc-e09c-47c7-8050-cf2e0f6a9380\" (UID: \"3c6154fc-e09c-47c7-8050-cf2e0f6a9380\") " Oct 08 07:58:46 crc kubenswrapper[4958]: I1008 07:58:46.401393 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-catalog-content\") pod \"3c6154fc-e09c-47c7-8050-cf2e0f6a9380\" (UID: \"3c6154fc-e09c-47c7-8050-cf2e0f6a9380\") " Oct 08 07:58:46 crc kubenswrapper[4958]: I1008 07:58:46.403188 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-utilities" (OuterVolumeSpecName: "utilities") pod "3c6154fc-e09c-47c7-8050-cf2e0f6a9380" (UID: "3c6154fc-e09c-47c7-8050-cf2e0f6a9380"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:58:46 crc kubenswrapper[4958]: I1008 07:58:46.412083 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-kube-api-access-4gg9s" (OuterVolumeSpecName: "kube-api-access-4gg9s") pod "3c6154fc-e09c-47c7-8050-cf2e0f6a9380" (UID: "3c6154fc-e09c-47c7-8050-cf2e0f6a9380"). InnerVolumeSpecName "kube-api-access-4gg9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:58:46 crc kubenswrapper[4958]: I1008 07:58:46.495812 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c6154fc-e09c-47c7-8050-cf2e0f6a9380" (UID: "3c6154fc-e09c-47c7-8050-cf2e0f6a9380"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 07:58:46 crc kubenswrapper[4958]: I1008 07:58:46.503420 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gg9s\" (UniqueName: \"kubernetes.io/projected/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-kube-api-access-4gg9s\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:46 crc kubenswrapper[4958]: I1008 07:58:46.503476 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:46 crc kubenswrapper[4958]: I1008 07:58:46.503495 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c6154fc-e09c-47c7-8050-cf2e0f6a9380-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 07:58:47 crc kubenswrapper[4958]: I1008 07:58:47.214802 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pg7h5" 
event={"ID":"3c6154fc-e09c-47c7-8050-cf2e0f6a9380","Type":"ContainerDied","Data":"86a6245684199234b47fefa046c28e0d414022970ec4795a50c5fc439ccfcc91"} Oct 08 07:58:47 crc kubenswrapper[4958]: I1008 07:58:47.216280 4958 scope.go:117] "RemoveContainer" containerID="744cce88eb0d2597986ebbc718a36d0ccb903565c5e4149ce7f76c8d5240fdb1" Oct 08 07:58:47 crc kubenswrapper[4958]: I1008 07:58:47.215024 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pg7h5" Oct 08 07:58:47 crc kubenswrapper[4958]: I1008 07:58:47.248579 4958 scope.go:117] "RemoveContainer" containerID="7c952731e6f3acd7a0498ff96f2f14653e801aa30e96b72d12337066c225ac5b" Oct 08 07:58:47 crc kubenswrapper[4958]: I1008 07:58:47.283093 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pg7h5"] Oct 08 07:58:47 crc kubenswrapper[4958]: I1008 07:58:47.287691 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pg7h5"] Oct 08 07:58:47 crc kubenswrapper[4958]: I1008 07:58:47.299014 4958 scope.go:117] "RemoveContainer" containerID="b53dfffa9d99394c659a8b4436db09e7a86ef82d0c6459b325c1892a0f48be66" Oct 08 07:58:47 crc kubenswrapper[4958]: I1008 07:58:47.592822 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c6154fc-e09c-47c7-8050-cf2e0f6a9380" path="/var/lib/kubelet/pods/3c6154fc-e09c-47c7-8050-cf2e0f6a9380/volumes" Oct 08 07:58:55 crc kubenswrapper[4958]: I1008 07:58:55.292613 4958 generic.go:334] "Generic (PLEG): container finished" podID="c3d0e354-e762-4a61-b317-93aad016cc26" containerID="cbf6a6698c1b2a85e9c952503f06d3aadae792669e6bd62a2a007304dfeb6834" exitCode=0 Oct 08 07:58:55 crc kubenswrapper[4958]: I1008 07:58:55.293465 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"c3d0e354-e762-4a61-b317-93aad016cc26","Type":"ContainerDied","Data":"cbf6a6698c1b2a85e9c952503f06d3aadae792669e6bd62a2a007304dfeb6834"} Oct 08 07:58:56 crc kubenswrapper[4958]: I1008 07:58:56.304923 4958 generic.go:334] "Generic (PLEG): container finished" podID="62ba1db3-cb42-437b-bc67-08521afcd2d2" containerID="391bfa2ba1fbb244d6cae15bb831ef0898b5b0a5443d99db6db47a1450ce4bc9" exitCode=0 Oct 08 07:58:56 crc kubenswrapper[4958]: I1008 07:58:56.305023 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"62ba1db3-cb42-437b-bc67-08521afcd2d2","Type":"ContainerDied","Data":"391bfa2ba1fbb244d6cae15bb831ef0898b5b0a5443d99db6db47a1450ce4bc9"} Oct 08 07:58:56 crc kubenswrapper[4958]: I1008 07:58:56.308238 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3d0e354-e762-4a61-b317-93aad016cc26","Type":"ContainerStarted","Data":"7035e35a0d20a85fe7956d3674a57643034113bab2e7c02ac866fb8019703aee"} Oct 08 07:58:56 crc kubenswrapper[4958]: I1008 07:58:56.308456 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 08 07:58:56 crc kubenswrapper[4958]: I1008 07:58:56.390986 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.390960598 podStartE2EDuration="39.390960598s" podCreationTimestamp="2025-10-08 07:58:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 07:58:56.378923402 +0000 UTC m=+5079.508616003" watchObservedRunningTime="2025-10-08 07:58:56.390960598 +0000 UTC m=+5079.520653199" Oct 08 07:58:57 crc kubenswrapper[4958]: I1008 07:58:57.317833 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"62ba1db3-cb42-437b-bc67-08521afcd2d2","Type":"ContainerStarted","Data":"3e88e40ebd4047b34fc2c21113112dd475d67bc76413c821075dfefc3ac267e9"} Oct 08 07:58:57 crc kubenswrapper[4958]: I1008 07:58:57.318531 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:58:57 crc kubenswrapper[4958]: I1008 07:58:57.347275 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.347256202 podStartE2EDuration="39.347256202s" podCreationTimestamp="2025-10-08 07:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 07:58:57.342297568 +0000 UTC m=+5080.471990179" watchObservedRunningTime="2025-10-08 07:58:57.347256202 +0000 UTC m=+5080.476948803" Oct 08 07:59:06 crc kubenswrapper[4958]: I1008 07:59:06.844850 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 07:59:06 crc kubenswrapper[4958]: I1008 07:59:06.845234 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 07:59:06 crc kubenswrapper[4958]: I1008 07:59:06.845275 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 07:59:06 crc kubenswrapper[4958]: I1008 07:59:06.845816 4958 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"986a687436ce24cd14a49af6d4c159b01d6d8dafeb34a2b2beac3221184d187e"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 07:59:06 crc kubenswrapper[4958]: I1008 07:59:06.845866 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://986a687436ce24cd14a49af6d4c159b01d6d8dafeb34a2b2beac3221184d187e" gracePeriod=600 Oct 08 07:59:07 crc kubenswrapper[4958]: I1008 07:59:07.399346 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="986a687436ce24cd14a49af6d4c159b01d6d8dafeb34a2b2beac3221184d187e" exitCode=0 Oct 08 07:59:07 crc kubenswrapper[4958]: I1008 07:59:07.399433 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"986a687436ce24cd14a49af6d4c159b01d6d8dafeb34a2b2beac3221184d187e"} Oct 08 07:59:07 crc kubenswrapper[4958]: I1008 07:59:07.399721 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217"} Oct 08 07:59:07 crc kubenswrapper[4958]: I1008 07:59:07.399771 4958 scope.go:117] "RemoveContainer" containerID="f42f5ac279d65291c2e09b2c5926d81d6a423bad5206455eb8a8f1be51416f7d" Oct 08 07:59:08 crc kubenswrapper[4958]: I1008 07:59:08.386405 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 08 07:59:08 crc 
kubenswrapper[4958]: I1008 07:59:08.603210 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 08 07:59:10 crc kubenswrapper[4958]: I1008 07:59:10.093141 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Oct 08 07:59:10 crc kubenswrapper[4958]: E1008 07:59:10.093833 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6154fc-e09c-47c7-8050-cf2e0f6a9380" containerName="extract-utilities" Oct 08 07:59:10 crc kubenswrapper[4958]: I1008 07:59:10.093849 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6154fc-e09c-47c7-8050-cf2e0f6a9380" containerName="extract-utilities" Oct 08 07:59:10 crc kubenswrapper[4958]: E1008 07:59:10.093870 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6154fc-e09c-47c7-8050-cf2e0f6a9380" containerName="registry-server" Oct 08 07:59:10 crc kubenswrapper[4958]: I1008 07:59:10.093879 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6154fc-e09c-47c7-8050-cf2e0f6a9380" containerName="registry-server" Oct 08 07:59:10 crc kubenswrapper[4958]: E1008 07:59:10.093894 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6154fc-e09c-47c7-8050-cf2e0f6a9380" containerName="extract-content" Oct 08 07:59:10 crc kubenswrapper[4958]: I1008 07:59:10.093902 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6154fc-e09c-47c7-8050-cf2e0f6a9380" containerName="extract-content" Oct 08 07:59:10 crc kubenswrapper[4958]: I1008 07:59:10.094146 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6154fc-e09c-47c7-8050-cf2e0f6a9380" containerName="registry-server" Oct 08 07:59:10 crc kubenswrapper[4958]: I1008 07:59:10.094705 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 08 07:59:10 crc kubenswrapper[4958]: I1008 07:59:10.100896 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 08 07:59:10 crc kubenswrapper[4958]: I1008 07:59:10.103251 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bb9bl" Oct 08 07:59:10 crc kubenswrapper[4958]: I1008 07:59:10.215214 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg5zq\" (UniqueName: \"kubernetes.io/projected/f63cdef7-8180-48c9-83f1-6a0e21559f45-kube-api-access-pg5zq\") pod \"mariadb-client-1-default\" (UID: \"f63cdef7-8180-48c9-83f1-6a0e21559f45\") " pod="openstack/mariadb-client-1-default" Oct 08 07:59:10 crc kubenswrapper[4958]: I1008 07:59:10.317167 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg5zq\" (UniqueName: \"kubernetes.io/projected/f63cdef7-8180-48c9-83f1-6a0e21559f45-kube-api-access-pg5zq\") pod \"mariadb-client-1-default\" (UID: \"f63cdef7-8180-48c9-83f1-6a0e21559f45\") " pod="openstack/mariadb-client-1-default" Oct 08 07:59:10 crc kubenswrapper[4958]: I1008 07:59:10.346416 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg5zq\" (UniqueName: \"kubernetes.io/projected/f63cdef7-8180-48c9-83f1-6a0e21559f45-kube-api-access-pg5zq\") pod \"mariadb-client-1-default\" (UID: \"f63cdef7-8180-48c9-83f1-6a0e21559f45\") " pod="openstack/mariadb-client-1-default" Oct 08 07:59:10 crc kubenswrapper[4958]: I1008 07:59:10.433522 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 08 07:59:11 crc kubenswrapper[4958]: I1008 07:59:11.157173 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 08 07:59:11 crc kubenswrapper[4958]: I1008 07:59:11.459762 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"f63cdef7-8180-48c9-83f1-6a0e21559f45","Type":"ContainerStarted","Data":"b06c3fa9f738f2bbafec12e41326615042ac90ec70380526bcb4e1d916b87e3d"} Oct 08 07:59:12 crc kubenswrapper[4958]: I1008 07:59:12.475903 4958 generic.go:334] "Generic (PLEG): container finished" podID="f63cdef7-8180-48c9-83f1-6a0e21559f45" containerID="ed1b1ae0336a53d188e6777f1d14e0a1195bc42d0b29c87c412fa5db2167f079" exitCode=0 Oct 08 07:59:12 crc kubenswrapper[4958]: I1008 07:59:12.476006 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"f63cdef7-8180-48c9-83f1-6a0e21559f45","Type":"ContainerDied","Data":"ed1b1ae0336a53d188e6777f1d14e0a1195bc42d0b29c87c412fa5db2167f079"} Oct 08 07:59:13 crc kubenswrapper[4958]: I1008 07:59:13.913883 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 08 07:59:13 crc kubenswrapper[4958]: I1008 07:59:13.952628 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_f63cdef7-8180-48c9-83f1-6a0e21559f45/mariadb-client-1-default/0.log" Oct 08 07:59:13 crc kubenswrapper[4958]: I1008 07:59:13.978169 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg5zq\" (UniqueName: \"kubernetes.io/projected/f63cdef7-8180-48c9-83f1-6a0e21559f45-kube-api-access-pg5zq\") pod \"f63cdef7-8180-48c9-83f1-6a0e21559f45\" (UID: \"f63cdef7-8180-48c9-83f1-6a0e21559f45\") " Oct 08 07:59:13 crc kubenswrapper[4958]: I1008 07:59:13.986520 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 08 07:59:13 crc kubenswrapper[4958]: I1008 07:59:13.989434 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f63cdef7-8180-48c9-83f1-6a0e21559f45-kube-api-access-pg5zq" (OuterVolumeSpecName: "kube-api-access-pg5zq") pod "f63cdef7-8180-48c9-83f1-6a0e21559f45" (UID: "f63cdef7-8180-48c9-83f1-6a0e21559f45"). InnerVolumeSpecName "kube-api-access-pg5zq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:59:13 crc kubenswrapper[4958]: I1008 07:59:13.992793 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Oct 08 07:59:14 crc kubenswrapper[4958]: I1008 07:59:14.080560 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg5zq\" (UniqueName: \"kubernetes.io/projected/f63cdef7-8180-48c9-83f1-6a0e21559f45-kube-api-access-pg5zq\") on node \"crc\" DevicePath \"\"" Oct 08 07:59:14 crc kubenswrapper[4958]: I1008 07:59:14.507686 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b06c3fa9f738f2bbafec12e41326615042ac90ec70380526bcb4e1d916b87e3d" Oct 08 07:59:14 crc kubenswrapper[4958]: I1008 07:59:14.507767 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Oct 08 07:59:14 crc kubenswrapper[4958]: I1008 07:59:14.582974 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Oct 08 07:59:14 crc kubenswrapper[4958]: E1008 07:59:14.583639 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63cdef7-8180-48c9-83f1-6a0e21559f45" containerName="mariadb-client-1-default" Oct 08 07:59:14 crc kubenswrapper[4958]: I1008 07:59:14.583682 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63cdef7-8180-48c9-83f1-6a0e21559f45" containerName="mariadb-client-1-default" Oct 08 07:59:14 crc kubenswrapper[4958]: I1008 07:59:14.584257 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f63cdef7-8180-48c9-83f1-6a0e21559f45" containerName="mariadb-client-1-default" Oct 08 07:59:14 crc kubenswrapper[4958]: I1008 07:59:14.585568 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 08 07:59:14 crc kubenswrapper[4958]: I1008 07:59:14.590461 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bb9bl" Oct 08 07:59:14 crc kubenswrapper[4958]: I1008 07:59:14.598294 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 08 07:59:14 crc kubenswrapper[4958]: I1008 07:59:14.689087 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwnbx\" (UniqueName: \"kubernetes.io/projected/667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b-kube-api-access-xwnbx\") pod \"mariadb-client-2-default\" (UID: \"667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b\") " pod="openstack/mariadb-client-2-default" Oct 08 07:59:14 crc kubenswrapper[4958]: I1008 07:59:14.791087 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwnbx\" (UniqueName: \"kubernetes.io/projected/667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b-kube-api-access-xwnbx\") pod \"mariadb-client-2-default\" (UID: \"667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b\") " pod="openstack/mariadb-client-2-default" Oct 08 07:59:14 crc kubenswrapper[4958]: I1008 07:59:14.819538 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwnbx\" (UniqueName: \"kubernetes.io/projected/667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b-kube-api-access-xwnbx\") pod \"mariadb-client-2-default\" (UID: \"667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b\") " pod="openstack/mariadb-client-2-default" Oct 08 07:59:14 crc kubenswrapper[4958]: I1008 07:59:14.905060 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 08 07:59:15 crc kubenswrapper[4958]: I1008 07:59:15.508253 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 08 07:59:15 crc kubenswrapper[4958]: W1008 07:59:15.511873 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod667c5a3c_8e9a_4c6d_bf23_9f7c98cb9f8b.slice/crio-55e1741d2744163c86c4067e68e53764822689001044542198fba1bcc8301ba6 WatchSource:0}: Error finding container 55e1741d2744163c86c4067e68e53764822689001044542198fba1bcc8301ba6: Status 404 returned error can't find the container with id 55e1741d2744163c86c4067e68e53764822689001044542198fba1bcc8301ba6 Oct 08 07:59:15 crc kubenswrapper[4958]: I1008 07:59:15.590037 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f63cdef7-8180-48c9-83f1-6a0e21559f45" path="/var/lib/kubelet/pods/f63cdef7-8180-48c9-83f1-6a0e21559f45/volumes" Oct 08 07:59:16 crc kubenswrapper[4958]: I1008 07:59:16.528923 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b","Type":"ContainerStarted","Data":"7be82dc085cd120f7e7bdfaaf5faddc5f6901601634936958b99c7b31a8b0a56"} Oct 08 07:59:16 crc kubenswrapper[4958]: I1008 07:59:16.529435 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b","Type":"ContainerStarted","Data":"55e1741d2744163c86c4067e68e53764822689001044542198fba1bcc8301ba6"} Oct 08 07:59:16 crc kubenswrapper[4958]: I1008 07:59:16.555007 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-2-default" podStartSLOduration=2.554979251 podStartE2EDuration="2.554979251s" podCreationTimestamp="2025-10-08 07:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 07:59:16.548204518 +0000 UTC m=+5099.677897139" watchObservedRunningTime="2025-10-08 07:59:16.554979251 +0000 UTC m=+5099.684671892" Oct 08 07:59:16 crc kubenswrapper[4958]: I1008 07:59:16.638260 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2-default_667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b/mariadb-client-2-default/0.log" Oct 08 07:59:17 crc kubenswrapper[4958]: I1008 07:59:17.540890 4958 generic.go:334] "Generic (PLEG): container finished" podID="667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b" containerID="7be82dc085cd120f7e7bdfaaf5faddc5f6901601634936958b99c7b31a8b0a56" exitCode=0 Oct 08 07:59:17 crc kubenswrapper[4958]: I1008 07:59:17.540988 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b","Type":"ContainerDied","Data":"7be82dc085cd120f7e7bdfaaf5faddc5f6901601634936958b99c7b31a8b0a56"} Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.070491 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.115101 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.124525 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.172235 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwnbx\" (UniqueName: \"kubernetes.io/projected/667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b-kube-api-access-xwnbx\") pod \"667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b\" (UID: \"667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b\") " Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.180259 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b-kube-api-access-xwnbx" (OuterVolumeSpecName: "kube-api-access-xwnbx") pod "667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b" (UID: "667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b"). InnerVolumeSpecName "kube-api-access-xwnbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.274684 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwnbx\" (UniqueName: \"kubernetes.io/projected/667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b-kube-api-access-xwnbx\") on node \"crc\" DevicePath \"\"" Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.559356 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55e1741d2744163c86c4067e68e53764822689001044542198fba1bcc8301ba6" Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.559387 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.584801 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b" path="/var/lib/kubelet/pods/667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b/volumes" Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.733530 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Oct 08 07:59:19 crc kubenswrapper[4958]: E1008 07:59:19.734222 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b" containerName="mariadb-client-2-default" Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.734255 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b" containerName="mariadb-client-2-default" Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.734541 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="667c5a3c-8e9a-4c6d-bf23-9f7c98cb9f8b" containerName="mariadb-client-2-default" Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.735338 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.738538 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bb9bl" Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.742429 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.883046 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zg9z\" (UniqueName: \"kubernetes.io/projected/db80e52e-e629-4996-a336-4c012050b16d-kube-api-access-6zg9z\") pod \"mariadb-client-1\" (UID: \"db80e52e-e629-4996-a336-4c012050b16d\") " pod="openstack/mariadb-client-1" Oct 08 07:59:19 crc kubenswrapper[4958]: I1008 07:59:19.984985 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zg9z\" (UniqueName: \"kubernetes.io/projected/db80e52e-e629-4996-a336-4c012050b16d-kube-api-access-6zg9z\") pod \"mariadb-client-1\" (UID: \"db80e52e-e629-4996-a336-4c012050b16d\") " pod="openstack/mariadb-client-1" Oct 08 07:59:20 crc kubenswrapper[4958]: I1008 07:59:20.006247 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zg9z\" (UniqueName: \"kubernetes.io/projected/db80e52e-e629-4996-a336-4c012050b16d-kube-api-access-6zg9z\") pod \"mariadb-client-1\" (UID: \"db80e52e-e629-4996-a336-4c012050b16d\") " pod="openstack/mariadb-client-1" Oct 08 07:59:20 crc kubenswrapper[4958]: I1008 07:59:20.055337 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Oct 08 07:59:20 crc kubenswrapper[4958]: I1008 07:59:20.648130 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Oct 08 07:59:20 crc kubenswrapper[4958]: W1008 07:59:20.658535 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb80e52e_e629_4996_a336_4c012050b16d.slice/crio-cfd75c13354a1d1b5c7a3960405427bacddac64bb8ae5a54acc3be44bb2c30f6 WatchSource:0}: Error finding container cfd75c13354a1d1b5c7a3960405427bacddac64bb8ae5a54acc3be44bb2c30f6: Status 404 returned error can't find the container with id cfd75c13354a1d1b5c7a3960405427bacddac64bb8ae5a54acc3be44bb2c30f6 Oct 08 07:59:21 crc kubenswrapper[4958]: I1008 07:59:21.583509 4958 generic.go:334] "Generic (PLEG): container finished" podID="db80e52e-e629-4996-a336-4c012050b16d" containerID="2c0a5a954df3eb78983b0eae5068936a17ce20d3aa0e56c74223aaebe4be64b4" exitCode=0 Oct 08 07:59:21 crc kubenswrapper[4958]: I1008 07:59:21.590036 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"db80e52e-e629-4996-a336-4c012050b16d","Type":"ContainerDied","Data":"2c0a5a954df3eb78983b0eae5068936a17ce20d3aa0e56c74223aaebe4be64b4"} Oct 08 07:59:21 crc kubenswrapper[4958]: I1008 07:59:21.590095 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"db80e52e-e629-4996-a336-4c012050b16d","Type":"ContainerStarted","Data":"cfd75c13354a1d1b5c7a3960405427bacddac64bb8ae5a54acc3be44bb2c30f6"} Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.063055 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.091556 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_db80e52e-e629-4996-a336-4c012050b16d/mariadb-client-1/0.log" Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.123079 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.128576 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.238109 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zg9z\" (UniqueName: \"kubernetes.io/projected/db80e52e-e629-4996-a336-4c012050b16d-kube-api-access-6zg9z\") pod \"db80e52e-e629-4996-a336-4c012050b16d\" (UID: \"db80e52e-e629-4996-a336-4c012050b16d\") " Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.246334 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db80e52e-e629-4996-a336-4c012050b16d-kube-api-access-6zg9z" (OuterVolumeSpecName: "kube-api-access-6zg9z") pod "db80e52e-e629-4996-a336-4c012050b16d" (UID: "db80e52e-e629-4996-a336-4c012050b16d"). InnerVolumeSpecName "kube-api-access-6zg9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.340940 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zg9z\" (UniqueName: \"kubernetes.io/projected/db80e52e-e629-4996-a336-4c012050b16d-kube-api-access-6zg9z\") on node \"crc\" DevicePath \"\"" Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.595450 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db80e52e-e629-4996-a336-4c012050b16d" path="/var/lib/kubelet/pods/db80e52e-e629-4996-a336-4c012050b16d/volumes" Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.603315 4958 scope.go:117] "RemoveContainer" containerID="2c0a5a954df3eb78983b0eae5068936a17ce20d3aa0e56c74223aaebe4be64b4" Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.603387 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.648606 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Oct 08 07:59:23 crc kubenswrapper[4958]: E1008 07:59:23.649222 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db80e52e-e629-4996-a336-4c012050b16d" containerName="mariadb-client-1" Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.649262 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="db80e52e-e629-4996-a336-4c012050b16d" containerName="mariadb-client-1" Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.649602 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="db80e52e-e629-4996-a336-4c012050b16d" containerName="mariadb-client-1" Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.650482 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.653555 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bb9bl" Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.661623 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.851625 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm6gj\" (UniqueName: \"kubernetes.io/projected/dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7-kube-api-access-fm6gj\") pod \"mariadb-client-4-default\" (UID: \"dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7\") " pod="openstack/mariadb-client-4-default" Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.955008 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm6gj\" (UniqueName: \"kubernetes.io/projected/dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7-kube-api-access-fm6gj\") pod \"mariadb-client-4-default\" (UID: \"dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7\") " pod="openstack/mariadb-client-4-default" Oct 08 07:59:23 crc kubenswrapper[4958]: I1008 07:59:23.986840 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm6gj\" (UniqueName: \"kubernetes.io/projected/dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7-kube-api-access-fm6gj\") pod \"mariadb-client-4-default\" (UID: \"dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7\") " pod="openstack/mariadb-client-4-default" Oct 08 07:59:24 crc kubenswrapper[4958]: I1008 07:59:24.000570 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 08 07:59:24 crc kubenswrapper[4958]: I1008 07:59:24.660580 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 08 07:59:24 crc kubenswrapper[4958]: W1008 07:59:24.664169 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcfcd4dd_b4e9_4229_bafe_038f97d8f4a7.slice/crio-5d433904914265fbf5923ecd35e4871a24ae8d9d94145ec67324aebd9849ace9 WatchSource:0}: Error finding container 5d433904914265fbf5923ecd35e4871a24ae8d9d94145ec67324aebd9849ace9: Status 404 returned error can't find the container with id 5d433904914265fbf5923ecd35e4871a24ae8d9d94145ec67324aebd9849ace9 Oct 08 07:59:25 crc kubenswrapper[4958]: I1008 07:59:25.649715 4958 generic.go:334] "Generic (PLEG): container finished" podID="dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7" containerID="2bbf3618d9b4d785a9114217748d77b0001d874ab36450d4c1c4728ea8c5e945" exitCode=0 Oct 08 07:59:25 crc kubenswrapper[4958]: I1008 07:59:25.649785 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7","Type":"ContainerDied","Data":"2bbf3618d9b4d785a9114217748d77b0001d874ab36450d4c1c4728ea8c5e945"} Oct 08 07:59:25 crc kubenswrapper[4958]: I1008 07:59:25.649830 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7","Type":"ContainerStarted","Data":"5d433904914265fbf5923ecd35e4871a24ae8d9d94145ec67324aebd9849ace9"} Oct 08 07:59:27 crc kubenswrapper[4958]: I1008 07:59:27.228964 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 08 07:59:27 crc kubenswrapper[4958]: I1008 07:59:27.251574 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7/mariadb-client-4-default/0.log" Oct 08 07:59:27 crc kubenswrapper[4958]: I1008 07:59:27.280920 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 08 07:59:27 crc kubenswrapper[4958]: I1008 07:59:27.285204 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Oct 08 07:59:27 crc kubenswrapper[4958]: I1008 07:59:27.423597 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm6gj\" (UniqueName: \"kubernetes.io/projected/dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7-kube-api-access-fm6gj\") pod \"dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7\" (UID: \"dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7\") " Oct 08 07:59:27 crc kubenswrapper[4958]: I1008 07:59:27.672405 4958 scope.go:117] "RemoveContainer" containerID="2bbf3618d9b4d785a9114217748d77b0001d874ab36450d4c1c4728ea8c5e945" Oct 08 07:59:27 crc kubenswrapper[4958]: I1008 07:59:27.672463 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Oct 08 07:59:27 crc kubenswrapper[4958]: I1008 07:59:27.938364 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7-kube-api-access-fm6gj" (OuterVolumeSpecName: "kube-api-access-fm6gj") pod "dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7" (UID: "dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7"). InnerVolumeSpecName "kube-api-access-fm6gj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:59:28 crc kubenswrapper[4958]: I1008 07:59:28.036036 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm6gj\" (UniqueName: \"kubernetes.io/projected/dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7-kube-api-access-fm6gj\") on node \"crc\" DevicePath \"\"" Oct 08 07:59:29 crc kubenswrapper[4958]: I1008 07:59:29.594360 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7" path="/var/lib/kubelet/pods/dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7/volumes" Oct 08 07:59:33 crc kubenswrapper[4958]: I1008 07:59:33.198725 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Oct 08 07:59:33 crc kubenswrapper[4958]: E1008 07:59:33.199795 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7" containerName="mariadb-client-4-default" Oct 08 07:59:33 crc kubenswrapper[4958]: I1008 07:59:33.199834 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7" containerName="mariadb-client-4-default" Oct 08 07:59:33 crc kubenswrapper[4958]: I1008 07:59:33.200263 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcfcd4dd-b4e9-4229-bafe-038f97d8f4a7" containerName="mariadb-client-4-default" Oct 08 07:59:33 crc kubenswrapper[4958]: I1008 07:59:33.201486 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 08 07:59:33 crc kubenswrapper[4958]: I1008 07:59:33.206045 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bb9bl" Oct 08 07:59:33 crc kubenswrapper[4958]: I1008 07:59:33.217961 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 08 07:59:33 crc kubenswrapper[4958]: I1008 07:59:33.333781 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djf5w\" (UniqueName: \"kubernetes.io/projected/d1bd6d0b-1d44-4380-9fa6-f704f3ce218f-kube-api-access-djf5w\") pod \"mariadb-client-5-default\" (UID: \"d1bd6d0b-1d44-4380-9fa6-f704f3ce218f\") " pod="openstack/mariadb-client-5-default" Oct 08 07:59:33 crc kubenswrapper[4958]: I1008 07:59:33.436464 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djf5w\" (UniqueName: \"kubernetes.io/projected/d1bd6d0b-1d44-4380-9fa6-f704f3ce218f-kube-api-access-djf5w\") pod \"mariadb-client-5-default\" (UID: \"d1bd6d0b-1d44-4380-9fa6-f704f3ce218f\") " pod="openstack/mariadb-client-5-default" Oct 08 07:59:33 crc kubenswrapper[4958]: I1008 07:59:33.468086 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djf5w\" (UniqueName: \"kubernetes.io/projected/d1bd6d0b-1d44-4380-9fa6-f704f3ce218f-kube-api-access-djf5w\") pod \"mariadb-client-5-default\" (UID: \"d1bd6d0b-1d44-4380-9fa6-f704f3ce218f\") " pod="openstack/mariadb-client-5-default" Oct 08 07:59:33 crc kubenswrapper[4958]: I1008 07:59:33.537101 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 08 07:59:33 crc kubenswrapper[4958]: I1008 07:59:33.939254 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 08 07:59:34 crc kubenswrapper[4958]: I1008 07:59:34.749030 4958 generic.go:334] "Generic (PLEG): container finished" podID="d1bd6d0b-1d44-4380-9fa6-f704f3ce218f" containerID="13a1aa8c3cfa66b0d9456e96ce134885d148cb1268fecc0c65bc1a615fbe9f31" exitCode=0 Oct 08 07:59:34 crc kubenswrapper[4958]: I1008 07:59:34.749113 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"d1bd6d0b-1d44-4380-9fa6-f704f3ce218f","Type":"ContainerDied","Data":"13a1aa8c3cfa66b0d9456e96ce134885d148cb1268fecc0c65bc1a615fbe9f31"} Oct 08 07:59:34 crc kubenswrapper[4958]: I1008 07:59:34.749168 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"d1bd6d0b-1d44-4380-9fa6-f704f3ce218f","Type":"ContainerStarted","Data":"1ab742cfed1b6253e9b4fa195190e5525f9fbfddd71ddc07a12d024705c8c492"} Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.236716 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.266484 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_d1bd6d0b-1d44-4380-9fa6-f704f3ce218f/mariadb-client-5-default/0.log" Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.302728 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.311983 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.390190 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djf5w\" (UniqueName: \"kubernetes.io/projected/d1bd6d0b-1d44-4380-9fa6-f704f3ce218f-kube-api-access-djf5w\") pod \"d1bd6d0b-1d44-4380-9fa6-f704f3ce218f\" (UID: \"d1bd6d0b-1d44-4380-9fa6-f704f3ce218f\") " Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.398572 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1bd6d0b-1d44-4380-9fa6-f704f3ce218f-kube-api-access-djf5w" (OuterVolumeSpecName: "kube-api-access-djf5w") pod "d1bd6d0b-1d44-4380-9fa6-f704f3ce218f" (UID: "d1bd6d0b-1d44-4380-9fa6-f704f3ce218f"). InnerVolumeSpecName "kube-api-access-djf5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.496861 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Oct 08 07:59:36 crc kubenswrapper[4958]: E1008 07:59:36.499696 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1bd6d0b-1d44-4380-9fa6-f704f3ce218f" containerName="mariadb-client-5-default" Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.499766 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1bd6d0b-1d44-4380-9fa6-f704f3ce218f" containerName="mariadb-client-5-default" Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.509322 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1bd6d0b-1d44-4380-9fa6-f704f3ce218f" containerName="mariadb-client-5-default" Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.511536 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.517313 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.528333 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djf5w\" (UniqueName: \"kubernetes.io/projected/d1bd6d0b-1d44-4380-9fa6-f704f3ce218f-kube-api-access-djf5w\") on node \"crc\" DevicePath \"\"" Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.629252 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wzkr\" (UniqueName: \"kubernetes.io/projected/c20b9baf-0f89-4159-beea-30069c712282-kube-api-access-4wzkr\") pod \"mariadb-client-6-default\" (UID: \"c20b9baf-0f89-4159-beea-30069c712282\") " pod="openstack/mariadb-client-6-default" Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.730377 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4wzkr\" (UniqueName: \"kubernetes.io/projected/c20b9baf-0f89-4159-beea-30069c712282-kube-api-access-4wzkr\") pod \"mariadb-client-6-default\" (UID: \"c20b9baf-0f89-4159-beea-30069c712282\") " pod="openstack/mariadb-client-6-default" Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.773890 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ab742cfed1b6253e9b4fa195190e5525f9fbfddd71ddc07a12d024705c8c492" Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.774061 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.781879 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wzkr\" (UniqueName: \"kubernetes.io/projected/c20b9baf-0f89-4159-beea-30069c712282-kube-api-access-4wzkr\") pod \"mariadb-client-6-default\" (UID: \"c20b9baf-0f89-4159-beea-30069c712282\") " pod="openstack/mariadb-client-6-default" Oct 08 07:59:36 crc kubenswrapper[4958]: I1008 07:59:36.845337 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 08 07:59:37 crc kubenswrapper[4958]: I1008 07:59:37.460732 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 08 07:59:37 crc kubenswrapper[4958]: I1008 07:59:37.588732 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1bd6d0b-1d44-4380-9fa6-f704f3ce218f" path="/var/lib/kubelet/pods/d1bd6d0b-1d44-4380-9fa6-f704f3ce218f/volumes" Oct 08 07:59:37 crc kubenswrapper[4958]: I1008 07:59:37.783851 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"c20b9baf-0f89-4159-beea-30069c712282","Type":"ContainerStarted","Data":"ae411008c9b620916f8be4e004740b8186c0bad7e4e00784b032402cda9e4de5"} Oct 08 07:59:37 crc kubenswrapper[4958]: I1008 07:59:37.784388 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"c20b9baf-0f89-4159-beea-30069c712282","Type":"ContainerStarted","Data":"3a303a851b1a0f385811535c5ee9b7eea0d977915a057289100ea8b49343b45f"} Oct 08 07:59:37 crc kubenswrapper[4958]: I1008 07:59:37.801801 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-6-default" podStartSLOduration=1.80176968 podStartE2EDuration="1.80176968s" podCreationTimestamp="2025-10-08 07:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 07:59:37.799374805 +0000 UTC m=+5120.929067436" watchObservedRunningTime="2025-10-08 07:59:37.80176968 +0000 UTC m=+5120.931462311" Oct 08 07:59:38 crc kubenswrapper[4958]: I1008 07:59:38.796466 4958 generic.go:334] "Generic (PLEG): container finished" podID="c20b9baf-0f89-4159-beea-30069c712282" containerID="ae411008c9b620916f8be4e004740b8186c0bad7e4e00784b032402cda9e4de5" exitCode=0 Oct 08 07:59:38 crc kubenswrapper[4958]: I1008 07:59:38.796618 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"c20b9baf-0f89-4159-beea-30069c712282","Type":"ContainerDied","Data":"ae411008c9b620916f8be4e004740b8186c0bad7e4e00784b032402cda9e4de5"} Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.333272 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.378698 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.383801 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.399057 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wzkr\" (UniqueName: \"kubernetes.io/projected/c20b9baf-0f89-4159-beea-30069c712282-kube-api-access-4wzkr\") pod \"c20b9baf-0f89-4159-beea-30069c712282\" (UID: \"c20b9baf-0f89-4159-beea-30069c712282\") " Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.407319 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20b9baf-0f89-4159-beea-30069c712282-kube-api-access-4wzkr" (OuterVolumeSpecName: "kube-api-access-4wzkr") pod "c20b9baf-0f89-4159-beea-30069c712282" (UID: "c20b9baf-0f89-4159-beea-30069c712282"). InnerVolumeSpecName "kube-api-access-4wzkr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.501113 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wzkr\" (UniqueName: \"kubernetes.io/projected/c20b9baf-0f89-4159-beea-30069c712282-kube-api-access-4wzkr\") on node \"crc\" DevicePath \"\"" Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.588909 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Oct 08 07:59:40 crc kubenswrapper[4958]: E1008 07:59:40.589326 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20b9baf-0f89-4159-beea-30069c712282" containerName="mariadb-client-6-default" Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.589341 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20b9baf-0f89-4159-beea-30069c712282" containerName="mariadb-client-6-default" Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.589556 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20b9baf-0f89-4159-beea-30069c712282" containerName="mariadb-client-6-default" Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.590092 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.597991 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.704491 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5n9j\" (UniqueName: \"kubernetes.io/projected/4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7-kube-api-access-z5n9j\") pod \"mariadb-client-7-default\" (UID: \"4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7\") " pod="openstack/mariadb-client-7-default" Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.806574 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5n9j\" (UniqueName: \"kubernetes.io/projected/4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7-kube-api-access-z5n9j\") pod \"mariadb-client-7-default\" (UID: \"4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7\") " pod="openstack/mariadb-client-7-default" Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.823659 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a303a851b1a0f385811535c5ee9b7eea0d977915a057289100ea8b49343b45f" Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.823737 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.836036 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5n9j\" (UniqueName: \"kubernetes.io/projected/4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7-kube-api-access-z5n9j\") pod \"mariadb-client-7-default\" (UID: \"4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7\") " pod="openstack/mariadb-client-7-default" Oct 08 07:59:40 crc kubenswrapper[4958]: I1008 07:59:40.913435 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 08 07:59:41 crc kubenswrapper[4958]: I1008 07:59:41.286014 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 08 07:59:41 crc kubenswrapper[4958]: W1008 07:59:41.292552 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dc0a20e_5c04_4ace_b4a2_c3ad4d1fc6b7.slice/crio-bbd5f7e8ac7463ac866e7a494896c41480c1f9c1e335ec13b2bec91f597b357e WatchSource:0}: Error finding container bbd5f7e8ac7463ac866e7a494896c41480c1f9c1e335ec13b2bec91f597b357e: Status 404 returned error can't find the container with id bbd5f7e8ac7463ac866e7a494896c41480c1f9c1e335ec13b2bec91f597b357e Oct 08 07:59:41 crc kubenswrapper[4958]: I1008 07:59:41.589348 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c20b9baf-0f89-4159-beea-30069c712282" path="/var/lib/kubelet/pods/c20b9baf-0f89-4159-beea-30069c712282/volumes" Oct 08 07:59:41 crc kubenswrapper[4958]: I1008 07:59:41.839926 4958 generic.go:334] "Generic (PLEG): container finished" podID="4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7" containerID="83229e70edfbf194a7981ed7ecdfe2a130329b99136e358fe5f7cfa88801e99a" exitCode=0 Oct 08 07:59:41 crc kubenswrapper[4958]: I1008 07:59:41.840063 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7","Type":"ContainerDied","Data":"83229e70edfbf194a7981ed7ecdfe2a130329b99136e358fe5f7cfa88801e99a"} Oct 08 07:59:41 crc kubenswrapper[4958]: I1008 07:59:41.840132 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7","Type":"ContainerStarted","Data":"bbd5f7e8ac7463ac866e7a494896c41480c1f9c1e335ec13b2bec91f597b357e"} Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.506377 4958 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.534821 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7/mariadb-client-7-default/0.log" Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.548360 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5n9j\" (UniqueName: \"kubernetes.io/projected/4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7-kube-api-access-z5n9j\") pod \"4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7\" (UID: \"4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7\") " Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.563248 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7-kube-api-access-z5n9j" (OuterVolumeSpecName: "kube-api-access-z5n9j") pod "4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7" (UID: "4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7"). InnerVolumeSpecName "kube-api-access-z5n9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.573066 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.595112 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.650754 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5n9j\" (UniqueName: \"kubernetes.io/projected/4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7-kube-api-access-z5n9j\") on node \"crc\" DevicePath \"\"" Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.795346 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Oct 08 07:59:43 crc kubenswrapper[4958]: E1008 07:59:43.795650 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7" containerName="mariadb-client-7-default" Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.795664 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7" containerName="mariadb-client-7-default" Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.795815 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7" containerName="mariadb-client-7-default" Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.796297 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.809256 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.854406 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92v7k\" (UniqueName: \"kubernetes.io/projected/7387c211-e1d9-4e8a-8d16-49dd5c411eae-kube-api-access-92v7k\") pod \"mariadb-client-2\" (UID: \"7387c211-e1d9-4e8a-8d16-49dd5c411eae\") " pod="openstack/mariadb-client-2" Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.864360 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbd5f7e8ac7463ac866e7a494896c41480c1f9c1e335ec13b2bec91f597b357e" Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.864465 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.955661 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92v7k\" (UniqueName: \"kubernetes.io/projected/7387c211-e1d9-4e8a-8d16-49dd5c411eae-kube-api-access-92v7k\") pod \"mariadb-client-2\" (UID: \"7387c211-e1d9-4e8a-8d16-49dd5c411eae\") " pod="openstack/mariadb-client-2" Oct 08 07:59:43 crc kubenswrapper[4958]: I1008 07:59:43.982846 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92v7k\" (UniqueName: \"kubernetes.io/projected/7387c211-e1d9-4e8a-8d16-49dd5c411eae-kube-api-access-92v7k\") pod \"mariadb-client-2\" (UID: \"7387c211-e1d9-4e8a-8d16-49dd5c411eae\") " pod="openstack/mariadb-client-2" Oct 08 07:59:44 crc kubenswrapper[4958]: I1008 07:59:44.117037 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Oct 08 07:59:44 crc kubenswrapper[4958]: I1008 07:59:44.486623 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Oct 08 07:59:45 crc kubenswrapper[4958]: W1008 07:59:45.037302 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7387c211_e1d9_4e8a_8d16_49dd5c411eae.slice/crio-5a9b1564b1877ae626c89ad78180ee1238c6b123ba9b8021b4d8fb6943852548 WatchSource:0}: Error finding container 5a9b1564b1877ae626c89ad78180ee1238c6b123ba9b8021b4d8fb6943852548: Status 404 returned error can't find the container with id 5a9b1564b1877ae626c89ad78180ee1238c6b123ba9b8021b4d8fb6943852548 Oct 08 07:59:45 crc kubenswrapper[4958]: I1008 07:59:45.600334 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7" path="/var/lib/kubelet/pods/4dc0a20e-5c04-4ace-b4a2-c3ad4d1fc6b7/volumes" Oct 08 07:59:45 crc kubenswrapper[4958]: I1008 07:59:45.890535 4958 generic.go:334] "Generic (PLEG): container finished" podID="7387c211-e1d9-4e8a-8d16-49dd5c411eae" containerID="2d3ec60257d0b60f6c4039051bd92d3cf098b85046dac1bfdd2cac19ea64e1ef" exitCode=0 Oct 08 07:59:45 crc kubenswrapper[4958]: I1008 07:59:45.890582 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"7387c211-e1d9-4e8a-8d16-49dd5c411eae","Type":"ContainerDied","Data":"2d3ec60257d0b60f6c4039051bd92d3cf098b85046dac1bfdd2cac19ea64e1ef"} Oct 08 07:59:45 crc kubenswrapper[4958]: I1008 07:59:45.890613 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"7387c211-e1d9-4e8a-8d16-49dd5c411eae","Type":"ContainerStarted","Data":"5a9b1564b1877ae626c89ad78180ee1238c6b123ba9b8021b4d8fb6943852548"} Oct 08 07:59:47 crc kubenswrapper[4958]: I1008 07:59:47.346935 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Oct 08 07:59:47 crc kubenswrapper[4958]: I1008 07:59:47.369772 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_7387c211-e1d9-4e8a-8d16-49dd5c411eae/mariadb-client-2/0.log" Oct 08 07:59:47 crc kubenswrapper[4958]: I1008 07:59:47.402662 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Oct 08 07:59:47 crc kubenswrapper[4958]: I1008 07:59:47.411500 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Oct 08 07:59:47 crc kubenswrapper[4958]: I1008 07:59:47.433428 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92v7k\" (UniqueName: \"kubernetes.io/projected/7387c211-e1d9-4e8a-8d16-49dd5c411eae-kube-api-access-92v7k\") pod \"7387c211-e1d9-4e8a-8d16-49dd5c411eae\" (UID: \"7387c211-e1d9-4e8a-8d16-49dd5c411eae\") " Oct 08 07:59:47 crc kubenswrapper[4958]: I1008 07:59:47.438548 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7387c211-e1d9-4e8a-8d16-49dd5c411eae-kube-api-access-92v7k" (OuterVolumeSpecName: "kube-api-access-92v7k") pod "7387c211-e1d9-4e8a-8d16-49dd5c411eae" (UID: "7387c211-e1d9-4e8a-8d16-49dd5c411eae"). InnerVolumeSpecName "kube-api-access-92v7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 07:59:47 crc kubenswrapper[4958]: I1008 07:59:47.535274 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92v7k\" (UniqueName: \"kubernetes.io/projected/7387c211-e1d9-4e8a-8d16-49dd5c411eae-kube-api-access-92v7k\") on node \"crc\" DevicePath \"\"" Oct 08 07:59:47 crc kubenswrapper[4958]: I1008 07:59:47.595941 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7387c211-e1d9-4e8a-8d16-49dd5c411eae" path="/var/lib/kubelet/pods/7387c211-e1d9-4e8a-8d16-49dd5c411eae/volumes" Oct 08 07:59:47 crc kubenswrapper[4958]: I1008 07:59:47.911167 4958 scope.go:117] "RemoveContainer" containerID="2d3ec60257d0b60f6c4039051bd92d3cf098b85046dac1bfdd2cac19ea64e1ef" Oct 08 07:59:47 crc kubenswrapper[4958]: I1008 07:59:47.911291 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Oct 08 07:59:53 crc kubenswrapper[4958]: I1008 07:59:53.622050 4958 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poddb80e52e-e629-4996-a336-4c012050b16d"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poddb80e52e-e629-4996-a336-4c012050b16d] : Timed out while waiting for systemd to remove kubepods-besteffort-poddb80e52e_e629_4996_a336_4c012050b16d.slice" Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.164601 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd"] Oct 08 08:00:00 crc kubenswrapper[4958]: E1008 08:00:00.165693 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7387c211-e1d9-4e8a-8d16-49dd5c411eae" containerName="mariadb-client-2" Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.165715 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7387c211-e1d9-4e8a-8d16-49dd5c411eae" containerName="mariadb-client-2" Oct 08 08:00:00 crc 
kubenswrapper[4958]: I1008 08:00:00.166053 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7387c211-e1d9-4e8a-8d16-49dd5c411eae" containerName="mariadb-client-2" Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.166817 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.169506 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.172424 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.190580 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd"] Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.292260 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7bf775e-4caf-4e56-bdae-82da59751def-secret-volume\") pod \"collect-profiles-29331840-hwwjd\" (UID: \"c7bf775e-4caf-4e56-bdae-82da59751def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.292311 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7bf775e-4caf-4e56-bdae-82da59751def-config-volume\") pod \"collect-profiles-29331840-hwwjd\" (UID: \"c7bf775e-4caf-4e56-bdae-82da59751def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.292347 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hgqx\" (UniqueName: \"kubernetes.io/projected/c7bf775e-4caf-4e56-bdae-82da59751def-kube-api-access-8hgqx\") pod \"collect-profiles-29331840-hwwjd\" (UID: \"c7bf775e-4caf-4e56-bdae-82da59751def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.393448 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7bf775e-4caf-4e56-bdae-82da59751def-secret-volume\") pod \"collect-profiles-29331840-hwwjd\" (UID: \"c7bf775e-4caf-4e56-bdae-82da59751def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.393828 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7bf775e-4caf-4e56-bdae-82da59751def-config-volume\") pod \"collect-profiles-29331840-hwwjd\" (UID: \"c7bf775e-4caf-4e56-bdae-82da59751def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.394052 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hgqx\" (UniqueName: \"kubernetes.io/projected/c7bf775e-4caf-4e56-bdae-82da59751def-kube-api-access-8hgqx\") pod \"collect-profiles-29331840-hwwjd\" (UID: \"c7bf775e-4caf-4e56-bdae-82da59751def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.394982 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7bf775e-4caf-4e56-bdae-82da59751def-config-volume\") pod \"collect-profiles-29331840-hwwjd\" (UID: \"c7bf775e-4caf-4e56-bdae-82da59751def\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.399910 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7bf775e-4caf-4e56-bdae-82da59751def-secret-volume\") pod \"collect-profiles-29331840-hwwjd\" (UID: \"c7bf775e-4caf-4e56-bdae-82da59751def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.413221 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hgqx\" (UniqueName: \"kubernetes.io/projected/c7bf775e-4caf-4e56-bdae-82da59751def-kube-api-access-8hgqx\") pod \"collect-profiles-29331840-hwwjd\" (UID: \"c7bf775e-4caf-4e56-bdae-82da59751def\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" Oct 08 08:00:00 crc kubenswrapper[4958]: I1008 08:00:00.498249 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" Oct 08 08:00:01 crc kubenswrapper[4958]: I1008 08:00:01.039877 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd"] Oct 08 08:00:02 crc kubenswrapper[4958]: I1008 08:00:02.050937 4958 generic.go:334] "Generic (PLEG): container finished" podID="c7bf775e-4caf-4e56-bdae-82da59751def" containerID="ff654a90469f9cc0217471bdb33e28289aa7756e894283eb10ee309c1923831f" exitCode=0 Oct 08 08:00:02 crc kubenswrapper[4958]: I1008 08:00:02.051018 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" event={"ID":"c7bf775e-4caf-4e56-bdae-82da59751def","Type":"ContainerDied","Data":"ff654a90469f9cc0217471bdb33e28289aa7756e894283eb10ee309c1923831f"} Oct 08 08:00:02 crc kubenswrapper[4958]: I1008 08:00:02.051357 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" event={"ID":"c7bf775e-4caf-4e56-bdae-82da59751def","Type":"ContainerStarted","Data":"acb4639cc42bf0d275463b1a9b6b014b0190bb153ae13b45b7fd2c2492addc8c"} Oct 08 08:00:03 crc kubenswrapper[4958]: I1008 08:00:03.426666 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" Oct 08 08:00:03 crc kubenswrapper[4958]: I1008 08:00:03.545593 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7bf775e-4caf-4e56-bdae-82da59751def-secret-volume\") pod \"c7bf775e-4caf-4e56-bdae-82da59751def\" (UID: \"c7bf775e-4caf-4e56-bdae-82da59751def\") " Oct 08 08:00:03 crc kubenswrapper[4958]: I1008 08:00:03.545754 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7bf775e-4caf-4e56-bdae-82da59751def-config-volume\") pod \"c7bf775e-4caf-4e56-bdae-82da59751def\" (UID: \"c7bf775e-4caf-4e56-bdae-82da59751def\") " Oct 08 08:00:03 crc kubenswrapper[4958]: I1008 08:00:03.545780 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hgqx\" (UniqueName: \"kubernetes.io/projected/c7bf775e-4caf-4e56-bdae-82da59751def-kube-api-access-8hgqx\") pod \"c7bf775e-4caf-4e56-bdae-82da59751def\" (UID: \"c7bf775e-4caf-4e56-bdae-82da59751def\") " Oct 08 08:00:03 crc kubenswrapper[4958]: I1008 08:00:03.546608 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7bf775e-4caf-4e56-bdae-82da59751def-config-volume" (OuterVolumeSpecName: "config-volume") pod "c7bf775e-4caf-4e56-bdae-82da59751def" (UID: "c7bf775e-4caf-4e56-bdae-82da59751def"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:00:03 crc kubenswrapper[4958]: I1008 08:00:03.552237 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7bf775e-4caf-4e56-bdae-82da59751def-kube-api-access-8hgqx" (OuterVolumeSpecName: "kube-api-access-8hgqx") pod "c7bf775e-4caf-4e56-bdae-82da59751def" (UID: "c7bf775e-4caf-4e56-bdae-82da59751def"). 
InnerVolumeSpecName "kube-api-access-8hgqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:00:03 crc kubenswrapper[4958]: I1008 08:00:03.553182 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7bf775e-4caf-4e56-bdae-82da59751def-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c7bf775e-4caf-4e56-bdae-82da59751def" (UID: "c7bf775e-4caf-4e56-bdae-82da59751def"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:00:03 crc kubenswrapper[4958]: I1008 08:00:03.648334 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7bf775e-4caf-4e56-bdae-82da59751def-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 08:00:03 crc kubenswrapper[4958]: I1008 08:00:03.648377 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7bf775e-4caf-4e56-bdae-82da59751def-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 08:00:03 crc kubenswrapper[4958]: I1008 08:00:03.648390 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hgqx\" (UniqueName: \"kubernetes.io/projected/c7bf775e-4caf-4e56-bdae-82da59751def-kube-api-access-8hgqx\") on node \"crc\" DevicePath \"\"" Oct 08 08:00:04 crc kubenswrapper[4958]: I1008 08:00:04.070317 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" event={"ID":"c7bf775e-4caf-4e56-bdae-82da59751def","Type":"ContainerDied","Data":"acb4639cc42bf0d275463b1a9b6b014b0190bb153ae13b45b7fd2c2492addc8c"} Oct 08 08:00:04 crc kubenswrapper[4958]: I1008 08:00:04.070661 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acb4639cc42bf0d275463b1a9b6b014b0190bb153ae13b45b7fd2c2492addc8c" Oct 08 08:00:04 crc kubenswrapper[4958]: I1008 08:00:04.070361 4958 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd" Oct 08 08:00:04 crc kubenswrapper[4958]: I1008 08:00:04.508185 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n"] Oct 08 08:00:04 crc kubenswrapper[4958]: I1008 08:00:04.515956 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331795-tpf4n"] Oct 08 08:00:05 crc kubenswrapper[4958]: I1008 08:00:05.602454 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb50bf43-0d69-496f-b896-5d72a9c84664" path="/var/lib/kubelet/pods/eb50bf43-0d69-496f-b896-5d72a9c84664/volumes" Oct 08 08:00:38 crc kubenswrapper[4958]: I1008 08:00:38.201769 4958 scope.go:117] "RemoveContainer" containerID="7929edf725af568b1918d3547517e53924ac4da82754060a0cefdd275c0152d5" Oct 08 08:01:36 crc kubenswrapper[4958]: I1008 08:01:36.845535 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:01:36 crc kubenswrapper[4958]: I1008 08:01:36.846283 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:01:38 crc kubenswrapper[4958]: I1008 08:01:38.307098 4958 scope.go:117] "RemoveContainer" containerID="e42552095dcd292396d4e886243b22f8e7328f6699b9f9d30ac9c6dfa69e1b90" Oct 08 08:02:06 crc kubenswrapper[4958]: I1008 08:02:06.845316 4958 patch_prober.go:28] interesting 
pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:02:06 crc kubenswrapper[4958]: I1008 08:02:06.846186 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:02:36 crc kubenswrapper[4958]: I1008 08:02:36.845276 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:02:36 crc kubenswrapper[4958]: I1008 08:02:36.845850 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:02:36 crc kubenswrapper[4958]: I1008 08:02:36.845912 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 08:02:36 crc kubenswrapper[4958]: I1008 08:02:36.846851 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Oct 08 08:02:36 crc kubenswrapper[4958]: I1008 08:02:36.846931 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" gracePeriod=600 Oct 08 08:02:36 crc kubenswrapper[4958]: I1008 08:02:36.879940 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lgpvt"] Oct 08 08:02:36 crc kubenswrapper[4958]: E1008 08:02:36.881343 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7bf775e-4caf-4e56-bdae-82da59751def" containerName="collect-profiles" Oct 08 08:02:36 crc kubenswrapper[4958]: I1008 08:02:36.881384 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7bf775e-4caf-4e56-bdae-82da59751def" containerName="collect-profiles" Oct 08 08:02:36 crc kubenswrapper[4958]: I1008 08:02:36.881643 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7bf775e-4caf-4e56-bdae-82da59751def" containerName="collect-profiles" Oct 08 08:02:36 crc kubenswrapper[4958]: I1008 08:02:36.883831 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:36 crc kubenswrapper[4958]: I1008 08:02:36.896331 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgpvt"] Oct 08 08:02:36 crc kubenswrapper[4958]: E1008 08:02:36.979636 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:02:37 crc kubenswrapper[4958]: I1008 08:02:37.021224 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3304b31e-226c-41aa-83af-4458ca5e933c-utilities\") pod \"redhat-marketplace-lgpvt\" (UID: \"3304b31e-226c-41aa-83af-4458ca5e933c\") " pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:37 crc kubenswrapper[4958]: I1008 08:02:37.021284 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdmnb\" (UniqueName: \"kubernetes.io/projected/3304b31e-226c-41aa-83af-4458ca5e933c-kube-api-access-jdmnb\") pod \"redhat-marketplace-lgpvt\" (UID: \"3304b31e-226c-41aa-83af-4458ca5e933c\") " pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:37 crc kubenswrapper[4958]: I1008 08:02:37.021922 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3304b31e-226c-41aa-83af-4458ca5e933c-catalog-content\") pod \"redhat-marketplace-lgpvt\" (UID: \"3304b31e-226c-41aa-83af-4458ca5e933c\") " pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 
08:02:37 crc kubenswrapper[4958]: I1008 08:02:37.123058 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3304b31e-226c-41aa-83af-4458ca5e933c-utilities\") pod \"redhat-marketplace-lgpvt\" (UID: \"3304b31e-226c-41aa-83af-4458ca5e933c\") " pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:37 crc kubenswrapper[4958]: I1008 08:02:37.123112 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdmnb\" (UniqueName: \"kubernetes.io/projected/3304b31e-226c-41aa-83af-4458ca5e933c-kube-api-access-jdmnb\") pod \"redhat-marketplace-lgpvt\" (UID: \"3304b31e-226c-41aa-83af-4458ca5e933c\") " pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:37 crc kubenswrapper[4958]: I1008 08:02:37.123146 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3304b31e-226c-41aa-83af-4458ca5e933c-catalog-content\") pod \"redhat-marketplace-lgpvt\" (UID: \"3304b31e-226c-41aa-83af-4458ca5e933c\") " pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:37 crc kubenswrapper[4958]: I1008 08:02:37.123583 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3304b31e-226c-41aa-83af-4458ca5e933c-utilities\") pod \"redhat-marketplace-lgpvt\" (UID: \"3304b31e-226c-41aa-83af-4458ca5e933c\") " pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:37 crc kubenswrapper[4958]: I1008 08:02:37.123615 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3304b31e-226c-41aa-83af-4458ca5e933c-catalog-content\") pod \"redhat-marketplace-lgpvt\" (UID: \"3304b31e-226c-41aa-83af-4458ca5e933c\") " pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:37 crc kubenswrapper[4958]: I1008 
08:02:37.140572 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdmnb\" (UniqueName: \"kubernetes.io/projected/3304b31e-226c-41aa-83af-4458ca5e933c-kube-api-access-jdmnb\") pod \"redhat-marketplace-lgpvt\" (UID: \"3304b31e-226c-41aa-83af-4458ca5e933c\") " pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:37 crc kubenswrapper[4958]: I1008 08:02:37.219481 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:37 crc kubenswrapper[4958]: I1008 08:02:37.709263 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgpvt"] Oct 08 08:02:37 crc kubenswrapper[4958]: W1008 08:02:37.716164 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3304b31e_226c_41aa_83af_4458ca5e933c.slice/crio-5889525be0ed18270543cb7dc33ff1cf113e276ed603f38b051ddf3fde1ab9b1 WatchSource:0}: Error finding container 5889525be0ed18270543cb7dc33ff1cf113e276ed603f38b051ddf3fde1ab9b1: Status 404 returned error can't find the container with id 5889525be0ed18270543cb7dc33ff1cf113e276ed603f38b051ddf3fde1ab9b1 Oct 08 08:02:37 crc kubenswrapper[4958]: I1008 08:02:37.762143 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" exitCode=0 Oct 08 08:02:37 crc kubenswrapper[4958]: I1008 08:02:37.762201 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217"} Oct 08 08:02:37 crc kubenswrapper[4958]: I1008 08:02:37.762455 4958 scope.go:117] "RemoveContainer" 
containerID="986a687436ce24cd14a49af6d4c159b01d6d8dafeb34a2b2beac3221184d187e" Oct 08 08:02:37 crc kubenswrapper[4958]: I1008 08:02:37.763287 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:02:37 crc kubenswrapper[4958]: E1008 08:02:37.763806 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:02:37 crc kubenswrapper[4958]: I1008 08:02:37.763963 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgpvt" event={"ID":"3304b31e-226c-41aa-83af-4458ca5e933c","Type":"ContainerStarted","Data":"5889525be0ed18270543cb7dc33ff1cf113e276ed603f38b051ddf3fde1ab9b1"} Oct 08 08:02:38 crc kubenswrapper[4958]: I1008 08:02:38.776900 4958 generic.go:334] "Generic (PLEG): container finished" podID="3304b31e-226c-41aa-83af-4458ca5e933c" containerID="6462e45ea9610eae74c0ff5f25fb89f714d7110c2e4609d09fb9fc83affa45a4" exitCode=0 Oct 08 08:02:38 crc kubenswrapper[4958]: I1008 08:02:38.776992 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgpvt" event={"ID":"3304b31e-226c-41aa-83af-4458ca5e933c","Type":"ContainerDied","Data":"6462e45ea9610eae74c0ff5f25fb89f714d7110c2e4609d09fb9fc83affa45a4"} Oct 08 08:02:38 crc kubenswrapper[4958]: I1008 08:02:38.780141 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 08:02:39 crc kubenswrapper[4958]: I1008 08:02:39.796523 4958 generic.go:334] "Generic (PLEG): container finished" podID="3304b31e-226c-41aa-83af-4458ca5e933c" 
containerID="2052be23ef652e8680e469741265de52b8acd41ae82335261b1ce02a93baf7cd" exitCode=0 Oct 08 08:02:39 crc kubenswrapper[4958]: I1008 08:02:39.796616 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgpvt" event={"ID":"3304b31e-226c-41aa-83af-4458ca5e933c","Type":"ContainerDied","Data":"2052be23ef652e8680e469741265de52b8acd41ae82335261b1ce02a93baf7cd"} Oct 08 08:02:40 crc kubenswrapper[4958]: I1008 08:02:40.808349 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgpvt" event={"ID":"3304b31e-226c-41aa-83af-4458ca5e933c","Type":"ContainerStarted","Data":"b49b76519e39ac8ac5a5eb9ea4a18200bf84efaae655dc0664a9062606459267"} Oct 08 08:02:40 crc kubenswrapper[4958]: I1008 08:02:40.831082 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lgpvt" podStartSLOduration=3.167000686 podStartE2EDuration="4.831062373s" podCreationTimestamp="2025-10-08 08:02:36 +0000 UTC" firstStartedPulling="2025-10-08 08:02:38.779704009 +0000 UTC m=+5301.909396620" lastFinishedPulling="2025-10-08 08:02:40.443765706 +0000 UTC m=+5303.573458307" observedRunningTime="2025-10-08 08:02:40.826688275 +0000 UTC m=+5303.956380946" watchObservedRunningTime="2025-10-08 08:02:40.831062373 +0000 UTC m=+5303.960754974" Oct 08 08:02:47 crc kubenswrapper[4958]: I1008 08:02:47.220712 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:47 crc kubenswrapper[4958]: I1008 08:02:47.221675 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:47 crc kubenswrapper[4958]: I1008 08:02:47.299908 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:47 crc kubenswrapper[4958]: I1008 08:02:47.950611 
4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:48 crc kubenswrapper[4958]: I1008 08:02:48.022039 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgpvt"] Oct 08 08:02:49 crc kubenswrapper[4958]: I1008 08:02:49.901775 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lgpvt" podUID="3304b31e-226c-41aa-83af-4458ca5e933c" containerName="registry-server" containerID="cri-o://b49b76519e39ac8ac5a5eb9ea4a18200bf84efaae655dc0664a9062606459267" gracePeriod=2 Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.425452 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.594532 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3304b31e-226c-41aa-83af-4458ca5e933c-utilities\") pod \"3304b31e-226c-41aa-83af-4458ca5e933c\" (UID: \"3304b31e-226c-41aa-83af-4458ca5e933c\") " Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.594653 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdmnb\" (UniqueName: \"kubernetes.io/projected/3304b31e-226c-41aa-83af-4458ca5e933c-kube-api-access-jdmnb\") pod \"3304b31e-226c-41aa-83af-4458ca5e933c\" (UID: \"3304b31e-226c-41aa-83af-4458ca5e933c\") " Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.594708 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3304b31e-226c-41aa-83af-4458ca5e933c-catalog-content\") pod \"3304b31e-226c-41aa-83af-4458ca5e933c\" (UID: \"3304b31e-226c-41aa-83af-4458ca5e933c\") " Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.595793 
4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3304b31e-226c-41aa-83af-4458ca5e933c-utilities" (OuterVolumeSpecName: "utilities") pod "3304b31e-226c-41aa-83af-4458ca5e933c" (UID: "3304b31e-226c-41aa-83af-4458ca5e933c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.609270 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3304b31e-226c-41aa-83af-4458ca5e933c-kube-api-access-jdmnb" (OuterVolumeSpecName: "kube-api-access-jdmnb") pod "3304b31e-226c-41aa-83af-4458ca5e933c" (UID: "3304b31e-226c-41aa-83af-4458ca5e933c"). InnerVolumeSpecName "kube-api-access-jdmnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.611411 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3304b31e-226c-41aa-83af-4458ca5e933c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3304b31e-226c-41aa-83af-4458ca5e933c" (UID: "3304b31e-226c-41aa-83af-4458ca5e933c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.696755 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdmnb\" (UniqueName: \"kubernetes.io/projected/3304b31e-226c-41aa-83af-4458ca5e933c-kube-api-access-jdmnb\") on node \"crc\" DevicePath \"\"" Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.698249 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3304b31e-226c-41aa-83af-4458ca5e933c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.698283 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3304b31e-226c-41aa-83af-4458ca5e933c-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.921265 4958 generic.go:334] "Generic (PLEG): container finished" podID="3304b31e-226c-41aa-83af-4458ca5e933c" containerID="b49b76519e39ac8ac5a5eb9ea4a18200bf84efaae655dc0664a9062606459267" exitCode=0 Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.921314 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgpvt" event={"ID":"3304b31e-226c-41aa-83af-4458ca5e933c","Type":"ContainerDied","Data":"b49b76519e39ac8ac5a5eb9ea4a18200bf84efaae655dc0664a9062606459267"} Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.921348 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgpvt" event={"ID":"3304b31e-226c-41aa-83af-4458ca5e933c","Type":"ContainerDied","Data":"5889525be0ed18270543cb7dc33ff1cf113e276ed603f38b051ddf3fde1ab9b1"} Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.921372 4958 scope.go:117] "RemoveContainer" containerID="b49b76519e39ac8ac5a5eb9ea4a18200bf84efaae655dc0664a9062606459267" Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 
08:02:50.921379 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgpvt" Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.958116 4958 scope.go:117] "RemoveContainer" containerID="2052be23ef652e8680e469741265de52b8acd41ae82335261b1ce02a93baf7cd" Oct 08 08:02:50 crc kubenswrapper[4958]: I1008 08:02:50.995307 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgpvt"] Oct 08 08:02:51 crc kubenswrapper[4958]: I1008 08:02:51.001248 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgpvt"] Oct 08 08:02:51 crc kubenswrapper[4958]: I1008 08:02:51.009668 4958 scope.go:117] "RemoveContainer" containerID="6462e45ea9610eae74c0ff5f25fb89f714d7110c2e4609d09fb9fc83affa45a4" Oct 08 08:02:51 crc kubenswrapper[4958]: I1008 08:02:51.033505 4958 scope.go:117] "RemoveContainer" containerID="b49b76519e39ac8ac5a5eb9ea4a18200bf84efaae655dc0664a9062606459267" Oct 08 08:02:51 crc kubenswrapper[4958]: E1008 08:02:51.034234 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b49b76519e39ac8ac5a5eb9ea4a18200bf84efaae655dc0664a9062606459267\": container with ID starting with b49b76519e39ac8ac5a5eb9ea4a18200bf84efaae655dc0664a9062606459267 not found: ID does not exist" containerID="b49b76519e39ac8ac5a5eb9ea4a18200bf84efaae655dc0664a9062606459267" Oct 08 08:02:51 crc kubenswrapper[4958]: I1008 08:02:51.034272 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b49b76519e39ac8ac5a5eb9ea4a18200bf84efaae655dc0664a9062606459267"} err="failed to get container status \"b49b76519e39ac8ac5a5eb9ea4a18200bf84efaae655dc0664a9062606459267\": rpc error: code = NotFound desc = could not find container \"b49b76519e39ac8ac5a5eb9ea4a18200bf84efaae655dc0664a9062606459267\": container with ID starting with 
b49b76519e39ac8ac5a5eb9ea4a18200bf84efaae655dc0664a9062606459267 not found: ID does not exist" Oct 08 08:02:51 crc kubenswrapper[4958]: I1008 08:02:51.034301 4958 scope.go:117] "RemoveContainer" containerID="2052be23ef652e8680e469741265de52b8acd41ae82335261b1ce02a93baf7cd" Oct 08 08:02:51 crc kubenswrapper[4958]: E1008 08:02:51.034682 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2052be23ef652e8680e469741265de52b8acd41ae82335261b1ce02a93baf7cd\": container with ID starting with 2052be23ef652e8680e469741265de52b8acd41ae82335261b1ce02a93baf7cd not found: ID does not exist" containerID="2052be23ef652e8680e469741265de52b8acd41ae82335261b1ce02a93baf7cd" Oct 08 08:02:51 crc kubenswrapper[4958]: I1008 08:02:51.034735 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2052be23ef652e8680e469741265de52b8acd41ae82335261b1ce02a93baf7cd"} err="failed to get container status \"2052be23ef652e8680e469741265de52b8acd41ae82335261b1ce02a93baf7cd\": rpc error: code = NotFound desc = could not find container \"2052be23ef652e8680e469741265de52b8acd41ae82335261b1ce02a93baf7cd\": container with ID starting with 2052be23ef652e8680e469741265de52b8acd41ae82335261b1ce02a93baf7cd not found: ID does not exist" Oct 08 08:02:51 crc kubenswrapper[4958]: I1008 08:02:51.034774 4958 scope.go:117] "RemoveContainer" containerID="6462e45ea9610eae74c0ff5f25fb89f714d7110c2e4609d09fb9fc83affa45a4" Oct 08 08:02:51 crc kubenswrapper[4958]: E1008 08:02:51.035195 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6462e45ea9610eae74c0ff5f25fb89f714d7110c2e4609d09fb9fc83affa45a4\": container with ID starting with 6462e45ea9610eae74c0ff5f25fb89f714d7110c2e4609d09fb9fc83affa45a4 not found: ID does not exist" containerID="6462e45ea9610eae74c0ff5f25fb89f714d7110c2e4609d09fb9fc83affa45a4" Oct 08 08:02:51 crc 
kubenswrapper[4958]: I1008 08:02:51.035236 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6462e45ea9610eae74c0ff5f25fb89f714d7110c2e4609d09fb9fc83affa45a4"} err="failed to get container status \"6462e45ea9610eae74c0ff5f25fb89f714d7110c2e4609d09fb9fc83affa45a4\": rpc error: code = NotFound desc = could not find container \"6462e45ea9610eae74c0ff5f25fb89f714d7110c2e4609d09fb9fc83affa45a4\": container with ID starting with 6462e45ea9610eae74c0ff5f25fb89f714d7110c2e4609d09fb9fc83affa45a4 not found: ID does not exist" Oct 08 08:02:51 crc kubenswrapper[4958]: I1008 08:02:51.577255 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:02:51 crc kubenswrapper[4958]: E1008 08:02:51.578151 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:02:51 crc kubenswrapper[4958]: I1008 08:02:51.595023 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3304b31e-226c-41aa-83af-4458ca5e933c" path="/var/lib/kubelet/pods/3304b31e-226c-41aa-83af-4458ca5e933c/volumes" Oct 08 08:03:03 crc kubenswrapper[4958]: I1008 08:03:03.577686 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:03:03 crc kubenswrapper[4958]: E1008 08:03:03.578685 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.221040 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-555r5"] Oct 08 08:03:12 crc kubenswrapper[4958]: E1008 08:03:12.222162 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3304b31e-226c-41aa-83af-4458ca5e933c" containerName="registry-server" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.222187 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3304b31e-226c-41aa-83af-4458ca5e933c" containerName="registry-server" Oct 08 08:03:12 crc kubenswrapper[4958]: E1008 08:03:12.222205 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3304b31e-226c-41aa-83af-4458ca5e933c" containerName="extract-content" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.222217 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3304b31e-226c-41aa-83af-4458ca5e933c" containerName="extract-content" Oct 08 08:03:12 crc kubenswrapper[4958]: E1008 08:03:12.222238 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3304b31e-226c-41aa-83af-4458ca5e933c" containerName="extract-utilities" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.222248 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3304b31e-226c-41aa-83af-4458ca5e933c" containerName="extract-utilities" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.222540 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3304b31e-226c-41aa-83af-4458ca5e933c" containerName="registry-server" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.224220 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.248333 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-555r5"] Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.337500 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/586dd8f0-caa2-425a-9f8c-d4d2ac809277-catalog-content\") pod \"certified-operators-555r5\" (UID: \"586dd8f0-caa2-425a-9f8c-d4d2ac809277\") " pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.337881 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/586dd8f0-caa2-425a-9f8c-d4d2ac809277-utilities\") pod \"certified-operators-555r5\" (UID: \"586dd8f0-caa2-425a-9f8c-d4d2ac809277\") " pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.338041 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drjdv\" (UniqueName: \"kubernetes.io/projected/586dd8f0-caa2-425a-9f8c-d4d2ac809277-kube-api-access-drjdv\") pod \"certified-operators-555r5\" (UID: \"586dd8f0-caa2-425a-9f8c-d4d2ac809277\") " pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.439733 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/586dd8f0-caa2-425a-9f8c-d4d2ac809277-utilities\") pod \"certified-operators-555r5\" (UID: \"586dd8f0-caa2-425a-9f8c-d4d2ac809277\") " pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.439823 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-drjdv\" (UniqueName: \"kubernetes.io/projected/586dd8f0-caa2-425a-9f8c-d4d2ac809277-kube-api-access-drjdv\") pod \"certified-operators-555r5\" (UID: \"586dd8f0-caa2-425a-9f8c-d4d2ac809277\") " pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.439880 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/586dd8f0-caa2-425a-9f8c-d4d2ac809277-catalog-content\") pod \"certified-operators-555r5\" (UID: \"586dd8f0-caa2-425a-9f8c-d4d2ac809277\") " pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.440822 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/586dd8f0-caa2-425a-9f8c-d4d2ac809277-catalog-content\") pod \"certified-operators-555r5\" (UID: \"586dd8f0-caa2-425a-9f8c-d4d2ac809277\") " pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.440827 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/586dd8f0-caa2-425a-9f8c-d4d2ac809277-utilities\") pod \"certified-operators-555r5\" (UID: \"586dd8f0-caa2-425a-9f8c-d4d2ac809277\") " pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.480403 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drjdv\" (UniqueName: \"kubernetes.io/projected/586dd8f0-caa2-425a-9f8c-d4d2ac809277-kube-api-access-drjdv\") pod \"certified-operators-555r5\" (UID: \"586dd8f0-caa2-425a-9f8c-d4d2ac809277\") " pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:12 crc kubenswrapper[4958]: I1008 08:03:12.559830 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:13 crc kubenswrapper[4958]: I1008 08:03:13.029652 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-555r5"] Oct 08 08:03:13 crc kubenswrapper[4958]: I1008 08:03:13.165919 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-555r5" event={"ID":"586dd8f0-caa2-425a-9f8c-d4d2ac809277","Type":"ContainerStarted","Data":"f7ebf2ccf955b8132f99910c4f270f080ee0e4964a14fed8ae1f7de4de6dc381"} Oct 08 08:03:14 crc kubenswrapper[4958]: I1008 08:03:14.190718 4958 generic.go:334] "Generic (PLEG): container finished" podID="586dd8f0-caa2-425a-9f8c-d4d2ac809277" containerID="f7e0b24110ffc5a12174cb486ecabb894f4f0d86462b48da0eb7e8511cb0a931" exitCode=0 Oct 08 08:03:14 crc kubenswrapper[4958]: I1008 08:03:14.190828 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-555r5" event={"ID":"586dd8f0-caa2-425a-9f8c-d4d2ac809277","Type":"ContainerDied","Data":"f7e0b24110ffc5a12174cb486ecabb894f4f0d86462b48da0eb7e8511cb0a931"} Oct 08 08:03:15 crc kubenswrapper[4958]: I1008 08:03:15.205381 4958 generic.go:334] "Generic (PLEG): container finished" podID="586dd8f0-caa2-425a-9f8c-d4d2ac809277" containerID="5a72650688c42f39722d28e603d60a615710826ab6dfcb8e8bf120a2b009bca0" exitCode=0 Oct 08 08:03:15 crc kubenswrapper[4958]: I1008 08:03:15.205495 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-555r5" event={"ID":"586dd8f0-caa2-425a-9f8c-d4d2ac809277","Type":"ContainerDied","Data":"5a72650688c42f39722d28e603d60a615710826ab6dfcb8e8bf120a2b009bca0"} Oct 08 08:03:15 crc kubenswrapper[4958]: I1008 08:03:15.577238 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:03:15 crc kubenswrapper[4958]: E1008 08:03:15.577614 4958 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:03:16 crc kubenswrapper[4958]: I1008 08:03:16.219694 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-555r5" event={"ID":"586dd8f0-caa2-425a-9f8c-d4d2ac809277","Type":"ContainerStarted","Data":"e42b79f9bbfc2973536137c7990e644ca8040db2089a3ae92b08246e0c94b29b"} Oct 08 08:03:16 crc kubenswrapper[4958]: I1008 08:03:16.253313 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-555r5" podStartSLOduration=2.661483151 podStartE2EDuration="4.253288273s" podCreationTimestamp="2025-10-08 08:03:12 +0000 UTC" firstStartedPulling="2025-10-08 08:03:14.193090159 +0000 UTC m=+5337.322782800" lastFinishedPulling="2025-10-08 08:03:15.784895311 +0000 UTC m=+5338.914587922" observedRunningTime="2025-10-08 08:03:16.246724475 +0000 UTC m=+5339.376417076" watchObservedRunningTime="2025-10-08 08:03:16.253288273 +0000 UTC m=+5339.382980894" Oct 08 08:03:17 crc kubenswrapper[4958]: I1008 08:03:17.454805 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Oct 08 08:03:17 crc kubenswrapper[4958]: I1008 08:03:17.456851 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Oct 08 08:03:17 crc kubenswrapper[4958]: I1008 08:03:17.465452 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bb9bl" Oct 08 08:03:17 crc kubenswrapper[4958]: I1008 08:03:17.470132 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Oct 08 08:03:17 crc kubenswrapper[4958]: I1008 08:03:17.635722 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pclh\" (UniqueName: \"kubernetes.io/projected/9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e-kube-api-access-5pclh\") pod \"mariadb-copy-data\" (UID: \"9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e\") " pod="openstack/mariadb-copy-data" Oct 08 08:03:17 crc kubenswrapper[4958]: I1008 08:03:17.635796 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e\") pod \"mariadb-copy-data\" (UID: \"9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e\") " pod="openstack/mariadb-copy-data" Oct 08 08:03:17 crc kubenswrapper[4958]: I1008 08:03:17.737370 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pclh\" (UniqueName: \"kubernetes.io/projected/9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e-kube-api-access-5pclh\") pod \"mariadb-copy-data\" (UID: \"9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e\") " pod="openstack/mariadb-copy-data" Oct 08 08:03:17 crc kubenswrapper[4958]: I1008 08:03:17.737471 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e\") pod \"mariadb-copy-data\" (UID: \"9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e\") " pod="openstack/mariadb-copy-data" 
Oct 08 08:03:17 crc kubenswrapper[4958]: I1008 08:03:17.744455 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 08 08:03:17 crc kubenswrapper[4958]: I1008 08:03:17.744529 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e\") pod \"mariadb-copy-data\" (UID: \"9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f797d63deae88ddbe8d2cd6667c683c58ffd141cb3b735b531eb19b91987de92/globalmount\"" pod="openstack/mariadb-copy-data" Oct 08 08:03:17 crc kubenswrapper[4958]: I1008 08:03:17.765642 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pclh\" (UniqueName: \"kubernetes.io/projected/9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e-kube-api-access-5pclh\") pod \"mariadb-copy-data\" (UID: \"9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e\") " pod="openstack/mariadb-copy-data" Oct 08 08:03:17 crc kubenswrapper[4958]: I1008 08:03:17.785503 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e\") pod \"mariadb-copy-data\" (UID: \"9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e\") " pod="openstack/mariadb-copy-data" Oct 08 08:03:17 crc kubenswrapper[4958]: I1008 08:03:17.797166 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bb9bl" Oct 08 08:03:17 crc kubenswrapper[4958]: I1008 08:03:17.806104 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Oct 08 08:03:18 crc kubenswrapper[4958]: I1008 08:03:18.384120 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Oct 08 08:03:19 crc kubenswrapper[4958]: I1008 08:03:19.246149 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e","Type":"ContainerStarted","Data":"30675e0cf41c9dfcdb78a3d338cc447d22cf76c87418a95e708f2cae82226ee2"} Oct 08 08:03:19 crc kubenswrapper[4958]: I1008 08:03:19.246592 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e","Type":"ContainerStarted","Data":"267188183c4b8978e4c374d8da7310837d5667892e3ef7f77ec8236bccca3eb8"} Oct 08 08:03:19 crc kubenswrapper[4958]: I1008 08:03:19.272487 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.272454984 podStartE2EDuration="3.272454984s" podCreationTimestamp="2025-10-08 08:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:03:19.264382586 +0000 UTC m=+5342.394075227" watchObservedRunningTime="2025-10-08 08:03:19.272454984 +0000 UTC m=+5342.402147615" Oct 08 08:03:21 crc kubenswrapper[4958]: I1008 08:03:21.655823 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Oct 08 08:03:21 crc kubenswrapper[4958]: I1008 08:03:21.658557 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Oct 08 08:03:21 crc kubenswrapper[4958]: I1008 08:03:21.661309 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 08 08:03:21 crc kubenswrapper[4958]: I1008 08:03:21.828729 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kchvj\" (UniqueName: \"kubernetes.io/projected/56bdceb7-913d-4969-88a0-8c91c7a943b4-kube-api-access-kchvj\") pod \"mariadb-client\" (UID: \"56bdceb7-913d-4969-88a0-8c91c7a943b4\") " pod="openstack/mariadb-client" Oct 08 08:03:21 crc kubenswrapper[4958]: I1008 08:03:21.930383 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kchvj\" (UniqueName: \"kubernetes.io/projected/56bdceb7-913d-4969-88a0-8c91c7a943b4-kube-api-access-kchvj\") pod \"mariadb-client\" (UID: \"56bdceb7-913d-4969-88a0-8c91c7a943b4\") " pod="openstack/mariadb-client" Oct 08 08:03:21 crc kubenswrapper[4958]: I1008 08:03:21.973964 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kchvj\" (UniqueName: \"kubernetes.io/projected/56bdceb7-913d-4969-88a0-8c91c7a943b4-kube-api-access-kchvj\") pod \"mariadb-client\" (UID: \"56bdceb7-913d-4969-88a0-8c91c7a943b4\") " pod="openstack/mariadb-client" Oct 08 08:03:21 crc kubenswrapper[4958]: I1008 08:03:21.997371 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Oct 08 08:03:22 crc kubenswrapper[4958]: I1008 08:03:22.475606 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 08 08:03:22 crc kubenswrapper[4958]: W1008 08:03:22.484842 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56bdceb7_913d_4969_88a0_8c91c7a943b4.slice/crio-191f3fbbc3154e8fd291eb065a99b044020ed5d1091b4bb6ebe4965841a757f8 WatchSource:0}: Error finding container 191f3fbbc3154e8fd291eb065a99b044020ed5d1091b4bb6ebe4965841a757f8: Status 404 returned error can't find the container with id 191f3fbbc3154e8fd291eb065a99b044020ed5d1091b4bb6ebe4965841a757f8 Oct 08 08:03:22 crc kubenswrapper[4958]: I1008 08:03:22.560048 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:22 crc kubenswrapper[4958]: I1008 08:03:22.560294 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:22 crc kubenswrapper[4958]: I1008 08:03:22.640257 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:23 crc kubenswrapper[4958]: I1008 08:03:23.278577 4958 generic.go:334] "Generic (PLEG): container finished" podID="56bdceb7-913d-4969-88a0-8c91c7a943b4" containerID="96e8ef716080c678f2d4de0df920363c6271e43e7c96c7ed6563e5966bc2cd68" exitCode=0 Oct 08 08:03:23 crc kubenswrapper[4958]: I1008 08:03:23.280702 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"56bdceb7-913d-4969-88a0-8c91c7a943b4","Type":"ContainerDied","Data":"96e8ef716080c678f2d4de0df920363c6271e43e7c96c7ed6563e5966bc2cd68"} Oct 08 08:03:23 crc kubenswrapper[4958]: I1008 08:03:23.280748 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mariadb-client" event={"ID":"56bdceb7-913d-4969-88a0-8c91c7a943b4","Type":"ContainerStarted","Data":"191f3fbbc3154e8fd291eb065a99b044020ed5d1091b4bb6ebe4965841a757f8"} Oct 08 08:03:23 crc kubenswrapper[4958]: I1008 08:03:23.371797 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:23 crc kubenswrapper[4958]: I1008 08:03:23.438658 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-555r5"] Oct 08 08:03:24 crc kubenswrapper[4958]: I1008 08:03:24.710023 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 08 08:03:24 crc kubenswrapper[4958]: I1008 08:03:24.739007 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_56bdceb7-913d-4969-88a0-8c91c7a943b4/mariadb-client/0.log" Oct 08 08:03:24 crc kubenswrapper[4958]: I1008 08:03:24.775687 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Oct 08 08:03:24 crc kubenswrapper[4958]: I1008 08:03:24.788214 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kchvj\" (UniqueName: \"kubernetes.io/projected/56bdceb7-913d-4969-88a0-8c91c7a943b4-kube-api-access-kchvj\") pod \"56bdceb7-913d-4969-88a0-8c91c7a943b4\" (UID: \"56bdceb7-913d-4969-88a0-8c91c7a943b4\") " Oct 08 08:03:24 crc kubenswrapper[4958]: I1008 08:03:24.789711 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Oct 08 08:03:24 crc kubenswrapper[4958]: I1008 08:03:24.794137 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56bdceb7-913d-4969-88a0-8c91c7a943b4-kube-api-access-kchvj" (OuterVolumeSpecName: "kube-api-access-kchvj") pod "56bdceb7-913d-4969-88a0-8c91c7a943b4" (UID: "56bdceb7-913d-4969-88a0-8c91c7a943b4"). 
InnerVolumeSpecName "kube-api-access-kchvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:03:24 crc kubenswrapper[4958]: I1008 08:03:24.890299 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kchvj\" (UniqueName: \"kubernetes.io/projected/56bdceb7-913d-4969-88a0-8c91c7a943b4-kube-api-access-kchvj\") on node \"crc\" DevicePath \"\"" Oct 08 08:03:24 crc kubenswrapper[4958]: I1008 08:03:24.964035 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Oct 08 08:03:24 crc kubenswrapper[4958]: E1008 08:03:24.964585 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56bdceb7-913d-4969-88a0-8c91c7a943b4" containerName="mariadb-client" Oct 08 08:03:24 crc kubenswrapper[4958]: I1008 08:03:24.964619 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="56bdceb7-913d-4969-88a0-8c91c7a943b4" containerName="mariadb-client" Oct 08 08:03:24 crc kubenswrapper[4958]: I1008 08:03:24.964907 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="56bdceb7-913d-4969-88a0-8c91c7a943b4" containerName="mariadb-client" Oct 08 08:03:24 crc kubenswrapper[4958]: I1008 08:03:24.965792 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Oct 08 08:03:24 crc kubenswrapper[4958]: I1008 08:03:24.978596 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 08 08:03:25 crc kubenswrapper[4958]: I1008 08:03:25.093736 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq645\" (UniqueName: \"kubernetes.io/projected/d2dc1144-eaf6-447a-b29f-075d979f09af-kube-api-access-wq645\") pod \"mariadb-client\" (UID: \"d2dc1144-eaf6-447a-b29f-075d979f09af\") " pod="openstack/mariadb-client" Oct 08 08:03:25 crc kubenswrapper[4958]: I1008 08:03:25.195293 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq645\" (UniqueName: \"kubernetes.io/projected/d2dc1144-eaf6-447a-b29f-075d979f09af-kube-api-access-wq645\") pod \"mariadb-client\" (UID: \"d2dc1144-eaf6-447a-b29f-075d979f09af\") " pod="openstack/mariadb-client" Oct 08 08:03:25 crc kubenswrapper[4958]: I1008 08:03:25.221481 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq645\" (UniqueName: \"kubernetes.io/projected/d2dc1144-eaf6-447a-b29f-075d979f09af-kube-api-access-wq645\") pod \"mariadb-client\" (UID: \"d2dc1144-eaf6-447a-b29f-075d979f09af\") " pod="openstack/mariadb-client" Oct 08 08:03:25 crc kubenswrapper[4958]: I1008 08:03:25.286184 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 08 08:03:25 crc kubenswrapper[4958]: I1008 08:03:25.300040 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Oct 08 08:03:25 crc kubenswrapper[4958]: I1008 08:03:25.300075 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="191f3fbbc3154e8fd291eb065a99b044020ed5d1091b4bb6ebe4965841a757f8" Oct 08 08:03:25 crc kubenswrapper[4958]: I1008 08:03:25.300175 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-555r5" podUID="586dd8f0-caa2-425a-9f8c-d4d2ac809277" containerName="registry-server" containerID="cri-o://e42b79f9bbfc2973536137c7990e644ca8040db2089a3ae92b08246e0c94b29b" gracePeriod=2 Oct 08 08:03:25 crc kubenswrapper[4958]: I1008 08:03:25.345755 4958 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="56bdceb7-913d-4969-88a0-8c91c7a943b4" podUID="d2dc1144-eaf6-447a-b29f-075d979f09af" Oct 08 08:03:25 crc kubenswrapper[4958]: I1008 08:03:25.594641 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56bdceb7-913d-4969-88a0-8c91c7a943b4" path="/var/lib/kubelet/pods/56bdceb7-913d-4969-88a0-8c91c7a943b4/volumes" Oct 08 08:03:25 crc kubenswrapper[4958]: I1008 08:03:25.784516 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.238670 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.308617 4958 generic.go:334] "Generic (PLEG): container finished" podID="d2dc1144-eaf6-447a-b29f-075d979f09af" containerID="7b4385c1c5360b02cf9e4426bb24a407bc7efebb04f0c8630b2b0b076bb3c14c" exitCode=0 Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.308743 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d2dc1144-eaf6-447a-b29f-075d979f09af","Type":"ContainerDied","Data":"7b4385c1c5360b02cf9e4426bb24a407bc7efebb04f0c8630b2b0b076bb3c14c"} Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.308810 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d2dc1144-eaf6-447a-b29f-075d979f09af","Type":"ContainerStarted","Data":"a2b9abb9a304289d68a78ba713c0adc7a07505a89851ad40dd54b08a083f1c2a"} Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.311460 4958 generic.go:334] "Generic (PLEG): container finished" podID="586dd8f0-caa2-425a-9f8c-d4d2ac809277" containerID="e42b79f9bbfc2973536137c7990e644ca8040db2089a3ae92b08246e0c94b29b" exitCode=0 Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.311514 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-555r5" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.311545 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-555r5" event={"ID":"586dd8f0-caa2-425a-9f8c-d4d2ac809277","Type":"ContainerDied","Data":"e42b79f9bbfc2973536137c7990e644ca8040db2089a3ae92b08246e0c94b29b"} Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.311909 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-555r5" event={"ID":"586dd8f0-caa2-425a-9f8c-d4d2ac809277","Type":"ContainerDied","Data":"f7ebf2ccf955b8132f99910c4f270f080ee0e4964a14fed8ae1f7de4de6dc381"} Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.311940 4958 scope.go:117] "RemoveContainer" containerID="e42b79f9bbfc2973536137c7990e644ca8040db2089a3ae92b08246e0c94b29b" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.333107 4958 scope.go:117] "RemoveContainer" containerID="5a72650688c42f39722d28e603d60a615710826ab6dfcb8e8bf120a2b009bca0" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.340508 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/586dd8f0-caa2-425a-9f8c-d4d2ac809277-utilities\") pod \"586dd8f0-caa2-425a-9f8c-d4d2ac809277\" (UID: \"586dd8f0-caa2-425a-9f8c-d4d2ac809277\") " Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.340684 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drjdv\" (UniqueName: \"kubernetes.io/projected/586dd8f0-caa2-425a-9f8c-d4d2ac809277-kube-api-access-drjdv\") pod \"586dd8f0-caa2-425a-9f8c-d4d2ac809277\" (UID: \"586dd8f0-caa2-425a-9f8c-d4d2ac809277\") " Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.340796 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/586dd8f0-caa2-425a-9f8c-d4d2ac809277-catalog-content\") pod \"586dd8f0-caa2-425a-9f8c-d4d2ac809277\" (UID: \"586dd8f0-caa2-425a-9f8c-d4d2ac809277\") " Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.341464 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/586dd8f0-caa2-425a-9f8c-d4d2ac809277-utilities" (OuterVolumeSpecName: "utilities") pod "586dd8f0-caa2-425a-9f8c-d4d2ac809277" (UID: "586dd8f0-caa2-425a-9f8c-d4d2ac809277"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.348347 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586dd8f0-caa2-425a-9f8c-d4d2ac809277-kube-api-access-drjdv" (OuterVolumeSpecName: "kube-api-access-drjdv") pod "586dd8f0-caa2-425a-9f8c-d4d2ac809277" (UID: "586dd8f0-caa2-425a-9f8c-d4d2ac809277"). InnerVolumeSpecName "kube-api-access-drjdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.351397 4958 scope.go:117] "RemoveContainer" containerID="f7e0b24110ffc5a12174cb486ecabb894f4f0d86462b48da0eb7e8511cb0a931" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.367399 4958 scope.go:117] "RemoveContainer" containerID="e42b79f9bbfc2973536137c7990e644ca8040db2089a3ae92b08246e0c94b29b" Oct 08 08:03:26 crc kubenswrapper[4958]: E1008 08:03:26.368454 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e42b79f9bbfc2973536137c7990e644ca8040db2089a3ae92b08246e0c94b29b\": container with ID starting with e42b79f9bbfc2973536137c7990e644ca8040db2089a3ae92b08246e0c94b29b not found: ID does not exist" containerID="e42b79f9bbfc2973536137c7990e644ca8040db2089a3ae92b08246e0c94b29b" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.368485 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42b79f9bbfc2973536137c7990e644ca8040db2089a3ae92b08246e0c94b29b"} err="failed to get container status \"e42b79f9bbfc2973536137c7990e644ca8040db2089a3ae92b08246e0c94b29b\": rpc error: code = NotFound desc = could not find container \"e42b79f9bbfc2973536137c7990e644ca8040db2089a3ae92b08246e0c94b29b\": container with ID starting with e42b79f9bbfc2973536137c7990e644ca8040db2089a3ae92b08246e0c94b29b not found: ID does not exist" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.368507 4958 scope.go:117] "RemoveContainer" containerID="5a72650688c42f39722d28e603d60a615710826ab6dfcb8e8bf120a2b009bca0" Oct 08 08:03:26 crc kubenswrapper[4958]: E1008 08:03:26.369020 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a72650688c42f39722d28e603d60a615710826ab6dfcb8e8bf120a2b009bca0\": container with ID starting with 
5a72650688c42f39722d28e603d60a615710826ab6dfcb8e8bf120a2b009bca0 not found: ID does not exist" containerID="5a72650688c42f39722d28e603d60a615710826ab6dfcb8e8bf120a2b009bca0" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.369050 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a72650688c42f39722d28e603d60a615710826ab6dfcb8e8bf120a2b009bca0"} err="failed to get container status \"5a72650688c42f39722d28e603d60a615710826ab6dfcb8e8bf120a2b009bca0\": rpc error: code = NotFound desc = could not find container \"5a72650688c42f39722d28e603d60a615710826ab6dfcb8e8bf120a2b009bca0\": container with ID starting with 5a72650688c42f39722d28e603d60a615710826ab6dfcb8e8bf120a2b009bca0 not found: ID does not exist" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.369066 4958 scope.go:117] "RemoveContainer" containerID="f7e0b24110ffc5a12174cb486ecabb894f4f0d86462b48da0eb7e8511cb0a931" Oct 08 08:03:26 crc kubenswrapper[4958]: E1008 08:03:26.369333 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e0b24110ffc5a12174cb486ecabb894f4f0d86462b48da0eb7e8511cb0a931\": container with ID starting with f7e0b24110ffc5a12174cb486ecabb894f4f0d86462b48da0eb7e8511cb0a931 not found: ID does not exist" containerID="f7e0b24110ffc5a12174cb486ecabb894f4f0d86462b48da0eb7e8511cb0a931" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.369354 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e0b24110ffc5a12174cb486ecabb894f4f0d86462b48da0eb7e8511cb0a931"} err="failed to get container status \"f7e0b24110ffc5a12174cb486ecabb894f4f0d86462b48da0eb7e8511cb0a931\": rpc error: code = NotFound desc = could not find container \"f7e0b24110ffc5a12174cb486ecabb894f4f0d86462b48da0eb7e8511cb0a931\": container with ID starting with f7e0b24110ffc5a12174cb486ecabb894f4f0d86462b48da0eb7e8511cb0a931 not found: ID does not 
exist" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.387358 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/586dd8f0-caa2-425a-9f8c-d4d2ac809277-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "586dd8f0-caa2-425a-9f8c-d4d2ac809277" (UID: "586dd8f0-caa2-425a-9f8c-d4d2ac809277"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.442335 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/586dd8f0-caa2-425a-9f8c-d4d2ac809277-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.442364 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drjdv\" (UniqueName: \"kubernetes.io/projected/586dd8f0-caa2-425a-9f8c-d4d2ac809277-kube-api-access-drjdv\") on node \"crc\" DevicePath \"\"" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.442377 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/586dd8f0-caa2-425a-9f8c-d4d2ac809277-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.689332 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-555r5"] Oct 08 08:03:26 crc kubenswrapper[4958]: I1008 08:03:26.702984 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-555r5"] Oct 08 08:03:27 crc kubenswrapper[4958]: I1008 08:03:27.587101 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:03:27 crc kubenswrapper[4958]: E1008 08:03:27.587499 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:03:27 crc kubenswrapper[4958]: I1008 08:03:27.597162 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="586dd8f0-caa2-425a-9f8c-d4d2ac809277" path="/var/lib/kubelet/pods/586dd8f0-caa2-425a-9f8c-d4d2ac809277/volumes" Oct 08 08:03:27 crc kubenswrapper[4958]: I1008 08:03:27.751594 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 08 08:03:27 crc kubenswrapper[4958]: I1008 08:03:27.801980 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_d2dc1144-eaf6-447a-b29f-075d979f09af/mariadb-client/0.log" Oct 08 08:03:27 crc kubenswrapper[4958]: I1008 08:03:27.832166 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Oct 08 08:03:27 crc kubenswrapper[4958]: I1008 08:03:27.840313 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Oct 08 08:03:27 crc kubenswrapper[4958]: I1008 08:03:27.878524 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq645\" (UniqueName: \"kubernetes.io/projected/d2dc1144-eaf6-447a-b29f-075d979f09af-kube-api-access-wq645\") pod \"d2dc1144-eaf6-447a-b29f-075d979f09af\" (UID: \"d2dc1144-eaf6-447a-b29f-075d979f09af\") " Oct 08 08:03:27 crc kubenswrapper[4958]: I1008 08:03:27.882567 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2dc1144-eaf6-447a-b29f-075d979f09af-kube-api-access-wq645" (OuterVolumeSpecName: "kube-api-access-wq645") pod "d2dc1144-eaf6-447a-b29f-075d979f09af" (UID: "d2dc1144-eaf6-447a-b29f-075d979f09af"). InnerVolumeSpecName "kube-api-access-wq645". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:03:27 crc kubenswrapper[4958]: I1008 08:03:27.986097 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq645\" (UniqueName: \"kubernetes.io/projected/d2dc1144-eaf6-447a-b29f-075d979f09af-kube-api-access-wq645\") on node \"crc\" DevicePath \"\"" Oct 08 08:03:28 crc kubenswrapper[4958]: I1008 08:03:28.338417 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2b9abb9a304289d68a78ba713c0adc7a07505a89851ad40dd54b08a083f1c2a" Oct 08 08:03:28 crc kubenswrapper[4958]: I1008 08:03:28.338485 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Oct 08 08:03:29 crc kubenswrapper[4958]: I1008 08:03:29.593824 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2dc1144-eaf6-447a-b29f-075d979f09af" path="/var/lib/kubelet/pods/d2dc1144-eaf6-447a-b29f-075d979f09af/volumes" Oct 08 08:03:40 crc kubenswrapper[4958]: I1008 08:03:40.576850 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:03:40 crc kubenswrapper[4958]: E1008 08:03:40.577783 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:03:54 crc kubenswrapper[4958]: I1008 08:03:54.576978 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:03:54 crc kubenswrapper[4958]: E1008 08:03:54.578069 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.134850 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 08:04:04 crc kubenswrapper[4958]: E1008 08:04:04.138023 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586dd8f0-caa2-425a-9f8c-d4d2ac809277" containerName="registry-server" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.138235 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="586dd8f0-caa2-425a-9f8c-d4d2ac809277" containerName="registry-server" Oct 08 08:04:04 crc kubenswrapper[4958]: E1008 08:04:04.138459 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586dd8f0-caa2-425a-9f8c-d4d2ac809277" containerName="extract-utilities" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.138588 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="586dd8f0-caa2-425a-9f8c-d4d2ac809277" containerName="extract-utilities" Oct 08 08:04:04 crc kubenswrapper[4958]: E1008 08:04:04.138787 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586dd8f0-caa2-425a-9f8c-d4d2ac809277" containerName="extract-content" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.138925 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="586dd8f0-caa2-425a-9f8c-d4d2ac809277" containerName="extract-content" Oct 08 08:04:04 crc kubenswrapper[4958]: E1008 08:04:04.139099 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2dc1144-eaf6-447a-b29f-075d979f09af" containerName="mariadb-client" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.139235 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d2dc1144-eaf6-447a-b29f-075d979f09af" containerName="mariadb-client" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.139642 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="586dd8f0-caa2-425a-9f8c-d4d2ac809277" containerName="registry-server" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.139810 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2dc1144-eaf6-447a-b29f-075d979f09af" containerName="mariadb-client" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.141435 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.145083 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.145589 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.145856 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-68bkk" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.145907 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.146528 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.149517 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.151331 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.181649 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.183462 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.198297 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.205607 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.267116 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.339369 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d767b5-82f4-4695-943c-1553272efff6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.339421 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.339439 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " 
pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.339462 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab7032f-3737-4c89-a704-0c11294bcdcd-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.339478 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.339500 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-74a9f15c-71d3-477f-8e4a-91c191675628\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74a9f15c-71d3-477f-8e4a-91c191675628\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.339563 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5ab7032f-3737-4c89-a704-0c11294bcdcd-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.339620 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ab7032f-3737-4c89-a704-0c11294bcdcd-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc 
kubenswrapper[4958]: I1008 08:04:04.339660 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-85167c9b-5b2b-405c-bebe-dee4113490f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85167c9b-5b2b-405c-bebe-dee4113490f8\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.339700 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.339738 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab7032f-3737-4c89-a704-0c11294bcdcd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.339782 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eb02200a-982f-42a6-b28b-757980a5865e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb02200a-982f-42a6-b28b-757980a5865e\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.340072 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab7032f-3737-4c89-a704-0c11294bcdcd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " 
pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.340143 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d767b5-82f4-4695-943c-1553272efff6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.340222 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68d767b5-82f4-4695-943c-1553272efff6-config\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.340332 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68d767b5-82f4-4695-943c-1553272efff6-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.340393 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76cvs\" (UniqueName: \"kubernetes.io/projected/5ab7032f-3737-4c89-a704-0c11294bcdcd-kube-api-access-76cvs\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.340434 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-config\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 
08:04:04.340471 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghrn7\" (UniqueName: \"kubernetes.io/projected/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-kube-api-access-ghrn7\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.340513 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.340552 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68d767b5-82f4-4695-943c-1553272efff6-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.340623 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d767b5-82f4-4695-943c-1553272efff6-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.340662 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frdwv\" (UniqueName: \"kubernetes.io/projected/68d767b5-82f4-4695-943c-1553272efff6-kube-api-access-frdwv\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.340693 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ab7032f-3737-4c89-a704-0c11294bcdcd-config\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.442651 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.442711 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab7032f-3737-4c89-a704-0c11294bcdcd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.442740 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-eb02200a-982f-42a6-b28b-757980a5865e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb02200a-982f-42a6-b28b-757980a5865e\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.442789 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab7032f-3737-4c89-a704-0c11294bcdcd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.442817 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d767b5-82f4-4695-943c-1553272efff6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.442853 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68d767b5-82f4-4695-943c-1553272efff6-config\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.442886 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68d767b5-82f4-4695-943c-1553272efff6-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.442915 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76cvs\" (UniqueName: \"kubernetes.io/projected/5ab7032f-3737-4c89-a704-0c11294bcdcd-kube-api-access-76cvs\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.442963 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-config\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.442989 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghrn7\" (UniqueName: \"kubernetes.io/projected/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-kube-api-access-ghrn7\") pod \"ovsdbserver-nb-0\" (UID: 
\"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.443021 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.443047 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68d767b5-82f4-4695-943c-1553272efff6-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.443073 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d767b5-82f4-4695-943c-1553272efff6-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.443098 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frdwv\" (UniqueName: \"kubernetes.io/projected/68d767b5-82f4-4695-943c-1553272efff6-kube-api-access-frdwv\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.443119 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ab7032f-3737-4c89-a704-0c11294bcdcd-config\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.443145 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d767b5-82f4-4695-943c-1553272efff6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.443167 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.443186 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.443210 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab7032f-3737-4c89-a704-0c11294bcdcd-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.443234 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.443263 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-74a9f15c-71d3-477f-8e4a-91c191675628\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74a9f15c-71d3-477f-8e4a-91c191675628\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.443284 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5ab7032f-3737-4c89-a704-0c11294bcdcd-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.443305 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ab7032f-3737-4c89-a704-0c11294bcdcd-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.443329 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-85167c9b-5b2b-405c-bebe-dee4113490f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85167c9b-5b2b-405c-bebe-dee4113490f8\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.444044 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68d767b5-82f4-4695-943c-1553272efff6-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.445019 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-config\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " 
pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.445142 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.445165 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68d767b5-82f4-4695-943c-1553272efff6-config\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.445531 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5ab7032f-3737-4c89-a704-0c11294bcdcd-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.446359 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ab7032f-3737-4c89-a704-0c11294bcdcd-config\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.446659 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ab7032f-3737-4c89-a704-0c11294bcdcd-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.453141 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68d767b5-82f4-4695-943c-1553272efff6-scripts\") 
pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.455433 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.464470 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.467436 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d767b5-82f4-4695-943c-1553272efff6-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.467518 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.467694 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d767b5-82f4-4695-943c-1553272efff6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.467714 
4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab7032f-3737-4c89-a704-0c11294bcdcd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.467938 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d767b5-82f4-4695-943c-1553272efff6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.468700 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.468733 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-74a9f15c-71d3-477f-8e4a-91c191675628\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74a9f15c-71d3-477f-8e4a-91c191675628\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ae06a4c02f28432d25696e7dc32140a2b080e65f06db295432993d3ac1a7e78f/globalmount\"" pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.468996 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.469053 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-eb02200a-982f-42a6-b28b-757980a5865e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb02200a-982f-42a6-b28b-757980a5865e\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/02dfd03e2dfb5738b3014e9697f87bb92102b786f144cadcbbd894f312f4d49b/globalmount\"" pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.470485 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab7032f-3737-4c89-a704-0c11294bcdcd-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.472319 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.472515 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.472681 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-85167c9b-5b2b-405c-bebe-dee4113490f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85167c9b-5b2b-405c-bebe-dee4113490f8\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5915bf74c90725bbce45b6b91d9ea0bcd3494daeec32c2e93bb2803712d9b109/globalmount\"" pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.475901 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab7032f-3737-4c89-a704-0c11294bcdcd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.477568 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frdwv\" (UniqueName: \"kubernetes.io/projected/68d767b5-82f4-4695-943c-1553272efff6-kube-api-access-frdwv\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.478254 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76cvs\" (UniqueName: \"kubernetes.io/projected/5ab7032f-3737-4c89-a704-0c11294bcdcd-kube-api-access-76cvs\") pod \"ovsdbserver-nb-1\" (UID: 
\"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.494738 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghrn7\" (UniqueName: \"kubernetes.io/projected/cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2-kube-api-access-ghrn7\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.526711 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-74a9f15c-71d3-477f-8e4a-91c191675628\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74a9f15c-71d3-477f-8e4a-91c191675628\") pod \"ovsdbserver-nb-0\" (UID: \"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2\") " pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.526930 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-85167c9b-5b2b-405c-bebe-dee4113490f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-85167c9b-5b2b-405c-bebe-dee4113490f8\") pod \"ovsdbserver-nb-2\" (UID: \"68d767b5-82f4-4695-943c-1553272efff6\") " pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.527829 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-eb02200a-982f-42a6-b28b-757980a5865e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb02200a-982f-42a6-b28b-757980a5865e\") pod \"ovsdbserver-nb-1\" (UID: \"5ab7032f-3737-4c89-a704-0c11294bcdcd\") " pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.539173 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.565422 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:04 crc kubenswrapper[4958]: I1008 08:04:04.814518 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.085749 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 08 08:04:05 crc kubenswrapper[4958]: W1008 08:04:05.095597 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd4d1e0b_93a4_41e6_8fdf_7760cd9f23b2.slice/crio-9710f1d0a695b94748a901fdf4f08492c4b299aa5f85b8c9fd1472e7ce9a5acd WatchSource:0}: Error finding container 9710f1d0a695b94748a901fdf4f08492c4b299aa5f85b8c9fd1472e7ce9a5acd: Status 404 returned error can't find the container with id 9710f1d0a695b94748a901fdf4f08492c4b299aa5f85b8c9fd1472e7ce9a5acd Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.650335 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.651625 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.659527 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.660120 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7dzt2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.660311 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.664545 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.683514 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.686657 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.690292 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.692779 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.699521 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.707862 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.717236 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.753688 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2","Type":"ContainerStarted","Data":"388d17f51a92b3ac726e89ecad8f4210d666c59260756224bb0da04106eca027"} Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.753742 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2","Type":"ContainerStarted","Data":"846d4a0abbc2efffa570290d594338dcb881ca9d037d213d01ac7806056c67c8"} Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.753756 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2","Type":"ContainerStarted","Data":"9710f1d0a695b94748a901fdf4f08492c4b299aa5f85b8c9fd1472e7ce9a5acd"} Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.766645 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a658667d-f505-495a-bd15-ea752f8dd460\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a658667d-f505-495a-bd15-ea752f8dd460\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.766698 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/61178e97-c8df-469d-997f-753bce1d600e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.766726 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ce73c22f-788f-4f96-b685-ea74e7bf0272\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce73c22f-788f-4f96-b685-ea74e7bf0272\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.766766 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61178e97-c8df-469d-997f-753bce1d600e-config\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.766822 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3274cda4-e5f8-4bf5-af02-d269282b98d8-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.766862 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae295c4-128b-476d-801e-5b04f1a1eb58-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.766885 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3274cda4-e5f8-4bf5-af02-d269282b98d8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.766908 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61178e97-c8df-469d-997f-753bce1d600e-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.766929 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61178e97-c8df-469d-997f-753bce1d600e-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.766985 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/61178e97-c8df-469d-997f-753bce1d600e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.767015 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae295c4-128b-476d-801e-5b04f1a1eb58-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.767039 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-52ead216-f841-40af-b9ad-b79204e522b8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52ead216-f841-40af-b9ad-b79204e522b8\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.767062 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pjn6\" (UniqueName: \"kubernetes.io/projected/3274cda4-e5f8-4bf5-af02-d269282b98d8-kube-api-access-2pjn6\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.767151 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3274cda4-e5f8-4bf5-af02-d269282b98d8-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.767176 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdjjg\" (UniqueName: \"kubernetes.io/projected/61178e97-c8df-469d-997f-753bce1d600e-kube-api-access-wdjjg\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.767242 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/61178e97-c8df-469d-997f-753bce1d600e-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.767266 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8ae295c4-128b-476d-801e-5b04f1a1eb58-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.767284 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8ae295c4-128b-476d-801e-5b04f1a1eb58-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.767308 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3274cda4-e5f8-4bf5-af02-d269282b98d8-config\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.767340 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ae295c4-128b-476d-801e-5b04f1a1eb58-config\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.767362 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3274cda4-e5f8-4bf5-af02-d269282b98d8-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.767383 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3274cda4-e5f8-4bf5-af02-d269282b98d8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: 
\"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.767404 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae295c4-128b-476d-801e-5b04f1a1eb58-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.767426 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6ksg\" (UniqueName: \"kubernetes.io/projected/8ae295c4-128b-476d-801e-5b04f1a1eb58-kube-api-access-m6ksg\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.793411 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=2.793390675 podStartE2EDuration="2.793390675s" podCreationTimestamp="2025-10-08 08:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:04:05.78211133 +0000 UTC m=+5388.911803941" watchObservedRunningTime="2025-10-08 08:04:05.793390675 +0000 UTC m=+5388.923083286" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.868595 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/61178e97-c8df-469d-997f-753bce1d600e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.868646 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8ae295c4-128b-476d-801e-5b04f1a1eb58-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.868671 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-52ead216-f841-40af-b9ad-b79204e522b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52ead216-f841-40af-b9ad-b79204e522b8\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.868693 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pjn6\" (UniqueName: \"kubernetes.io/projected/3274cda4-e5f8-4bf5-af02-d269282b98d8-kube-api-access-2pjn6\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869193 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3274cda4-e5f8-4bf5-af02-d269282b98d8-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869219 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdjjg\" (UniqueName: \"kubernetes.io/projected/61178e97-c8df-469d-997f-753bce1d600e-kube-api-access-wdjjg\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869235 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/61178e97-c8df-469d-997f-753bce1d600e-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: 
\"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869252 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8ae295c4-128b-476d-801e-5b04f1a1eb58-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869266 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8ae295c4-128b-476d-801e-5b04f1a1eb58-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869287 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3274cda4-e5f8-4bf5-af02-d269282b98d8-config\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869323 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ae295c4-128b-476d-801e-5b04f1a1eb58-config\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869342 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3274cda4-e5f8-4bf5-af02-d269282b98d8-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869371 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3274cda4-e5f8-4bf5-af02-d269282b98d8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869401 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae295c4-128b-476d-801e-5b04f1a1eb58-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869421 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6ksg\" (UniqueName: \"kubernetes.io/projected/8ae295c4-128b-476d-801e-5b04f1a1eb58-kube-api-access-m6ksg\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869453 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a658667d-f505-495a-bd15-ea752f8dd460\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a658667d-f505-495a-bd15-ea752f8dd460\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869471 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/61178e97-c8df-469d-997f-753bce1d600e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869497 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ce73c22f-788f-4f96-b685-ea74e7bf0272\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce73c22f-788f-4f96-b685-ea74e7bf0272\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869526 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61178e97-c8df-469d-997f-753bce1d600e-config\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869601 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3274cda4-e5f8-4bf5-af02-d269282b98d8-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869647 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae295c4-128b-476d-801e-5b04f1a1eb58-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869670 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3274cda4-e5f8-4bf5-af02-d269282b98d8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869691 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61178e97-c8df-469d-997f-753bce1d600e-scripts\") pod \"ovsdbserver-sb-1\" (UID: 
\"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869709 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61178e97-c8df-469d-997f-753bce1d600e-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.869833 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/61178e97-c8df-469d-997f-753bce1d600e-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.870360 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3274cda4-e5f8-4bf5-af02-d269282b98d8-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.871481 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3274cda4-e5f8-4bf5-af02-d269282b98d8-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.873165 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.873185 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8ae295c4-128b-476d-801e-5b04f1a1eb58-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.873222 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-52ead216-f841-40af-b9ad-b79204e522b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52ead216-f841-40af-b9ad-b79204e522b8\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/702565db8ab3c2792f98e80f0d113e90762c6c3cdc50adf598fb5badf2fab743/globalmount\"" pod="openstack/ovsdbserver-sb-1" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.873328 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3274cda4-e5f8-4bf5-af02-d269282b98d8-config\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.873732 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ae295c4-128b-476d-801e-5b04f1a1eb58-config\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.874773 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8ae295c4-128b-476d-801e-5b04f1a1eb58-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0" Oct 08 08:04:05 crc kubenswrapper[4958]: 
I1008 08:04:05.876063 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61178e97-c8df-469d-997f-753bce1d600e-config\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.877022 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61178e97-c8df-469d-997f-753bce1d600e-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.877522 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/61178e97-c8df-469d-997f-753bce1d600e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.877678 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/61178e97-c8df-469d-997f-753bce1d600e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.878407 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae295c4-128b-476d-801e-5b04f1a1eb58-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.879231 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3274cda4-e5f8-4bf5-af02-d269282b98d8-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.880076 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae295c4-128b-476d-801e-5b04f1a1eb58-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.881499 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3274cda4-e5f8-4bf5-af02-d269282b98d8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.883274 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3274cda4-e5f8-4bf5-af02-d269282b98d8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.885564 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae295c4-128b-476d-801e-5b04f1a1eb58-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.887617 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61178e97-c8df-469d-997f-753bce1d600e-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.888350 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.888387 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ce73c22f-788f-4f96-b685-ea74e7bf0272\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce73c22f-788f-4f96-b685-ea74e7bf0272\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a5e52f74e06500233f2462895d099d479fadc18e8d22d41c103a18c910895913/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.889650 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.891773 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pjn6\" (UniqueName: \"kubernetes.io/projected/3274cda4-e5f8-4bf5-af02-d269282b98d8-kube-api-access-2pjn6\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.892964 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.892993 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a658667d-f505-495a-bd15-ea752f8dd460\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a658667d-f505-495a-bd15-ea752f8dd460\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/809869ca5baea9bf5930f731380b936bf40b89b98d9e2b4f67269b19a8e488f1/globalmount\"" pod="openstack/ovsdbserver-sb-2"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.900934 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdjjg\" (UniqueName: \"kubernetes.io/projected/61178e97-c8df-469d-997f-753bce1d600e-kube-api-access-wdjjg\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1"
Oct 08 08:04:05 crc kubenswrapper[4958]: W1008 08:04:05.903578 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ab7032f_3737_4c89_a704_0c11294bcdcd.slice/crio-c50ff3e392ca5d52f104c535bee05be242688d4603c04bd4214b70a1157ace9f WatchSource:0}: Error finding container c50ff3e392ca5d52f104c535bee05be242688d4603c04bd4214b70a1157ace9f: Status 404 returned error can't find the container with id c50ff3e392ca5d52f104c535bee05be242688d4603c04bd4214b70a1157ace9f
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.903837 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6ksg\" (UniqueName: \"kubernetes.io/projected/8ae295c4-128b-476d-801e-5b04f1a1eb58-kube-api-access-m6ksg\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.929677 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ce73c22f-788f-4f96-b685-ea74e7bf0272\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce73c22f-788f-4f96-b685-ea74e7bf0272\") pod \"ovsdbserver-sb-0\" (UID: \"8ae295c4-128b-476d-801e-5b04f1a1eb58\") " pod="openstack/ovsdbserver-sb-0"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.931075 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-52ead216-f841-40af-b9ad-b79204e522b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52ead216-f841-40af-b9ad-b79204e522b8\") pod \"ovsdbserver-sb-1\" (UID: \"61178e97-c8df-469d-997f-753bce1d600e\") " pod="openstack/ovsdbserver-sb-1"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.934519 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a658667d-f505-495a-bd15-ea752f8dd460\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a658667d-f505-495a-bd15-ea752f8dd460\") pod \"ovsdbserver-sb-2\" (UID: \"3274cda4-e5f8-4bf5-af02-d269282b98d8\") " pod="openstack/ovsdbserver-sb-2"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.973853 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Oct 08 08:04:05 crc kubenswrapper[4958]: I1008 08:04:05.989025 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Oct 08 08:04:05 crc kubenswrapper[4958]: W1008 08:04:05.995666 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68d767b5_82f4_4695_943c_1553272efff6.slice/crio-4f8b5bfe950a4f0aec09383b38f8bb0929d672b22f3aa8737f91da216be8cad3 WatchSource:0}: Error finding container 4f8b5bfe950a4f0aec09383b38f8bb0929d672b22f3aa8737f91da216be8cad3: Status 404 returned error can't find the container with id 4f8b5bfe950a4f0aec09383b38f8bb0929d672b22f3aa8737f91da216be8cad3
Oct 08 08:04:06 crc kubenswrapper[4958]: I1008 08:04:06.015493 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Oct 08 08:04:06 crc kubenswrapper[4958]: I1008 08:04:06.022960 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Oct 08 08:04:06 crc kubenswrapper[4958]: I1008 08:04:06.433701 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Oct 08 08:04:06 crc kubenswrapper[4958]: W1008 08:04:06.438033 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61178e97_c8df_469d_997f_753bce1d600e.slice/crio-0bfd3f679f347323d0892a86d33cdcdbadfcf3b269740517701b2a93fe943bf4 WatchSource:0}: Error finding container 0bfd3f679f347323d0892a86d33cdcdbadfcf3b269740517701b2a93fe943bf4: Status 404 returned error can't find the container with id 0bfd3f679f347323d0892a86d33cdcdbadfcf3b269740517701b2a93fe943bf4
Oct 08 08:04:06 crc kubenswrapper[4958]: I1008 08:04:06.565583 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 08 08:04:06 crc kubenswrapper[4958]: W1008 08:04:06.575993 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ae295c4_128b_476d_801e_5b04f1a1eb58.slice/crio-97eb1ddb37b108f67fb1f079b043020d9ae3cb4fe62fe8c82672812e59b48914 WatchSource:0}: Error finding container 97eb1ddb37b108f67fb1f079b043020d9ae3cb4fe62fe8c82672812e59b48914: Status 404 returned error can't find the container with id 97eb1ddb37b108f67fb1f079b043020d9ae3cb4fe62fe8c82672812e59b48914
Oct 08 08:04:06 crc kubenswrapper[4958]: I1008 08:04:06.764500 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"5ab7032f-3737-4c89-a704-0c11294bcdcd","Type":"ContainerStarted","Data":"3c687dcfa9b56aa4cf36fe95ecce670de1464e4cc488e4e765856a6e5f45fd85"}
Oct 08 08:04:06 crc kubenswrapper[4958]: I1008 08:04:06.764554 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"5ab7032f-3737-4c89-a704-0c11294bcdcd","Type":"ContainerStarted","Data":"37299909906e70f75ad291b130dadc1944767c8d90157eed3a18cb5ac6d4ff4f"}
Oct 08 08:04:06 crc kubenswrapper[4958]: I1008 08:04:06.764574 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"5ab7032f-3737-4c89-a704-0c11294bcdcd","Type":"ContainerStarted","Data":"c50ff3e392ca5d52f104c535bee05be242688d4603c04bd4214b70a1157ace9f"}
Oct 08 08:04:06 crc kubenswrapper[4958]: I1008 08:04:06.767208 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"61178e97-c8df-469d-997f-753bce1d600e","Type":"ContainerStarted","Data":"0bfd3f679f347323d0892a86d33cdcdbadfcf3b269740517701b2a93fe943bf4"}
Oct 08 08:04:06 crc kubenswrapper[4958]: I1008 08:04:06.769533 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"68d767b5-82f4-4695-943c-1553272efff6","Type":"ContainerStarted","Data":"008c83637ae6f5f50431031e786f35f2891a2b6cd42bef5c80b7587a45478e3e"}
Oct 08 08:04:06 crc kubenswrapper[4958]: I1008 08:04:06.769571 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"68d767b5-82f4-4695-943c-1553272efff6","Type":"ContainerStarted","Data":"46396fca7ed58d780d5b5743880339853f28a227216207d4ed4268d7283fc3f9"}
Oct 08 08:04:06 crc kubenswrapper[4958]: I1008 08:04:06.769588 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"68d767b5-82f4-4695-943c-1553272efff6","Type":"ContainerStarted","Data":"4f8b5bfe950a4f0aec09383b38f8bb0929d672b22f3aa8737f91da216be8cad3"}
Oct 08 08:04:06 crc kubenswrapper[4958]: I1008 08:04:06.771056 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8ae295c4-128b-476d-801e-5b04f1a1eb58","Type":"ContainerStarted","Data":"97eb1ddb37b108f67fb1f079b043020d9ae3cb4fe62fe8c82672812e59b48914"}
Oct 08 08:04:06 crc kubenswrapper[4958]: I1008 08:04:06.792798 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.792777375 podStartE2EDuration="3.792777375s" podCreationTimestamp="2025-10-08 08:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:04:06.792748114 +0000 UTC m=+5389.922440745" watchObservedRunningTime="2025-10-08 08:04:06.792777375 +0000 UTC m=+5389.922469976"
Oct 08 08:04:06 crc kubenswrapper[4958]: I1008 08:04:06.827385 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.827368691 podStartE2EDuration="3.827368691s" podCreationTimestamp="2025-10-08 08:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:04:06.824978817 +0000 UTC m=+5389.954671458" watchObservedRunningTime="2025-10-08 08:04:06.827368691 +0000 UTC m=+5389.957061282"
Oct 08 08:04:07 crc kubenswrapper[4958]: I1008 08:04:07.091388 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Oct 08 08:04:07 crc kubenswrapper[4958]: W1008 08:04:07.097649 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3274cda4_e5f8_4bf5_af02_d269282b98d8.slice/crio-20153569776c728867adc4d4766f8d7f0679d2e1395239047e780164b4d29bdc WatchSource:0}: Error finding container 20153569776c728867adc4d4766f8d7f0679d2e1395239047e780164b4d29bdc: Status 404 returned error can't find the container with id 20153569776c728867adc4d4766f8d7f0679d2e1395239047e780164b4d29bdc
Oct 08 08:04:07 crc kubenswrapper[4958]: I1008 08:04:07.539307 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2"
Oct 08 08:04:07 crc kubenswrapper[4958]: I1008 08:04:07.566095 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1"
Oct 08 08:04:07 crc kubenswrapper[4958]: I1008 08:04:07.585828 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217"
Oct 08 08:04:07 crc kubenswrapper[4958]: E1008 08:04:07.586182 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 08:04:07 crc kubenswrapper[4958]: I1008 08:04:07.785345 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8ae295c4-128b-476d-801e-5b04f1a1eb58","Type":"ContainerStarted","Data":"57d3bf1aa46c668404486e84fbb3bb80d663b2750fa3c84d8dcf9052250c1423"}
Oct 08 08:04:07 crc kubenswrapper[4958]: I1008 08:04:07.785407 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8ae295c4-128b-476d-801e-5b04f1a1eb58","Type":"ContainerStarted","Data":"203b0ea45f2e8500e7e4a6e5f7859f570280c9c0e6735d74c26d6827cb29447c"}
Oct 08 08:04:07 crc kubenswrapper[4958]: I1008 08:04:07.789797 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"61178e97-c8df-469d-997f-753bce1d600e","Type":"ContainerStarted","Data":"2041a630b5279fcc4fcae6dfc73be13f65a32f0315ded0e8ccdf860efc244806"}
Oct 08 08:04:07 crc kubenswrapper[4958]: I1008 08:04:07.789848 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"61178e97-c8df-469d-997f-753bce1d600e","Type":"ContainerStarted","Data":"c3c2b67f49b8f43e27e24fc2ffd97aee517ca60cafcef24fa531d2a17ccc0623"}
Oct 08 08:04:07 crc kubenswrapper[4958]: I1008 08:04:07.792850 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"3274cda4-e5f8-4bf5-af02-d269282b98d8","Type":"ContainerStarted","Data":"680344f19df2844566efd401f9c9964ba16fe5f1aa1ba7b43b1b3e2bbe3452d3"}
Oct 08 08:04:07 crc kubenswrapper[4958]: I1008 08:04:07.792922 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"3274cda4-e5f8-4bf5-af02-d269282b98d8","Type":"ContainerStarted","Data":"28742a08568722393d9e08b3bfc08b0e0d3f1f9d1d52dfc40284ab0aeb89464d"}
Oct 08 08:04:07 crc kubenswrapper[4958]: I1008 08:04:07.792981 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"3274cda4-e5f8-4bf5-af02-d269282b98d8","Type":"ContainerStarted","Data":"20153569776c728867adc4d4766f8d7f0679d2e1395239047e780164b4d29bdc"}
Oct 08 08:04:07 crc kubenswrapper[4958]: I1008 08:04:07.818839 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Oct 08 08:04:07 crc kubenswrapper[4958]: I1008 08:04:07.828220 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.828172231 podStartE2EDuration="3.828172231s" podCreationTimestamp="2025-10-08 08:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:04:07.815458656 +0000 UTC m=+5390.945151297" watchObservedRunningTime="2025-10-08 08:04:07.828172231 +0000 UTC m=+5390.957864872"
Oct 08 08:04:07 crc kubenswrapper[4958]: I1008 08:04:07.850528 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.850506465 podStartE2EDuration="3.850506465s" podCreationTimestamp="2025-10-08 08:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:04:07.845453648 +0000 UTC m=+5390.975146289" watchObservedRunningTime="2025-10-08 08:04:07.850506465 +0000 UTC m=+5390.980199076"
Oct 08 08:04:07 crc kubenswrapper[4958]: I1008 08:04:07.882122 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.88209083 podStartE2EDuration="3.88209083s" podCreationTimestamp="2025-10-08 08:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:04:07.874349111 +0000 UTC m=+5391.004041742" watchObservedRunningTime="2025-10-08 08:04:07.88209083 +0000 UTC m=+5391.011783471"
Oct 08 08:04:08 crc kubenswrapper[4958]: I1008 08:04:08.974153 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Oct 08 08:04:09 crc kubenswrapper[4958]: I1008 08:04:09.017310 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1"
Oct 08 08:04:09 crc kubenswrapper[4958]: I1008 08:04:09.023330 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2"
Oct 08 08:04:09 crc kubenswrapper[4958]: I1008 08:04:09.090517 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1"
Oct 08 08:04:09 crc kubenswrapper[4958]: I1008 08:04:09.539608 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2"
Oct 08 08:04:09 crc kubenswrapper[4958]: I1008 08:04:09.565776 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1"
Oct 08 08:04:09 crc kubenswrapper[4958]: I1008 08:04:09.815187 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Oct 08 08:04:09 crc kubenswrapper[4958]: I1008 08:04:09.820006 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1"
Oct 08 08:04:10 crc kubenswrapper[4958]: I1008 08:04:10.615205 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2"
Oct 08 08:04:10 crc kubenswrapper[4958]: I1008 08:04:10.661899 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1"
Oct 08 08:04:10 crc kubenswrapper[4958]: I1008 08:04:10.874220 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Oct 08 08:04:10 crc kubenswrapper[4958]: I1008 08:04:10.961117 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Oct 08 08:04:10 crc kubenswrapper[4958]: I1008 08:04:10.974282 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.023563 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.079646 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.256833 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c9564df97-z5grp"]
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.258590 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9564df97-z5grp"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.260569 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.264192 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c9564df97-z5grp"]
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.376031 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9564df97-z5grp"]
Oct 08 08:04:11 crc kubenswrapper[4958]: E1008 08:04:11.376706 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-g4jr8 ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-c9564df97-z5grp" podUID="19439a32-7584-4b2c-b71a-a8478a052cb2"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.388715 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-config\") pod \"dnsmasq-dns-c9564df97-z5grp\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") " pod="openstack/dnsmasq-dns-c9564df97-z5grp"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.388773 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-dns-svc\") pod \"dnsmasq-dns-c9564df97-z5grp\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") " pod="openstack/dnsmasq-dns-c9564df97-z5grp"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.388792 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4jr8\" (UniqueName: \"kubernetes.io/projected/19439a32-7584-4b2c-b71a-a8478a052cb2-kube-api-access-g4jr8\") pod \"dnsmasq-dns-c9564df97-z5grp\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") " pod="openstack/dnsmasq-dns-c9564df97-z5grp"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.388847 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-ovsdbserver-nb\") pod \"dnsmasq-dns-c9564df97-z5grp\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") " pod="openstack/dnsmasq-dns-c9564df97-z5grp"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.409564 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5899747f-dbwz6"]
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.410844 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.412613 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.421418 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5899747f-dbwz6"]
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.490403 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-config\") pod \"dnsmasq-dns-c9564df97-z5grp\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") " pod="openstack/dnsmasq-dns-c9564df97-z5grp"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.490455 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-dns-svc\") pod \"dnsmasq-dns-c9564df97-z5grp\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") " pod="openstack/dnsmasq-dns-c9564df97-z5grp"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.490483 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4jr8\" (UniqueName: \"kubernetes.io/projected/19439a32-7584-4b2c-b71a-a8478a052cb2-kube-api-access-g4jr8\") pod \"dnsmasq-dns-c9564df97-z5grp\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") " pod="openstack/dnsmasq-dns-c9564df97-z5grp"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.490523 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5899747f-dbwz6\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.490553 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-config\") pod \"dnsmasq-dns-5c5899747f-dbwz6\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.490575 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-ovsdbserver-nb\") pod \"dnsmasq-dns-c9564df97-z5grp\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") " pod="openstack/dnsmasq-dns-c9564df97-z5grp"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.490591 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfr6w\" (UniqueName: \"kubernetes.io/projected/b6cb0c01-55d8-45da-ab84-4a423c424874-kube-api-access-lfr6w\") pod \"dnsmasq-dns-5c5899747f-dbwz6\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.490608 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-dns-svc\") pod \"dnsmasq-dns-5c5899747f-dbwz6\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.490628 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5899747f-dbwz6\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.491266 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-config\") pod \"dnsmasq-dns-c9564df97-z5grp\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") " pod="openstack/dnsmasq-dns-c9564df97-z5grp"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.491303 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-dns-svc\") pod \"dnsmasq-dns-c9564df97-z5grp\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") " pod="openstack/dnsmasq-dns-c9564df97-z5grp"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.491652 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-ovsdbserver-nb\") pod \"dnsmasq-dns-c9564df97-z5grp\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") " pod="openstack/dnsmasq-dns-c9564df97-z5grp"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.509401 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4jr8\" (UniqueName: \"kubernetes.io/projected/19439a32-7584-4b2c-b71a-a8478a052cb2-kube-api-access-g4jr8\") pod \"dnsmasq-dns-c9564df97-z5grp\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") " pod="openstack/dnsmasq-dns-c9564df97-z5grp"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.592143 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5899747f-dbwz6\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.592523 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-config\") pod \"dnsmasq-dns-5c5899747f-dbwz6\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.592834 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfr6w\" (UniqueName: \"kubernetes.io/projected/b6cb0c01-55d8-45da-ab84-4a423c424874-kube-api-access-lfr6w\") pod \"dnsmasq-dns-5c5899747f-dbwz6\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.593220 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-dns-svc\") pod \"dnsmasq-dns-5c5899747f-dbwz6\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.593377 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-config\") pod \"dnsmasq-dns-5c5899747f-dbwz6\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.593695 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5899747f-dbwz6\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.593860 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-dns-svc\") pod \"dnsmasq-dns-5c5899747f-dbwz6\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.593718 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5899747f-dbwz6\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.594858 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5899747f-dbwz6\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.610908 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfr6w\" (UniqueName: \"kubernetes.io/projected/b6cb0c01-55d8-45da-ab84-4a423c424874-kube-api-access-lfr6w\") pod \"dnsmasq-dns-5c5899747f-dbwz6\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.726185 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5899747f-dbwz6"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.842844 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9564df97-z5grp"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.869236 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9564df97-z5grp"
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.900907 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-ovsdbserver-nb\") pod \"19439a32-7584-4b2c-b71a-a8478a052cb2\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") "
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.901013 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-dns-svc\") pod \"19439a32-7584-4b2c-b71a-a8478a052cb2\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") "
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.901067 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-config\") pod \"19439a32-7584-4b2c-b71a-a8478a052cb2\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") "
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.901097 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4jr8\" (UniqueName: \"kubernetes.io/projected/19439a32-7584-4b2c-b71a-a8478a052cb2-kube-api-access-g4jr8\") pod \"19439a32-7584-4b2c-b71a-a8478a052cb2\" (UID: \"19439a32-7584-4b2c-b71a-a8478a052cb2\") "
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.902826 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "19439a32-7584-4b2c-b71a-a8478a052cb2" (UID: "19439a32-7584-4b2c-b71a-a8478a052cb2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.903241 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "19439a32-7584-4b2c-b71a-a8478a052cb2" (UID: "19439a32-7584-4b2c-b71a-a8478a052cb2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.903505 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-config" (OuterVolumeSpecName: "config") pod "19439a32-7584-4b2c-b71a-a8478a052cb2" (UID: "19439a32-7584-4b2c-b71a-a8478a052cb2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 08:04:11 crc kubenswrapper[4958]: I1008 08:04:11.926304 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19439a32-7584-4b2c-b71a-a8478a052cb2-kube-api-access-g4jr8" (OuterVolumeSpecName: "kube-api-access-g4jr8") pod "19439a32-7584-4b2c-b71a-a8478a052cb2" (UID: "19439a32-7584-4b2c-b71a-a8478a052cb2"). InnerVolumeSpecName "kube-api-access-g4jr8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 08:04:12 crc kubenswrapper[4958]: I1008 08:04:12.003164 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4jr8\" (UniqueName: \"kubernetes.io/projected/19439a32-7584-4b2c-b71a-a8478a052cb2-kube-api-access-g4jr8\") on node \"crc\" DevicePath \"\""
Oct 08 08:04:12 crc kubenswrapper[4958]: I1008 08:04:12.003223 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 08 08:04:12 crc kubenswrapper[4958]: I1008 08:04:12.003241 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 08 08:04:12 crc kubenswrapper[4958]: I1008 08:04:12.003257 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19439a32-7584-4b2c-b71a-a8478a052cb2-config\") on node \"crc\" DevicePath \"\""
Oct 08 08:04:12 crc kubenswrapper[4958]: I1008 08:04:12.031316 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Oct 08 08:04:12 crc kubenswrapper[4958]: I1008 08:04:12.077916 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2"
Oct 08 08:04:12 crc kubenswrapper[4958]: I1008 08:04:12.101912 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Oct 08 08:04:12 crc kubenswrapper[4958]: I1008 08:04:12.137097 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2"
Oct 08 08:04:12 crc kubenswrapper[4958]: I1008 08:04:12.222880 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5899747f-dbwz6"]
Oct 08 08:04:12 crc kubenswrapper[4958]: 
W1008 08:04:12.231684 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6cb0c01_55d8_45da_ab84_4a423c424874.slice/crio-34d8eb5c149beb22156f7d42cae98dbb987d576ea590e435473b1ca67e46e7ea WatchSource:0}: Error finding container 34d8eb5c149beb22156f7d42cae98dbb987d576ea590e435473b1ca67e46e7ea: Status 404 returned error can't find the container with id 34d8eb5c149beb22156f7d42cae98dbb987d576ea590e435473b1ca67e46e7ea Oct 08 08:04:12 crc kubenswrapper[4958]: I1008 08:04:12.858764 4958 generic.go:334] "Generic (PLEG): container finished" podID="b6cb0c01-55d8-45da-ab84-4a423c424874" containerID="821128a0d2e599be1bdd32c2c8a559c70063b122e09f91eb16a2677226975d35" exitCode=0 Oct 08 08:04:12 crc kubenswrapper[4958]: I1008 08:04:12.859363 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9564df97-z5grp" Oct 08 08:04:12 crc kubenswrapper[4958]: I1008 08:04:12.858833 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5899747f-dbwz6" event={"ID":"b6cb0c01-55d8-45da-ab84-4a423c424874","Type":"ContainerDied","Data":"821128a0d2e599be1bdd32c2c8a559c70063b122e09f91eb16a2677226975d35"} Oct 08 08:04:12 crc kubenswrapper[4958]: I1008 08:04:12.859465 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5899747f-dbwz6" event={"ID":"b6cb0c01-55d8-45da-ab84-4a423c424874","Type":"ContainerStarted","Data":"34d8eb5c149beb22156f7d42cae98dbb987d576ea590e435473b1ca67e46e7ea"} Oct 08 08:04:13 crc kubenswrapper[4958]: I1008 08:04:13.126001 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9564df97-z5grp"] Oct 08 08:04:13 crc kubenswrapper[4958]: I1008 08:04:13.131422 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c9564df97-z5grp"] Oct 08 08:04:13 crc kubenswrapper[4958]: I1008 08:04:13.595900 4958 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="19439a32-7584-4b2c-b71a-a8478a052cb2" path="/var/lib/kubelet/pods/19439a32-7584-4b2c-b71a-a8478a052cb2/volumes" Oct 08 08:04:13 crc kubenswrapper[4958]: I1008 08:04:13.880311 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5899747f-dbwz6" event={"ID":"b6cb0c01-55d8-45da-ab84-4a423c424874","Type":"ContainerStarted","Data":"c3c3db933ef819abbaebb9c0c4558a1523aa3b3997a9744605d3ef838e45067d"} Oct 08 08:04:13 crc kubenswrapper[4958]: I1008 08:04:13.880740 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c5899747f-dbwz6" Oct 08 08:04:13 crc kubenswrapper[4958]: I1008 08:04:13.928498 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c5899747f-dbwz6" podStartSLOduration=2.928471738 podStartE2EDuration="2.928471738s" podCreationTimestamp="2025-10-08 08:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:04:13.924184142 +0000 UTC m=+5397.053876783" watchObservedRunningTime="2025-10-08 08:04:13.928471738 +0000 UTC m=+5397.058164369" Oct 08 08:04:14 crc kubenswrapper[4958]: I1008 08:04:14.621253 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Oct 08 08:04:14 crc kubenswrapper[4958]: I1008 08:04:14.630333 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.091633 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.094504 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.097665 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.105260 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.228938 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr27k\" (UniqueName: \"kubernetes.io/projected/3145cf95-11d2-48dd-aa60-f8f4dc91d130-kube-api-access-kr27k\") pod \"ovn-copy-data\" (UID: \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\") " pod="openstack/ovn-copy-data" Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.229136 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-269d466d-4666-4d00-818d-2e3ec2b0b202\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-269d466d-4666-4d00-818d-2e3ec2b0b202\") pod \"ovn-copy-data\" (UID: \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\") " pod="openstack/ovn-copy-data" Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.229304 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3145cf95-11d2-48dd-aa60-f8f4dc91d130-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\") " pod="openstack/ovn-copy-data" Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.331080 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr27k\" (UniqueName: \"kubernetes.io/projected/3145cf95-11d2-48dd-aa60-f8f4dc91d130-kube-api-access-kr27k\") pod \"ovn-copy-data\" (UID: \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\") " pod="openstack/ovn-copy-data" Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.331516 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-269d466d-4666-4d00-818d-2e3ec2b0b202\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-269d466d-4666-4d00-818d-2e3ec2b0b202\") pod \"ovn-copy-data\" (UID: \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\") " pod="openstack/ovn-copy-data" Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.331897 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3145cf95-11d2-48dd-aa60-f8f4dc91d130-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\") " pod="openstack/ovn-copy-data" Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.334846 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.334909 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-269d466d-4666-4d00-818d-2e3ec2b0b202\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-269d466d-4666-4d00-818d-2e3ec2b0b202\") pod \"ovn-copy-data\" (UID: \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/801db15dd5b1f7eecd74092bde5bdd288ed906b91628afc1c7d27265bcbae334/globalmount\"" pod="openstack/ovn-copy-data" Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.349325 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3145cf95-11d2-48dd-aa60-f8f4dc91d130-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\") " pod="openstack/ovn-copy-data" Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.354679 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr27k\" (UniqueName: 
\"kubernetes.io/projected/3145cf95-11d2-48dd-aa60-f8f4dc91d130-kube-api-access-kr27k\") pod \"ovn-copy-data\" (UID: \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\") " pod="openstack/ovn-copy-data" Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.375074 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-269d466d-4666-4d00-818d-2e3ec2b0b202\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-269d466d-4666-4d00-818d-2e3ec2b0b202\") pod \"ovn-copy-data\" (UID: \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\") " pod="openstack/ovn-copy-data" Oct 08 08:04:18 crc kubenswrapper[4958]: I1008 08:04:18.427391 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Oct 08 08:04:19 crc kubenswrapper[4958]: I1008 08:04:19.072570 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Oct 08 08:04:19 crc kubenswrapper[4958]: I1008 08:04:19.940339 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"3145cf95-11d2-48dd-aa60-f8f4dc91d130","Type":"ContainerStarted","Data":"de6700f8df8cb0fa8995fee7b45dbeb2c06661c664ccac49fc4c908bf8cedd56"} Oct 08 08:04:20 crc kubenswrapper[4958]: I1008 08:04:20.950968 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"3145cf95-11d2-48dd-aa60-f8f4dc91d130","Type":"ContainerStarted","Data":"996dbf9acb795e8dfd22760f22da69c08a8c759c537d4f1cc1a09451a135fb7c"} Oct 08 08:04:20 crc kubenswrapper[4958]: I1008 08:04:20.983363 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.062408467 podStartE2EDuration="3.983325223s" podCreationTimestamp="2025-10-08 08:04:17 +0000 UTC" firstStartedPulling="2025-10-08 08:04:19.076499212 +0000 UTC m=+5402.206191813" lastFinishedPulling="2025-10-08 08:04:19.997415978 +0000 UTC m=+5403.127108569" observedRunningTime="2025-10-08 08:04:20.971864572 
+0000 UTC m=+5404.101557173" watchObservedRunningTime="2025-10-08 08:04:20.983325223 +0000 UTC m=+5404.113017864" Oct 08 08:04:21 crc kubenswrapper[4958]: I1008 08:04:21.577592 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:04:21 crc kubenswrapper[4958]: E1008 08:04:21.578078 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:04:21 crc kubenswrapper[4958]: I1008 08:04:21.728190 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c5899747f-dbwz6" Oct 08 08:04:21 crc kubenswrapper[4958]: I1008 08:04:21.823229 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdc957c47-z75mj"] Oct 08 08:04:21 crc kubenswrapper[4958]: I1008 08:04:21.823566 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" podUID="de169cef-7e09-4cb9-b565-2f5291ad55f8" containerName="dnsmasq-dns" containerID="cri-o://4301cca445a645afbc63dcd41edd11606362b15faab94927d1aa2ca6c68de1d6" gracePeriod=10 Oct 08 08:04:21 crc kubenswrapper[4958]: I1008 08:04:21.964599 4958 generic.go:334] "Generic (PLEG): container finished" podID="de169cef-7e09-4cb9-b565-2f5291ad55f8" containerID="4301cca445a645afbc63dcd41edd11606362b15faab94927d1aa2ca6c68de1d6" exitCode=0 Oct 08 08:04:21 crc kubenswrapper[4958]: I1008 08:04:21.964689 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" 
event={"ID":"de169cef-7e09-4cb9-b565-2f5291ad55f8","Type":"ContainerDied","Data":"4301cca445a645afbc63dcd41edd11606362b15faab94927d1aa2ca6c68de1d6"} Oct 08 08:04:22 crc kubenswrapper[4958]: I1008 08:04:22.281580 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" Oct 08 08:04:22 crc kubenswrapper[4958]: I1008 08:04:22.404154 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de169cef-7e09-4cb9-b565-2f5291ad55f8-config\") pod \"de169cef-7e09-4cb9-b565-2f5291ad55f8\" (UID: \"de169cef-7e09-4cb9-b565-2f5291ad55f8\") " Oct 08 08:04:22 crc kubenswrapper[4958]: I1008 08:04:22.404286 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdwn7\" (UniqueName: \"kubernetes.io/projected/de169cef-7e09-4cb9-b565-2f5291ad55f8-kube-api-access-qdwn7\") pod \"de169cef-7e09-4cb9-b565-2f5291ad55f8\" (UID: \"de169cef-7e09-4cb9-b565-2f5291ad55f8\") " Oct 08 08:04:22 crc kubenswrapper[4958]: I1008 08:04:22.404315 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de169cef-7e09-4cb9-b565-2f5291ad55f8-dns-svc\") pod \"de169cef-7e09-4cb9-b565-2f5291ad55f8\" (UID: \"de169cef-7e09-4cb9-b565-2f5291ad55f8\") " Oct 08 08:04:22 crc kubenswrapper[4958]: I1008 08:04:22.409496 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de169cef-7e09-4cb9-b565-2f5291ad55f8-kube-api-access-qdwn7" (OuterVolumeSpecName: "kube-api-access-qdwn7") pod "de169cef-7e09-4cb9-b565-2f5291ad55f8" (UID: "de169cef-7e09-4cb9-b565-2f5291ad55f8"). InnerVolumeSpecName "kube-api-access-qdwn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:04:22 crc kubenswrapper[4958]: I1008 08:04:22.444484 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de169cef-7e09-4cb9-b565-2f5291ad55f8-config" (OuterVolumeSpecName: "config") pod "de169cef-7e09-4cb9-b565-2f5291ad55f8" (UID: "de169cef-7e09-4cb9-b565-2f5291ad55f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:04:22 crc kubenswrapper[4958]: I1008 08:04:22.446384 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de169cef-7e09-4cb9-b565-2f5291ad55f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de169cef-7e09-4cb9-b565-2f5291ad55f8" (UID: "de169cef-7e09-4cb9-b565-2f5291ad55f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:04:22 crc kubenswrapper[4958]: I1008 08:04:22.505877 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de169cef-7e09-4cb9-b565-2f5291ad55f8-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:04:22 crc kubenswrapper[4958]: I1008 08:04:22.505910 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdwn7\" (UniqueName: \"kubernetes.io/projected/de169cef-7e09-4cb9-b565-2f5291ad55f8-kube-api-access-qdwn7\") on node \"crc\" DevicePath \"\"" Oct 08 08:04:22 crc kubenswrapper[4958]: I1008 08:04:22.505922 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de169cef-7e09-4cb9-b565-2f5291ad55f8-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 08:04:22 crc kubenswrapper[4958]: I1008 08:04:22.972343 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" event={"ID":"de169cef-7e09-4cb9-b565-2f5291ad55f8","Type":"ContainerDied","Data":"9027121a7b16144a6e4ec4c7f7ded33d1a9ffa4935d4ce2438e2e52cb1d90105"} Oct 
08 08:04:22 crc kubenswrapper[4958]: I1008 08:04:22.972402 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdc957c47-z75mj" Oct 08 08:04:22 crc kubenswrapper[4958]: I1008 08:04:22.973539 4958 scope.go:117] "RemoveContainer" containerID="4301cca445a645afbc63dcd41edd11606362b15faab94927d1aa2ca6c68de1d6" Oct 08 08:04:22 crc kubenswrapper[4958]: I1008 08:04:22.990187 4958 scope.go:117] "RemoveContainer" containerID="2e96dc1ea4f8c70ed2944c76915c899b2108bf03109d0f594f8ef5fa624027a9" Oct 08 08:04:23 crc kubenswrapper[4958]: I1008 08:04:23.003530 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdc957c47-z75mj"] Oct 08 08:04:23 crc kubenswrapper[4958]: I1008 08:04:23.009727 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fdc957c47-z75mj"] Oct 08 08:04:23 crc kubenswrapper[4958]: I1008 08:04:23.594517 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de169cef-7e09-4cb9-b565-2f5291ad55f8" path="/var/lib/kubelet/pods/de169cef-7e09-4cb9-b565-2f5291ad55f8/volumes" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.167330 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 08 08:04:27 crc kubenswrapper[4958]: E1008 08:04:27.168029 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de169cef-7e09-4cb9-b565-2f5291ad55f8" containerName="dnsmasq-dns" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.168041 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="de169cef-7e09-4cb9-b565-2f5291ad55f8" containerName="dnsmasq-dns" Oct 08 08:04:27 crc kubenswrapper[4958]: E1008 08:04:27.168059 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de169cef-7e09-4cb9-b565-2f5291ad55f8" containerName="init" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.168065 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="de169cef-7e09-4cb9-b565-2f5291ad55f8" containerName="init" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.168200 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="de169cef-7e09-4cb9-b565-2f5291ad55f8" containerName="dnsmasq-dns" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.168963 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.181112 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.181178 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.181275 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-f9s85" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.181328 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.203626 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.293037 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9697b696-83b6-40de-a443-9705c0475f3c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.293083 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9697b696-83b6-40de-a443-9705c0475f3c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.293112 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9697b696-83b6-40de-a443-9705c0475f3c-config\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.293129 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9697b696-83b6-40de-a443-9705c0475f3c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.293255 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9697b696-83b6-40de-a443-9705c0475f3c-scripts\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.293273 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9697b696-83b6-40de-a443-9705c0475f3c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.293429 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v87ms\" (UniqueName: \"kubernetes.io/projected/9697b696-83b6-40de-a443-9705c0475f3c-kube-api-access-v87ms\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.395258 
4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v87ms\" (UniqueName: \"kubernetes.io/projected/9697b696-83b6-40de-a443-9705c0475f3c-kube-api-access-v87ms\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.395348 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9697b696-83b6-40de-a443-9705c0475f3c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.395367 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9697b696-83b6-40de-a443-9705c0475f3c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.395388 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9697b696-83b6-40de-a443-9705c0475f3c-config\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.395406 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9697b696-83b6-40de-a443-9705c0475f3c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.395447 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9697b696-83b6-40de-a443-9705c0475f3c-scripts\") pod \"ovn-northd-0\" 
(UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.395464 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9697b696-83b6-40de-a443-9705c0475f3c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.396684 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9697b696-83b6-40de-a443-9705c0475f3c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.397120 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9697b696-83b6-40de-a443-9705c0475f3c-scripts\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.397384 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9697b696-83b6-40de-a443-9705c0475f3c-config\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.401488 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9697b696-83b6-40de-a443-9705c0475f3c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.402817 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9697b696-83b6-40de-a443-9705c0475f3c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.409754 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9697b696-83b6-40de-a443-9705c0475f3c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.413679 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v87ms\" (UniqueName: \"kubernetes.io/projected/9697b696-83b6-40de-a443-9705c0475f3c-kube-api-access-v87ms\") pod \"ovn-northd-0\" (UID: \"9697b696-83b6-40de-a443-9705c0475f3c\") " pod="openstack/ovn-northd-0" Oct 08 08:04:27 crc kubenswrapper[4958]: I1008 08:04:27.495929 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 08 08:04:28 crc kubenswrapper[4958]: W1008 08:04:28.066294 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9697b696_83b6_40de_a443_9705c0475f3c.slice/crio-34e716263ec16d86a6e203a917c61dc35bccbb9405f7fbc2072bc90fec98f91e WatchSource:0}: Error finding container 34e716263ec16d86a6e203a917c61dc35bccbb9405f7fbc2072bc90fec98f91e: Status 404 returned error can't find the container with id 34e716263ec16d86a6e203a917c61dc35bccbb9405f7fbc2072bc90fec98f91e Oct 08 08:04:28 crc kubenswrapper[4958]: I1008 08:04:28.068376 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 08 08:04:29 crc kubenswrapper[4958]: I1008 08:04:29.044819 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9697b696-83b6-40de-a443-9705c0475f3c","Type":"ContainerStarted","Data":"0dfaf6c53092f8b1bc61d1c57874c4baf9e77bd9dff5c6bb20e59402b4c8e3ca"} Oct 08 08:04:29 crc kubenswrapper[4958]: I1008 08:04:29.045334 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 08 08:04:29 crc kubenswrapper[4958]: I1008 08:04:29.045358 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9697b696-83b6-40de-a443-9705c0475f3c","Type":"ContainerStarted","Data":"252b8bb9c6e4300504b5bf35310e99b3c5d599c3ea02678b0df2e33c8d4b2d89"} Oct 08 08:04:29 crc kubenswrapper[4958]: I1008 08:04:29.045376 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9697b696-83b6-40de-a443-9705c0475f3c","Type":"ContainerStarted","Data":"34e716263ec16d86a6e203a917c61dc35bccbb9405f7fbc2072bc90fec98f91e"} Oct 08 08:04:29 crc kubenswrapper[4958]: I1008 08:04:29.087004 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.086931875 
podStartE2EDuration="2.086931875s" podCreationTimestamp="2025-10-08 08:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:04:29.07418172 +0000 UTC m=+5412.203874411" watchObservedRunningTime="2025-10-08 08:04:29.086931875 +0000 UTC m=+5412.216624506" Oct 08 08:04:33 crc kubenswrapper[4958]: I1008 08:04:33.243394 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-gl9jm"] Oct 08 08:04:33 crc kubenswrapper[4958]: I1008 08:04:33.245160 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gl9jm" Oct 08 08:04:33 crc kubenswrapper[4958]: I1008 08:04:33.261144 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gl9jm"] Oct 08 08:04:33 crc kubenswrapper[4958]: I1008 08:04:33.337626 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl92s\" (UniqueName: \"kubernetes.io/projected/480ac42e-df8d-4be0-bbe4-70e5195aa5cd-kube-api-access-jl92s\") pod \"keystone-db-create-gl9jm\" (UID: \"480ac42e-df8d-4be0-bbe4-70e5195aa5cd\") " pod="openstack/keystone-db-create-gl9jm" Oct 08 08:04:33 crc kubenswrapper[4958]: I1008 08:04:33.439053 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl92s\" (UniqueName: \"kubernetes.io/projected/480ac42e-df8d-4be0-bbe4-70e5195aa5cd-kube-api-access-jl92s\") pod \"keystone-db-create-gl9jm\" (UID: \"480ac42e-df8d-4be0-bbe4-70e5195aa5cd\") " pod="openstack/keystone-db-create-gl9jm" Oct 08 08:04:33 crc kubenswrapper[4958]: I1008 08:04:33.466394 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl92s\" (UniqueName: \"kubernetes.io/projected/480ac42e-df8d-4be0-bbe4-70e5195aa5cd-kube-api-access-jl92s\") pod \"keystone-db-create-gl9jm\" (UID: 
\"480ac42e-df8d-4be0-bbe4-70e5195aa5cd\") " pod="openstack/keystone-db-create-gl9jm" Oct 08 08:04:33 crc kubenswrapper[4958]: I1008 08:04:33.573525 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gl9jm" Oct 08 08:04:34 crc kubenswrapper[4958]: I1008 08:04:34.061865 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gl9jm"] Oct 08 08:04:34 crc kubenswrapper[4958]: I1008 08:04:34.085170 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gl9jm" event={"ID":"480ac42e-df8d-4be0-bbe4-70e5195aa5cd","Type":"ContainerStarted","Data":"51d189170abb745ce4d61000525a5bfa7ca13a20966eb19934cb959a2b7ec0dc"} Oct 08 08:04:35 crc kubenswrapper[4958]: I1008 08:04:35.098524 4958 generic.go:334] "Generic (PLEG): container finished" podID="480ac42e-df8d-4be0-bbe4-70e5195aa5cd" containerID="2c08669a62195090d46480383589109218076d0c333a3dd4db91e66f6cfffdee" exitCode=0 Oct 08 08:04:35 crc kubenswrapper[4958]: I1008 08:04:35.098633 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gl9jm" event={"ID":"480ac42e-df8d-4be0-bbe4-70e5195aa5cd","Type":"ContainerDied","Data":"2c08669a62195090d46480383589109218076d0c333a3dd4db91e66f6cfffdee"} Oct 08 08:04:36 crc kubenswrapper[4958]: I1008 08:04:36.504990 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-gl9jm" Oct 08 08:04:36 crc kubenswrapper[4958]: I1008 08:04:36.577229 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:04:36 crc kubenswrapper[4958]: E1008 08:04:36.578483 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:04:36 crc kubenswrapper[4958]: I1008 08:04:36.597124 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl92s\" (UniqueName: \"kubernetes.io/projected/480ac42e-df8d-4be0-bbe4-70e5195aa5cd-kube-api-access-jl92s\") pod \"480ac42e-df8d-4be0-bbe4-70e5195aa5cd\" (UID: \"480ac42e-df8d-4be0-bbe4-70e5195aa5cd\") " Oct 08 08:04:36 crc kubenswrapper[4958]: I1008 08:04:36.614086 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/480ac42e-df8d-4be0-bbe4-70e5195aa5cd-kube-api-access-jl92s" (OuterVolumeSpecName: "kube-api-access-jl92s") pod "480ac42e-df8d-4be0-bbe4-70e5195aa5cd" (UID: "480ac42e-df8d-4be0-bbe4-70e5195aa5cd"). InnerVolumeSpecName "kube-api-access-jl92s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:04:36 crc kubenswrapper[4958]: I1008 08:04:36.704042 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl92s\" (UniqueName: \"kubernetes.io/projected/480ac42e-df8d-4be0-bbe4-70e5195aa5cd-kube-api-access-jl92s\") on node \"crc\" DevicePath \"\"" Oct 08 08:04:37 crc kubenswrapper[4958]: I1008 08:04:37.123281 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gl9jm" event={"ID":"480ac42e-df8d-4be0-bbe4-70e5195aa5cd","Type":"ContainerDied","Data":"51d189170abb745ce4d61000525a5bfa7ca13a20966eb19934cb959a2b7ec0dc"} Oct 08 08:04:37 crc kubenswrapper[4958]: I1008 08:04:37.123344 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51d189170abb745ce4d61000525a5bfa7ca13a20966eb19934cb959a2b7ec0dc" Oct 08 08:04:37 crc kubenswrapper[4958]: I1008 08:04:37.123370 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gl9jm" Oct 08 08:04:42 crc kubenswrapper[4958]: I1008 08:04:42.584717 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 08 08:04:43 crc kubenswrapper[4958]: I1008 08:04:43.388128 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-92d9-account-create-h22m8"] Oct 08 08:04:43 crc kubenswrapper[4958]: E1008 08:04:43.389496 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480ac42e-df8d-4be0-bbe4-70e5195aa5cd" containerName="mariadb-database-create" Oct 08 08:04:43 crc kubenswrapper[4958]: I1008 08:04:43.389525 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="480ac42e-df8d-4be0-bbe4-70e5195aa5cd" containerName="mariadb-database-create" Oct 08 08:04:43 crc kubenswrapper[4958]: I1008 08:04:43.389764 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="480ac42e-df8d-4be0-bbe4-70e5195aa5cd" 
containerName="mariadb-database-create" Oct 08 08:04:43 crc kubenswrapper[4958]: I1008 08:04:43.390875 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-92d9-account-create-h22m8" Oct 08 08:04:43 crc kubenswrapper[4958]: I1008 08:04:43.401139 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 08 08:04:43 crc kubenswrapper[4958]: I1008 08:04:43.408426 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-92d9-account-create-h22m8"] Oct 08 08:04:43 crc kubenswrapper[4958]: I1008 08:04:43.537697 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxw9x\" (UniqueName: \"kubernetes.io/projected/b0bfc2f8-79d0-4cba-bebd-caa291c9aa44-kube-api-access-rxw9x\") pod \"keystone-92d9-account-create-h22m8\" (UID: \"b0bfc2f8-79d0-4cba-bebd-caa291c9aa44\") " pod="openstack/keystone-92d9-account-create-h22m8" Oct 08 08:04:43 crc kubenswrapper[4958]: I1008 08:04:43.639543 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxw9x\" (UniqueName: \"kubernetes.io/projected/b0bfc2f8-79d0-4cba-bebd-caa291c9aa44-kube-api-access-rxw9x\") pod \"keystone-92d9-account-create-h22m8\" (UID: \"b0bfc2f8-79d0-4cba-bebd-caa291c9aa44\") " pod="openstack/keystone-92d9-account-create-h22m8" Oct 08 08:04:43 crc kubenswrapper[4958]: I1008 08:04:43.678804 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxw9x\" (UniqueName: \"kubernetes.io/projected/b0bfc2f8-79d0-4cba-bebd-caa291c9aa44-kube-api-access-rxw9x\") pod \"keystone-92d9-account-create-h22m8\" (UID: \"b0bfc2f8-79d0-4cba-bebd-caa291c9aa44\") " pod="openstack/keystone-92d9-account-create-h22m8" Oct 08 08:04:43 crc kubenswrapper[4958]: I1008 08:04:43.714219 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-92d9-account-create-h22m8" Oct 08 08:04:44 crc kubenswrapper[4958]: I1008 08:04:44.239083 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-92d9-account-create-h22m8"] Oct 08 08:04:45 crc kubenswrapper[4958]: I1008 08:04:45.227061 4958 generic.go:334] "Generic (PLEG): container finished" podID="b0bfc2f8-79d0-4cba-bebd-caa291c9aa44" containerID="f6dc98b678362491dd85e6c0c1aedc3270eff489c76277354a8c2c510a023028" exitCode=0 Oct 08 08:04:45 crc kubenswrapper[4958]: I1008 08:04:45.227213 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-92d9-account-create-h22m8" event={"ID":"b0bfc2f8-79d0-4cba-bebd-caa291c9aa44","Type":"ContainerDied","Data":"f6dc98b678362491dd85e6c0c1aedc3270eff489c76277354a8c2c510a023028"} Oct 08 08:04:45 crc kubenswrapper[4958]: I1008 08:04:45.229012 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-92d9-account-create-h22m8" event={"ID":"b0bfc2f8-79d0-4cba-bebd-caa291c9aa44","Type":"ContainerStarted","Data":"a2f3cb1100dab4cf465d5d08bc850b95539ddfb83871d35955ea3de37a29d6af"} Oct 08 08:04:46 crc kubenswrapper[4958]: I1008 08:04:46.653787 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-92d9-account-create-h22m8" Oct 08 08:04:46 crc kubenswrapper[4958]: I1008 08:04:46.815995 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxw9x\" (UniqueName: \"kubernetes.io/projected/b0bfc2f8-79d0-4cba-bebd-caa291c9aa44-kube-api-access-rxw9x\") pod \"b0bfc2f8-79d0-4cba-bebd-caa291c9aa44\" (UID: \"b0bfc2f8-79d0-4cba-bebd-caa291c9aa44\") " Oct 08 08:04:46 crc kubenswrapper[4958]: I1008 08:04:46.820731 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0bfc2f8-79d0-4cba-bebd-caa291c9aa44-kube-api-access-rxw9x" (OuterVolumeSpecName: "kube-api-access-rxw9x") pod "b0bfc2f8-79d0-4cba-bebd-caa291c9aa44" (UID: "b0bfc2f8-79d0-4cba-bebd-caa291c9aa44"). InnerVolumeSpecName "kube-api-access-rxw9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:04:46 crc kubenswrapper[4958]: I1008 08:04:46.920101 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxw9x\" (UniqueName: \"kubernetes.io/projected/b0bfc2f8-79d0-4cba-bebd-caa291c9aa44-kube-api-access-rxw9x\") on node \"crc\" DevicePath \"\"" Oct 08 08:04:47 crc kubenswrapper[4958]: I1008 08:04:47.257052 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-92d9-account-create-h22m8" event={"ID":"b0bfc2f8-79d0-4cba-bebd-caa291c9aa44","Type":"ContainerDied","Data":"a2f3cb1100dab4cf465d5d08bc850b95539ddfb83871d35955ea3de37a29d6af"} Oct 08 08:04:47 crc kubenswrapper[4958]: I1008 08:04:47.257120 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2f3cb1100dab4cf465d5d08bc850b95539ddfb83871d35955ea3de37a29d6af" Oct 08 08:04:47 crc kubenswrapper[4958]: I1008 08:04:47.257174 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-92d9-account-create-h22m8" Oct 08 08:04:48 crc kubenswrapper[4958]: I1008 08:04:48.787226 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-t8f9x"] Oct 08 08:04:48 crc kubenswrapper[4958]: E1008 08:04:48.787917 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0bfc2f8-79d0-4cba-bebd-caa291c9aa44" containerName="mariadb-account-create" Oct 08 08:04:48 crc kubenswrapper[4958]: I1008 08:04:48.787935 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0bfc2f8-79d0-4cba-bebd-caa291c9aa44" containerName="mariadb-account-create" Oct 08 08:04:48 crc kubenswrapper[4958]: I1008 08:04:48.788210 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0bfc2f8-79d0-4cba-bebd-caa291c9aa44" containerName="mariadb-account-create" Oct 08 08:04:48 crc kubenswrapper[4958]: I1008 08:04:48.788911 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-t8f9x" Oct 08 08:04:48 crc kubenswrapper[4958]: I1008 08:04:48.791495 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 08 08:04:48 crc kubenswrapper[4958]: I1008 08:04:48.792218 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 08 08:04:48 crc kubenswrapper[4958]: I1008 08:04:48.793850 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 08 08:04:48 crc kubenswrapper[4958]: I1008 08:04:48.798109 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qrfkz" Oct 08 08:04:48 crc kubenswrapper[4958]: I1008 08:04:48.807249 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-t8f9x"] Oct 08 08:04:48 crc kubenswrapper[4958]: I1008 08:04:48.964668 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-m4dj8\" (UniqueName: \"kubernetes.io/projected/13d8b431-762b-4977-b54d-04c9fd7cc9e4-kube-api-access-m4dj8\") pod \"keystone-db-sync-t8f9x\" (UID: \"13d8b431-762b-4977-b54d-04c9fd7cc9e4\") " pod="openstack/keystone-db-sync-t8f9x" Oct 08 08:04:48 crc kubenswrapper[4958]: I1008 08:04:48.965126 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d8b431-762b-4977-b54d-04c9fd7cc9e4-combined-ca-bundle\") pod \"keystone-db-sync-t8f9x\" (UID: \"13d8b431-762b-4977-b54d-04c9fd7cc9e4\") " pod="openstack/keystone-db-sync-t8f9x" Oct 08 08:04:48 crc kubenswrapper[4958]: I1008 08:04:48.965231 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d8b431-762b-4977-b54d-04c9fd7cc9e4-config-data\") pod \"keystone-db-sync-t8f9x\" (UID: \"13d8b431-762b-4977-b54d-04c9fd7cc9e4\") " pod="openstack/keystone-db-sync-t8f9x" Oct 08 08:04:49 crc kubenswrapper[4958]: I1008 08:04:49.066507 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4dj8\" (UniqueName: \"kubernetes.io/projected/13d8b431-762b-4977-b54d-04c9fd7cc9e4-kube-api-access-m4dj8\") pod \"keystone-db-sync-t8f9x\" (UID: \"13d8b431-762b-4977-b54d-04c9fd7cc9e4\") " pod="openstack/keystone-db-sync-t8f9x" Oct 08 08:04:49 crc kubenswrapper[4958]: I1008 08:04:49.066675 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d8b431-762b-4977-b54d-04c9fd7cc9e4-combined-ca-bundle\") pod \"keystone-db-sync-t8f9x\" (UID: \"13d8b431-762b-4977-b54d-04c9fd7cc9e4\") " pod="openstack/keystone-db-sync-t8f9x" Oct 08 08:04:49 crc kubenswrapper[4958]: I1008 08:04:49.066712 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/13d8b431-762b-4977-b54d-04c9fd7cc9e4-config-data\") pod \"keystone-db-sync-t8f9x\" (UID: \"13d8b431-762b-4977-b54d-04c9fd7cc9e4\") " pod="openstack/keystone-db-sync-t8f9x" Oct 08 08:04:49 crc kubenswrapper[4958]: I1008 08:04:49.075663 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d8b431-762b-4977-b54d-04c9fd7cc9e4-config-data\") pod \"keystone-db-sync-t8f9x\" (UID: \"13d8b431-762b-4977-b54d-04c9fd7cc9e4\") " pod="openstack/keystone-db-sync-t8f9x" Oct 08 08:04:49 crc kubenswrapper[4958]: I1008 08:04:49.076547 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d8b431-762b-4977-b54d-04c9fd7cc9e4-combined-ca-bundle\") pod \"keystone-db-sync-t8f9x\" (UID: \"13d8b431-762b-4977-b54d-04c9fd7cc9e4\") " pod="openstack/keystone-db-sync-t8f9x" Oct 08 08:04:49 crc kubenswrapper[4958]: I1008 08:04:49.090410 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4dj8\" (UniqueName: \"kubernetes.io/projected/13d8b431-762b-4977-b54d-04c9fd7cc9e4-kube-api-access-m4dj8\") pod \"keystone-db-sync-t8f9x\" (UID: \"13d8b431-762b-4977-b54d-04c9fd7cc9e4\") " pod="openstack/keystone-db-sync-t8f9x" Oct 08 08:04:49 crc kubenswrapper[4958]: I1008 08:04:49.108171 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-t8f9x" Oct 08 08:04:49 crc kubenswrapper[4958]: I1008 08:04:49.415525 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-t8f9x"] Oct 08 08:04:50 crc kubenswrapper[4958]: I1008 08:04:50.285484 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t8f9x" event={"ID":"13d8b431-762b-4977-b54d-04c9fd7cc9e4","Type":"ContainerStarted","Data":"05b96fe9dcdb9fae7f3c3e47e8cfa396d8243e232e5012518cae06fa888d1004"} Oct 08 08:04:50 crc kubenswrapper[4958]: I1008 08:04:50.286031 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t8f9x" event={"ID":"13d8b431-762b-4977-b54d-04c9fd7cc9e4","Type":"ContainerStarted","Data":"8b1167e73a11033d0b829a528d75a1456396615fee3e588a293610d25acbf600"} Oct 08 08:04:50 crc kubenswrapper[4958]: I1008 08:04:50.317030 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-t8f9x" podStartSLOduration=2.316999452 podStartE2EDuration="2.316999452s" podCreationTimestamp="2025-10-08 08:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:04:50.310077425 +0000 UTC m=+5433.439770046" watchObservedRunningTime="2025-10-08 08:04:50.316999452 +0000 UTC m=+5433.446692063" Oct 08 08:04:50 crc kubenswrapper[4958]: I1008 08:04:50.576392 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:04:50 crc kubenswrapper[4958]: E1008 08:04:50.576979 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:04:51 crc kubenswrapper[4958]: I1008 08:04:51.297021 4958 generic.go:334] "Generic (PLEG): container finished" podID="13d8b431-762b-4977-b54d-04c9fd7cc9e4" containerID="05b96fe9dcdb9fae7f3c3e47e8cfa396d8243e232e5012518cae06fa888d1004" exitCode=0 Oct 08 08:04:51 crc kubenswrapper[4958]: I1008 08:04:51.297103 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t8f9x" event={"ID":"13d8b431-762b-4977-b54d-04c9fd7cc9e4","Type":"ContainerDied","Data":"05b96fe9dcdb9fae7f3c3e47e8cfa396d8243e232e5012518cae06fa888d1004"} Oct 08 08:04:52 crc kubenswrapper[4958]: I1008 08:04:52.700284 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-t8f9x" Oct 08 08:04:52 crc kubenswrapper[4958]: I1008 08:04:52.843085 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d8b431-762b-4977-b54d-04c9fd7cc9e4-combined-ca-bundle\") pod \"13d8b431-762b-4977-b54d-04c9fd7cc9e4\" (UID: \"13d8b431-762b-4977-b54d-04c9fd7cc9e4\") " Oct 08 08:04:52 crc kubenswrapper[4958]: I1008 08:04:52.843331 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d8b431-762b-4977-b54d-04c9fd7cc9e4-config-data\") pod \"13d8b431-762b-4977-b54d-04c9fd7cc9e4\" (UID: \"13d8b431-762b-4977-b54d-04c9fd7cc9e4\") " Oct 08 08:04:52 crc kubenswrapper[4958]: I1008 08:04:52.843509 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4dj8\" (UniqueName: \"kubernetes.io/projected/13d8b431-762b-4977-b54d-04c9fd7cc9e4-kube-api-access-m4dj8\") pod \"13d8b431-762b-4977-b54d-04c9fd7cc9e4\" (UID: \"13d8b431-762b-4977-b54d-04c9fd7cc9e4\") " Oct 08 08:04:52 crc kubenswrapper[4958]: I1008 08:04:52.851042 4958 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13d8b431-762b-4977-b54d-04c9fd7cc9e4-kube-api-access-m4dj8" (OuterVolumeSpecName: "kube-api-access-m4dj8") pod "13d8b431-762b-4977-b54d-04c9fd7cc9e4" (UID: "13d8b431-762b-4977-b54d-04c9fd7cc9e4"). InnerVolumeSpecName "kube-api-access-m4dj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:04:52 crc kubenswrapper[4958]: I1008 08:04:52.884156 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d8b431-762b-4977-b54d-04c9fd7cc9e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13d8b431-762b-4977-b54d-04c9fd7cc9e4" (UID: "13d8b431-762b-4977-b54d-04c9fd7cc9e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:04:52 crc kubenswrapper[4958]: I1008 08:04:52.898379 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d8b431-762b-4977-b54d-04c9fd7cc9e4-config-data" (OuterVolumeSpecName: "config-data") pod "13d8b431-762b-4977-b54d-04c9fd7cc9e4" (UID: "13d8b431-762b-4977-b54d-04c9fd7cc9e4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:04:52 crc kubenswrapper[4958]: I1008 08:04:52.946072 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4dj8\" (UniqueName: \"kubernetes.io/projected/13d8b431-762b-4977-b54d-04c9fd7cc9e4-kube-api-access-m4dj8\") on node \"crc\" DevicePath \"\"" Oct 08 08:04:52 crc kubenswrapper[4958]: I1008 08:04:52.946126 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d8b431-762b-4977-b54d-04c9fd7cc9e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:04:52 crc kubenswrapper[4958]: I1008 08:04:52.946147 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d8b431-762b-4977-b54d-04c9fd7cc9e4-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.317820 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t8f9x" event={"ID":"13d8b431-762b-4977-b54d-04c9fd7cc9e4","Type":"ContainerDied","Data":"8b1167e73a11033d0b829a528d75a1456396615fee3e588a293610d25acbf600"} Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.318126 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b1167e73a11033d0b829a528d75a1456396615fee3e588a293610d25acbf600" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.317901 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-t8f9x" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.574753 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6849ffbbb9-5hd2h"] Oct 08 08:04:53 crc kubenswrapper[4958]: E1008 08:04:53.575142 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d8b431-762b-4977-b54d-04c9fd7cc9e4" containerName="keystone-db-sync" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.575162 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d8b431-762b-4977-b54d-04c9fd7cc9e4" containerName="keystone-db-sync" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.575418 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="13d8b431-762b-4977-b54d-04c9fd7cc9e4" containerName="keystone-db-sync" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.576455 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.606848 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6849ffbbb9-5hd2h"] Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.616135 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-q4k7q"] Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.617304 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.622403 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.622599 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qrfkz" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.622752 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.626895 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.648014 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-q4k7q"] Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.660247 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-config\") pod \"dnsmasq-dns-6849ffbbb9-5hd2h\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.660306 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-dns-svc\") pod \"dnsmasq-dns-6849ffbbb9-5hd2h\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.660348 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-ovsdbserver-sb\") pod \"dnsmasq-dns-6849ffbbb9-5hd2h\" (UID: 
\"d1047855-329a-4c72-84b5-9a689472bcc6\") " pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.660391 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tcqd\" (UniqueName: \"kubernetes.io/projected/d1047855-329a-4c72-84b5-9a689472bcc6-kube-api-access-9tcqd\") pod \"dnsmasq-dns-6849ffbbb9-5hd2h\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.660418 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-ovsdbserver-nb\") pod \"dnsmasq-dns-6849ffbbb9-5hd2h\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.761869 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-combined-ca-bundle\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.761921 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tcqd\" (UniqueName: \"kubernetes.io/projected/d1047855-329a-4c72-84b5-9a689472bcc6-kube-api-access-9tcqd\") pod \"dnsmasq-dns-6849ffbbb9-5hd2h\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.761987 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6849ffbbb9-5hd2h\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.762096 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-credential-keys\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.762219 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-fernet-keys\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.762350 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzfgg\" (UniqueName: \"kubernetes.io/projected/44462e76-202c-4bbb-b1ac-4a4e50b0320d-kube-api-access-pzfgg\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.762390 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-config\") pod \"dnsmasq-dns-6849ffbbb9-5hd2h\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.762446 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-config-data\") pod 
\"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.762487 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-dns-svc\") pod \"dnsmasq-dns-6849ffbbb9-5hd2h\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.762546 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-scripts\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.762616 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-ovsdbserver-sb\") pod \"dnsmasq-dns-6849ffbbb9-5hd2h\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.762674 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-ovsdbserver-nb\") pod \"dnsmasq-dns-6849ffbbb9-5hd2h\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.763453 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-config\") pod \"dnsmasq-dns-6849ffbbb9-5hd2h\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " 
pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.763583 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-ovsdbserver-sb\") pod \"dnsmasq-dns-6849ffbbb9-5hd2h\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.763978 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-dns-svc\") pod \"dnsmasq-dns-6849ffbbb9-5hd2h\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.779494 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tcqd\" (UniqueName: \"kubernetes.io/projected/d1047855-329a-4c72-84b5-9a689472bcc6-kube-api-access-9tcqd\") pod \"dnsmasq-dns-6849ffbbb9-5hd2h\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.872964 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-config-data\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.873057 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-scripts\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.873141 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-combined-ca-bundle\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.873196 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-credential-keys\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.873243 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-fernet-keys\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.873308 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzfgg\" (UniqueName: \"kubernetes.io/projected/44462e76-202c-4bbb-b1ac-4a4e50b0320d-kube-api-access-pzfgg\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.878741 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-combined-ca-bundle\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.881442 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-credential-keys\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.886704 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-config-data\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.888539 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-scripts\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.890309 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-fernet-keys\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.897605 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzfgg\" (UniqueName: \"kubernetes.io/projected/44462e76-202c-4bbb-b1ac-4a4e50b0320d-kube-api-access-pzfgg\") pod \"keystone-bootstrap-q4k7q\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.906004 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:53 crc kubenswrapper[4958]: I1008 08:04:53.938253 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:54 crc kubenswrapper[4958]: W1008 08:04:54.377562 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1047855_329a_4c72_84b5_9a689472bcc6.slice/crio-0d69ef7b3f91be9f5d60014ab984746e785fbc09d7a5572f54ab45e52628b963 WatchSource:0}: Error finding container 0d69ef7b3f91be9f5d60014ab984746e785fbc09d7a5572f54ab45e52628b963: Status 404 returned error can't find the container with id 0d69ef7b3f91be9f5d60014ab984746e785fbc09d7a5572f54ab45e52628b963 Oct 08 08:04:54 crc kubenswrapper[4958]: I1008 08:04:54.380392 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6849ffbbb9-5hd2h"] Oct 08 08:04:54 crc kubenswrapper[4958]: I1008 08:04:54.455328 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-q4k7q"] Oct 08 08:04:54 crc kubenswrapper[4958]: W1008 08:04:54.481014 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44462e76_202c_4bbb_b1ac_4a4e50b0320d.slice/crio-4f614a1ddf635eb8a953b3cba1bfc42071e5fb927adf2bf9b49a0b4d62b434f9 WatchSource:0}: Error finding container 4f614a1ddf635eb8a953b3cba1bfc42071e5fb927adf2bf9b49a0b4d62b434f9: Status 404 returned error can't find the container with id 4f614a1ddf635eb8a953b3cba1bfc42071e5fb927adf2bf9b49a0b4d62b434f9 Oct 08 08:04:55 crc kubenswrapper[4958]: I1008 08:04:55.338087 4958 generic.go:334] "Generic (PLEG): container finished" podID="d1047855-329a-4c72-84b5-9a689472bcc6" containerID="e06e014c2d9acd4aeeb7790e4862890ec0678829fab5f3d13af4c49264109db3" exitCode=0 Oct 08 08:04:55 crc kubenswrapper[4958]: I1008 08:04:55.338162 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" 
event={"ID":"d1047855-329a-4c72-84b5-9a689472bcc6","Type":"ContainerDied","Data":"e06e014c2d9acd4aeeb7790e4862890ec0678829fab5f3d13af4c49264109db3"} Oct 08 08:04:55 crc kubenswrapper[4958]: I1008 08:04:55.339257 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" event={"ID":"d1047855-329a-4c72-84b5-9a689472bcc6","Type":"ContainerStarted","Data":"0d69ef7b3f91be9f5d60014ab984746e785fbc09d7a5572f54ab45e52628b963"} Oct 08 08:04:55 crc kubenswrapper[4958]: I1008 08:04:55.341290 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q4k7q" event={"ID":"44462e76-202c-4bbb-b1ac-4a4e50b0320d","Type":"ContainerStarted","Data":"05ec61d6b5c4627d4045dc58c3aad92012b4d497011c607bfb3612a4d2d20fb9"} Oct 08 08:04:55 crc kubenswrapper[4958]: I1008 08:04:55.341374 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q4k7q" event={"ID":"44462e76-202c-4bbb-b1ac-4a4e50b0320d","Type":"ContainerStarted","Data":"4f614a1ddf635eb8a953b3cba1bfc42071e5fb927adf2bf9b49a0b4d62b434f9"} Oct 08 08:04:55 crc kubenswrapper[4958]: I1008 08:04:55.397823 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-q4k7q" podStartSLOduration=2.397802796 podStartE2EDuration="2.397802796s" podCreationTimestamp="2025-10-08 08:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:04:55.384559007 +0000 UTC m=+5438.514251608" watchObservedRunningTime="2025-10-08 08:04:55.397802796 +0000 UTC m=+5438.527495397" Oct 08 08:04:56 crc kubenswrapper[4958]: I1008 08:04:56.353139 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" event={"ID":"d1047855-329a-4c72-84b5-9a689472bcc6","Type":"ContainerStarted","Data":"611ea312a9b989057e7b81c78f2917572f1d374e83d13f917c5b841cbc867fc9"} Oct 08 08:04:56 crc 
kubenswrapper[4958]: I1008 08:04:56.384071 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" podStartSLOduration=3.3840466 podStartE2EDuration="3.3840466s" podCreationTimestamp="2025-10-08 08:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:04:56.377099822 +0000 UTC m=+5439.506792483" watchObservedRunningTime="2025-10-08 08:04:56.3840466 +0000 UTC m=+5439.513739231" Oct 08 08:04:57 crc kubenswrapper[4958]: I1008 08:04:57.362867 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:04:58 crc kubenswrapper[4958]: I1008 08:04:58.380324 4958 generic.go:334] "Generic (PLEG): container finished" podID="44462e76-202c-4bbb-b1ac-4a4e50b0320d" containerID="05ec61d6b5c4627d4045dc58c3aad92012b4d497011c607bfb3612a4d2d20fb9" exitCode=0 Oct 08 08:04:58 crc kubenswrapper[4958]: I1008 08:04:58.380822 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q4k7q" event={"ID":"44462e76-202c-4bbb-b1ac-4a4e50b0320d","Type":"ContainerDied","Data":"05ec61d6b5c4627d4045dc58c3aad92012b4d497011c607bfb3612a4d2d20fb9"} Oct 08 08:04:59 crc kubenswrapper[4958]: I1008 08:04:59.892887 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:04:59 crc kubenswrapper[4958]: I1008 08:04:59.985359 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-credential-keys\") pod \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " Oct 08 08:04:59 crc kubenswrapper[4958]: I1008 08:04:59.985536 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzfgg\" (UniqueName: \"kubernetes.io/projected/44462e76-202c-4bbb-b1ac-4a4e50b0320d-kube-api-access-pzfgg\") pod \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " Oct 08 08:04:59 crc kubenswrapper[4958]: I1008 08:04:59.985606 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-config-data\") pod \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " Oct 08 08:04:59 crc kubenswrapper[4958]: I1008 08:04:59.985690 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-fernet-keys\") pod \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " Oct 08 08:04:59 crc kubenswrapper[4958]: I1008 08:04:59.985716 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-scripts\") pod \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " Oct 08 08:04:59 crc kubenswrapper[4958]: I1008 08:04:59.985759 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-combined-ca-bundle\") pod \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\" (UID: \"44462e76-202c-4bbb-b1ac-4a4e50b0320d\") " Oct 08 08:04:59 crc kubenswrapper[4958]: I1008 08:04:59.992257 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "44462e76-202c-4bbb-b1ac-4a4e50b0320d" (UID: "44462e76-202c-4bbb-b1ac-4a4e50b0320d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:04:59 crc kubenswrapper[4958]: I1008 08:04:59.993456 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-scripts" (OuterVolumeSpecName: "scripts") pod "44462e76-202c-4bbb-b1ac-4a4e50b0320d" (UID: "44462e76-202c-4bbb-b1ac-4a4e50b0320d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:04:59 crc kubenswrapper[4958]: I1008 08:04:59.994213 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44462e76-202c-4bbb-b1ac-4a4e50b0320d-kube-api-access-pzfgg" (OuterVolumeSpecName: "kube-api-access-pzfgg") pod "44462e76-202c-4bbb-b1ac-4a4e50b0320d" (UID: "44462e76-202c-4bbb-b1ac-4a4e50b0320d"). InnerVolumeSpecName "kube-api-access-pzfgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:04:59 crc kubenswrapper[4958]: I1008 08:04:59.994841 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "44462e76-202c-4bbb-b1ac-4a4e50b0320d" (UID: "44462e76-202c-4bbb-b1ac-4a4e50b0320d"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.019257 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44462e76-202c-4bbb-b1ac-4a4e50b0320d" (UID: "44462e76-202c-4bbb-b1ac-4a4e50b0320d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.020658 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-config-data" (OuterVolumeSpecName: "config-data") pod "44462e76-202c-4bbb-b1ac-4a4e50b0320d" (UID: "44462e76-202c-4bbb-b1ac-4a4e50b0320d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.087860 4958 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.087901 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.087912 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.087924 4958 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 08 08:05:00 crc 
kubenswrapper[4958]: I1008 08:05:00.087934 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzfgg\" (UniqueName: \"kubernetes.io/projected/44462e76-202c-4bbb-b1ac-4a4e50b0320d-kube-api-access-pzfgg\") on node \"crc\" DevicePath \"\"" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.087966 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44462e76-202c-4bbb-b1ac-4a4e50b0320d-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.402894 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q4k7q" event={"ID":"44462e76-202c-4bbb-b1ac-4a4e50b0320d","Type":"ContainerDied","Data":"4f614a1ddf635eb8a953b3cba1bfc42071e5fb927adf2bf9b49a0b4d62b434f9"} Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.403307 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f614a1ddf635eb8a953b3cba1bfc42071e5fb927adf2bf9b49a0b4d62b434f9" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.403045 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q4k7q" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.623503 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-q4k7q"] Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.641171 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-q4k7q"] Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.685680 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-df68z"] Oct 08 08:05:00 crc kubenswrapper[4958]: E1008 08:05:00.686033 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44462e76-202c-4bbb-b1ac-4a4e50b0320d" containerName="keystone-bootstrap" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.686046 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="44462e76-202c-4bbb-b1ac-4a4e50b0320d" containerName="keystone-bootstrap" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.686191 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="44462e76-202c-4bbb-b1ac-4a4e50b0320d" containerName="keystone-bootstrap" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.686717 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.688564 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qrfkz" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.690596 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.691102 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.691358 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.705678 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-df68z"] Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.801097 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnh6h\" (UniqueName: \"kubernetes.io/projected/22902302-0a41-43b7-8f38-35521559ce16-kube-api-access-vnh6h\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.801218 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-config-data\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.801247 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-combined-ca-bundle\") pod 
\"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.801290 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-scripts\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.801315 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-fernet-keys\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.801350 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-credential-keys\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.903834 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-config-data\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.903987 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-combined-ca-bundle\") pod \"keystone-bootstrap-df68z\" (UID: 
\"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.904247 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-scripts\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.904325 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-fernet-keys\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.904433 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-credential-keys\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.904689 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnh6h\" (UniqueName: \"kubernetes.io/projected/22902302-0a41-43b7-8f38-35521559ce16-kube-api-access-vnh6h\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.909096 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-combined-ca-bundle\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc 
kubenswrapper[4958]: I1008 08:05:00.909187 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-config-data\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.909329 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-scripts\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.910459 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-credential-keys\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.911203 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-fernet-keys\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:00 crc kubenswrapper[4958]: I1008 08:05:00.920583 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnh6h\" (UniqueName: \"kubernetes.io/projected/22902302-0a41-43b7-8f38-35521559ce16-kube-api-access-vnh6h\") pod \"keystone-bootstrap-df68z\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:01 crc kubenswrapper[4958]: I1008 08:05:01.006424 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:01 crc kubenswrapper[4958]: I1008 08:05:01.515651 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-df68z"] Oct 08 08:05:01 crc kubenswrapper[4958]: I1008 08:05:01.598670 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44462e76-202c-4bbb-b1ac-4a4e50b0320d" path="/var/lib/kubelet/pods/44462e76-202c-4bbb-b1ac-4a4e50b0320d/volumes" Oct 08 08:05:02 crc kubenswrapper[4958]: I1008 08:05:02.425422 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-df68z" event={"ID":"22902302-0a41-43b7-8f38-35521559ce16","Type":"ContainerStarted","Data":"2c72a30d62d0f67eef6fd66117a4f084650cc519df3e0bfe6116cb7f8804b880"} Oct 08 08:05:02 crc kubenswrapper[4958]: I1008 08:05:02.425922 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-df68z" event={"ID":"22902302-0a41-43b7-8f38-35521559ce16","Type":"ContainerStarted","Data":"645b8b59c1cde7ae514bbf9807ae4b28e0af9a00c41e9bc6d7458f091ffaa37d"} Oct 08 08:05:02 crc kubenswrapper[4958]: I1008 08:05:02.455184 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-df68z" podStartSLOduration=2.455153308 podStartE2EDuration="2.455153308s" podCreationTimestamp="2025-10-08 08:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:05:02.450762169 +0000 UTC m=+5445.580454810" watchObservedRunningTime="2025-10-08 08:05:02.455153308 +0000 UTC m=+5445.584845949" Oct 08 08:05:03 crc kubenswrapper[4958]: I1008 08:05:03.907306 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:05:03 crc kubenswrapper[4958]: I1008 08:05:03.982754 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5c5899747f-dbwz6"] Oct 08 08:05:03 crc kubenswrapper[4958]: I1008 08:05:03.983136 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c5899747f-dbwz6" podUID="b6cb0c01-55d8-45da-ab84-4a423c424874" containerName="dnsmasq-dns" containerID="cri-o://c3c3db933ef819abbaebb9c0c4558a1523aa3b3997a9744605d3ef838e45067d" gracePeriod=10 Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.443629 4958 generic.go:334] "Generic (PLEG): container finished" podID="22902302-0a41-43b7-8f38-35521559ce16" containerID="2c72a30d62d0f67eef6fd66117a4f084650cc519df3e0bfe6116cb7f8804b880" exitCode=0 Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.443717 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-df68z" event={"ID":"22902302-0a41-43b7-8f38-35521559ce16","Type":"ContainerDied","Data":"2c72a30d62d0f67eef6fd66117a4f084650cc519df3e0bfe6116cb7f8804b880"} Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.446503 4958 generic.go:334] "Generic (PLEG): container finished" podID="b6cb0c01-55d8-45da-ab84-4a423c424874" containerID="c3c3db933ef819abbaebb9c0c4558a1523aa3b3997a9744605d3ef838e45067d" exitCode=0 Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.446532 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5899747f-dbwz6" event={"ID":"b6cb0c01-55d8-45da-ab84-4a423c424874","Type":"ContainerDied","Data":"c3c3db933ef819abbaebb9c0c4558a1523aa3b3997a9744605d3ef838e45067d"} Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.446550 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5899747f-dbwz6" event={"ID":"b6cb0c01-55d8-45da-ab84-4a423c424874","Type":"ContainerDied","Data":"34d8eb5c149beb22156f7d42cae98dbb987d576ea590e435473b1ca67e46e7ea"} Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.446561 4958 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="34d8eb5c149beb22156f7d42cae98dbb987d576ea590e435473b1ca67e46e7ea" Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.525537 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5899747f-dbwz6" Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.578340 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:05:04 crc kubenswrapper[4958]: E1008 08:05:04.578568 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.706621 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-ovsdbserver-nb\") pod \"b6cb0c01-55d8-45da-ab84-4a423c424874\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.706687 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfr6w\" (UniqueName: \"kubernetes.io/projected/b6cb0c01-55d8-45da-ab84-4a423c424874-kube-api-access-lfr6w\") pod \"b6cb0c01-55d8-45da-ab84-4a423c424874\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.706772 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-dns-svc\") pod \"b6cb0c01-55d8-45da-ab84-4a423c424874\" (UID: 
\"b6cb0c01-55d8-45da-ab84-4a423c424874\") " Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.706856 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-ovsdbserver-sb\") pod \"b6cb0c01-55d8-45da-ab84-4a423c424874\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.706917 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-config\") pod \"b6cb0c01-55d8-45da-ab84-4a423c424874\" (UID: \"b6cb0c01-55d8-45da-ab84-4a423c424874\") " Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.713741 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cb0c01-55d8-45da-ab84-4a423c424874-kube-api-access-lfr6w" (OuterVolumeSpecName: "kube-api-access-lfr6w") pod "b6cb0c01-55d8-45da-ab84-4a423c424874" (UID: "b6cb0c01-55d8-45da-ab84-4a423c424874"). InnerVolumeSpecName "kube-api-access-lfr6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.762246 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b6cb0c01-55d8-45da-ab84-4a423c424874" (UID: "b6cb0c01-55d8-45da-ab84-4a423c424874"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.770271 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b6cb0c01-55d8-45da-ab84-4a423c424874" (UID: "b6cb0c01-55d8-45da-ab84-4a423c424874"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.772682 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b6cb0c01-55d8-45da-ab84-4a423c424874" (UID: "b6cb0c01-55d8-45da-ab84-4a423c424874"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.780204 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-config" (OuterVolumeSpecName: "config") pod "b6cb0c01-55d8-45da-ab84-4a423c424874" (UID: "b6cb0c01-55d8-45da-ab84-4a423c424874"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.811486 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.811529 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfr6w\" (UniqueName: \"kubernetes.io/projected/b6cb0c01-55d8-45da-ab84-4a423c424874-kube-api-access-lfr6w\") on node \"crc\" DevicePath \"\"" Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.811546 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.811561 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-ovsdbserver-sb\") on node 
\"crc\" DevicePath \"\"" Oct 08 08:05:04 crc kubenswrapper[4958]: I1008 08:05:04.811572 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cb0c01-55d8-45da-ab84-4a423c424874-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:05:05 crc kubenswrapper[4958]: I1008 08:05:05.457617 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5899747f-dbwz6" Oct 08 08:05:05 crc kubenswrapper[4958]: I1008 08:05:05.518885 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5899747f-dbwz6"] Oct 08 08:05:05 crc kubenswrapper[4958]: I1008 08:05:05.529181 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5899747f-dbwz6"] Oct 08 08:05:05 crc kubenswrapper[4958]: I1008 08:05:05.592653 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cb0c01-55d8-45da-ab84-4a423c424874" path="/var/lib/kubelet/pods/b6cb0c01-55d8-45da-ab84-4a423c424874/volumes" Oct 08 08:05:05 crc kubenswrapper[4958]: I1008 08:05:05.978593 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.139327 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-config-data\") pod \"22902302-0a41-43b7-8f38-35521559ce16\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.139406 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-scripts\") pod \"22902302-0a41-43b7-8f38-35521559ce16\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.139462 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnh6h\" (UniqueName: \"kubernetes.io/projected/22902302-0a41-43b7-8f38-35521559ce16-kube-api-access-vnh6h\") pod \"22902302-0a41-43b7-8f38-35521559ce16\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.139513 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-combined-ca-bundle\") pod \"22902302-0a41-43b7-8f38-35521559ce16\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.139562 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-credential-keys\") pod \"22902302-0a41-43b7-8f38-35521559ce16\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.139619 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-fernet-keys\") pod \"22902302-0a41-43b7-8f38-35521559ce16\" (UID: \"22902302-0a41-43b7-8f38-35521559ce16\") " Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.143509 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-scripts" (OuterVolumeSpecName: "scripts") pod "22902302-0a41-43b7-8f38-35521559ce16" (UID: "22902302-0a41-43b7-8f38-35521559ce16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.144569 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "22902302-0a41-43b7-8f38-35521559ce16" (UID: "22902302-0a41-43b7-8f38-35521559ce16"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.146022 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "22902302-0a41-43b7-8f38-35521559ce16" (UID: "22902302-0a41-43b7-8f38-35521559ce16"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.146561 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22902302-0a41-43b7-8f38-35521559ce16-kube-api-access-vnh6h" (OuterVolumeSpecName: "kube-api-access-vnh6h") pod "22902302-0a41-43b7-8f38-35521559ce16" (UID: "22902302-0a41-43b7-8f38-35521559ce16"). InnerVolumeSpecName "kube-api-access-vnh6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.175194 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-config-data" (OuterVolumeSpecName: "config-data") pod "22902302-0a41-43b7-8f38-35521559ce16" (UID: "22902302-0a41-43b7-8f38-35521559ce16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.177512 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22902302-0a41-43b7-8f38-35521559ce16" (UID: "22902302-0a41-43b7-8f38-35521559ce16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.242133 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.242172 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnh6h\" (UniqueName: \"kubernetes.io/projected/22902302-0a41-43b7-8f38-35521559ce16-kube-api-access-vnh6h\") on node \"crc\" DevicePath \"\"" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.242182 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.242191 4958 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-credential-keys\") on node \"crc\" DevicePath 
\"\"" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.242199 4958 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.242207 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22902302-0a41-43b7-8f38-35521559ce16-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.475208 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-df68z" event={"ID":"22902302-0a41-43b7-8f38-35521559ce16","Type":"ContainerDied","Data":"645b8b59c1cde7ae514bbf9807ae4b28e0af9a00c41e9bc6d7458f091ffaa37d"} Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.475288 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="645b8b59c1cde7ae514bbf9807ae4b28e0af9a00c41e9bc6d7458f091ffaa37d" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.475440 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-df68z" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.585147 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-64cb5f589-zm624"] Oct 08 08:05:06 crc kubenswrapper[4958]: E1008 08:05:06.585508 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cb0c01-55d8-45da-ab84-4a423c424874" containerName="dnsmasq-dns" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.585527 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cb0c01-55d8-45da-ab84-4a423c424874" containerName="dnsmasq-dns" Oct 08 08:05:06 crc kubenswrapper[4958]: E1008 08:05:06.585560 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cb0c01-55d8-45da-ab84-4a423c424874" containerName="init" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.585568 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cb0c01-55d8-45da-ab84-4a423c424874" containerName="init" Oct 08 08:05:06 crc kubenswrapper[4958]: E1008 08:05:06.585578 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22902302-0a41-43b7-8f38-35521559ce16" containerName="keystone-bootstrap" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.585585 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="22902302-0a41-43b7-8f38-35521559ce16" containerName="keystone-bootstrap" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.585747 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="22902302-0a41-43b7-8f38-35521559ce16" containerName="keystone-bootstrap" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.585757 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cb0c01-55d8-45da-ab84-4a423c424874" containerName="dnsmasq-dns" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.586340 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.589479 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.589688 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.589913 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.590391 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.590623 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qrfkz" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.590860 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.622485 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-64cb5f589-zm624"] Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.750557 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-internal-tls-certs\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.750614 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-fernet-keys\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " 
pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.750865 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-scripts\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.751044 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5p5q\" (UniqueName: \"kubernetes.io/projected/8e658785-9f48-4371-bbb5-2122dc1bebb3-kube-api-access-p5p5q\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.751089 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-credential-keys\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.751151 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-combined-ca-bundle\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.751307 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-public-tls-certs\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") 
" pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.751594 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-config-data\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.852829 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-public-tls-certs\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.852895 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-config-data\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.852964 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-internal-tls-certs\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.853184 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-fernet-keys\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 
08:05:06.853341 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-scripts\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.853451 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5p5q\" (UniqueName: \"kubernetes.io/projected/8e658785-9f48-4371-bbb5-2122dc1bebb3-kube-api-access-p5p5q\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.853469 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-credential-keys\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.853527 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-combined-ca-bundle\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.859465 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-internal-tls-certs\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.860131 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-credential-keys\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.860494 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-fernet-keys\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.862567 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-config-data\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.864876 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-scripts\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.866479 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-combined-ca-bundle\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.867405 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e658785-9f48-4371-bbb5-2122dc1bebb3-public-tls-certs\") pod \"keystone-64cb5f589-zm624\" (UID: 
\"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.878582 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5p5q\" (UniqueName: \"kubernetes.io/projected/8e658785-9f48-4371-bbb5-2122dc1bebb3-kube-api-access-p5p5q\") pod \"keystone-64cb5f589-zm624\" (UID: \"8e658785-9f48-4371-bbb5-2122dc1bebb3\") " pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:06 crc kubenswrapper[4958]: I1008 08:05:06.901838 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:07 crc kubenswrapper[4958]: I1008 08:05:07.444466 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-64cb5f589-zm624"] Oct 08 08:05:07 crc kubenswrapper[4958]: I1008 08:05:07.486724 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-64cb5f589-zm624" event={"ID":"8e658785-9f48-4371-bbb5-2122dc1bebb3","Type":"ContainerStarted","Data":"ad2a95f1bae98f4fef23da085a65a600d10f5d066d197db46b96f84cad61ff83"} Oct 08 08:05:08 crc kubenswrapper[4958]: I1008 08:05:08.502161 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-64cb5f589-zm624" event={"ID":"8e658785-9f48-4371-bbb5-2122dc1bebb3","Type":"ContainerStarted","Data":"3ed75287104af311703911cbb91d9f9ce776690fbc5a46aec5b99457747f55be"} Oct 08 08:05:08 crc kubenswrapper[4958]: I1008 08:05:08.503038 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:08 crc kubenswrapper[4958]: I1008 08:05:08.541710 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-64cb5f589-zm624" podStartSLOduration=2.5416816129999997 podStartE2EDuration="2.541681613s" podCreationTimestamp="2025-10-08 08:05:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:05:08.526784889 +0000 UTC m=+5451.656477560" watchObservedRunningTime="2025-10-08 08:05:08.541681613 +0000 UTC m=+5451.671374254" Oct 08 08:05:19 crc kubenswrapper[4958]: I1008 08:05:19.588228 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:05:19 crc kubenswrapper[4958]: E1008 08:05:19.589477 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:05:33 crc kubenswrapper[4958]: I1008 08:05:33.577129 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:05:33 crc kubenswrapper[4958]: E1008 08:05:33.578322 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:05:38 crc kubenswrapper[4958]: I1008 08:05:38.391678 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-64cb5f589-zm624" Oct 08 08:05:38 crc kubenswrapper[4958]: I1008 08:05:38.507466 4958 scope.go:117] "RemoveContainer" containerID="13a1aa8c3cfa66b0d9456e96ce134885d148cb1268fecc0c65bc1a615fbe9f31" Oct 08 08:05:38 crc kubenswrapper[4958]: I1008 08:05:38.532987 4958 scope.go:117] "RemoveContainer" 
containerID="ed1b1ae0336a53d188e6777f1d14e0a1195bc42d0b29c87c412fa5db2167f079" Oct 08 08:05:38 crc kubenswrapper[4958]: I1008 08:05:38.574655 4958 scope.go:117] "RemoveContainer" containerID="7be82dc085cd120f7e7bdfaaf5faddc5f6901601634936958b99c7b31a8b0a56" Oct 08 08:05:38 crc kubenswrapper[4958]: I1008 08:05:38.636781 4958 scope.go:117] "RemoveContainer" containerID="ae411008c9b620916f8be4e004740b8186c0bad7e4e00784b032402cda9e4de5" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.393774 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.395504 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.398678 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-5g2j8" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.399240 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.399597 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.405995 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.506191 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0035371c-8689-4d83-9b95-5869915a2b4f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0035371c-8689-4d83-9b95-5869915a2b4f\") " pod="openstack/openstackclient" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.506271 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-m8rb6\" (UniqueName: \"kubernetes.io/projected/0035371c-8689-4d83-9b95-5869915a2b4f-kube-api-access-m8rb6\") pod \"openstackclient\" (UID: \"0035371c-8689-4d83-9b95-5869915a2b4f\") " pod="openstack/openstackclient" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.506646 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0035371c-8689-4d83-9b95-5869915a2b4f-openstack-config-secret\") pod \"openstackclient\" (UID: \"0035371c-8689-4d83-9b95-5869915a2b4f\") " pod="openstack/openstackclient" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.506780 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0035371c-8689-4d83-9b95-5869915a2b4f-openstack-config\") pod \"openstackclient\" (UID: \"0035371c-8689-4d83-9b95-5869915a2b4f\") " pod="openstack/openstackclient" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.608421 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0035371c-8689-4d83-9b95-5869915a2b4f-openstack-config-secret\") pod \"openstackclient\" (UID: \"0035371c-8689-4d83-9b95-5869915a2b4f\") " pod="openstack/openstackclient" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.608483 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0035371c-8689-4d83-9b95-5869915a2b4f-openstack-config\") pod \"openstackclient\" (UID: \"0035371c-8689-4d83-9b95-5869915a2b4f\") " pod="openstack/openstackclient" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.608538 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0035371c-8689-4d83-9b95-5869915a2b4f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0035371c-8689-4d83-9b95-5869915a2b4f\") " pod="openstack/openstackclient" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.608588 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8rb6\" (UniqueName: \"kubernetes.io/projected/0035371c-8689-4d83-9b95-5869915a2b4f-kube-api-access-m8rb6\") pod \"openstackclient\" (UID: \"0035371c-8689-4d83-9b95-5869915a2b4f\") " pod="openstack/openstackclient" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.609391 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0035371c-8689-4d83-9b95-5869915a2b4f-openstack-config\") pod \"openstackclient\" (UID: \"0035371c-8689-4d83-9b95-5869915a2b4f\") " pod="openstack/openstackclient" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.614032 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0035371c-8689-4d83-9b95-5869915a2b4f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0035371c-8689-4d83-9b95-5869915a2b4f\") " pod="openstack/openstackclient" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.615389 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0035371c-8689-4d83-9b95-5869915a2b4f-openstack-config-secret\") pod \"openstackclient\" (UID: \"0035371c-8689-4d83-9b95-5869915a2b4f\") " pod="openstack/openstackclient" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.625400 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8rb6\" (UniqueName: \"kubernetes.io/projected/0035371c-8689-4d83-9b95-5869915a2b4f-kube-api-access-m8rb6\") pod \"openstackclient\" (UID: \"0035371c-8689-4d83-9b95-5869915a2b4f\") " 
pod="openstack/openstackclient" Oct 08 08:05:43 crc kubenswrapper[4958]: I1008 08:05:43.733652 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 08 08:05:44 crc kubenswrapper[4958]: I1008 08:05:44.216905 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 08 08:05:44 crc kubenswrapper[4958]: I1008 08:05:44.923347 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0035371c-8689-4d83-9b95-5869915a2b4f","Type":"ContainerStarted","Data":"c9f69f8a58c1425b7d5e8c4d4f3dbab2ca368ae53a13ac0e68a9204904f95eb7"} Oct 08 08:05:44 crc kubenswrapper[4958]: I1008 08:05:44.923905 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0035371c-8689-4d83-9b95-5869915a2b4f","Type":"ContainerStarted","Data":"3bd99e07aab8246a28ee1f0c4ea996176ed63052b0b7eb867c568d101c1eb9c2"} Oct 08 08:05:44 crc kubenswrapper[4958]: I1008 08:05:44.957383 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.957356834 podStartE2EDuration="1.957356834s" podCreationTimestamp="2025-10-08 08:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:05:44.949515972 +0000 UTC m=+5488.079208573" watchObservedRunningTime="2025-10-08 08:05:44.957356834 +0000 UTC m=+5488.087049445" Oct 08 08:05:47 crc kubenswrapper[4958]: I1008 08:05:47.587714 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:05:47 crc kubenswrapper[4958]: E1008 08:05:47.589537 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:06:00 crc kubenswrapper[4958]: I1008 08:06:00.576928 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:06:00 crc kubenswrapper[4958]: E1008 08:06:00.578097 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:06:14 crc kubenswrapper[4958]: I1008 08:06:14.576373 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:06:14 crc kubenswrapper[4958]: E1008 08:06:14.577531 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:06:26 crc kubenswrapper[4958]: I1008 08:06:26.577183 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:06:26 crc kubenswrapper[4958]: E1008 08:06:26.585987 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:06:37 crc kubenswrapper[4958]: I1008 08:06:37.589485 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:06:37 crc kubenswrapper[4958]: E1008 08:06:37.591147 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:06:38 crc kubenswrapper[4958]: I1008 08:06:38.721858 4958 scope.go:117] "RemoveContainer" containerID="83229e70edfbf194a7981ed7ecdfe2a130329b99136e358fe5f7cfa88801e99a" Oct 08 08:06:51 crc kubenswrapper[4958]: I1008 08:06:51.577358 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:06:51 crc kubenswrapper[4958]: E1008 08:06:51.578054 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:07:02 crc kubenswrapper[4958]: I1008 08:07:02.577264 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:07:02 crc kubenswrapper[4958]: 
E1008 08:07:02.578259 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:07:15 crc kubenswrapper[4958]: I1008 08:07:15.576924 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:07:15 crc kubenswrapper[4958]: E1008 08:07:15.578282 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:07:19 crc kubenswrapper[4958]: I1008 08:07:19.905055 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-wl4s2"] Oct 08 08:07:19 crc kubenswrapper[4958]: I1008 08:07:19.906607 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-wl4s2" Oct 08 08:07:19 crc kubenswrapper[4958]: I1008 08:07:19.919539 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-wl4s2"] Oct 08 08:07:20 crc kubenswrapper[4958]: I1008 08:07:20.037107 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqhzq\" (UniqueName: \"kubernetes.io/projected/2c378af2-e833-4ade-a1c8-571b544a4ef2-kube-api-access-gqhzq\") pod \"barbican-db-create-wl4s2\" (UID: \"2c378af2-e833-4ade-a1c8-571b544a4ef2\") " pod="openstack/barbican-db-create-wl4s2" Oct 08 08:07:20 crc kubenswrapper[4958]: I1008 08:07:20.139034 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqhzq\" (UniqueName: \"kubernetes.io/projected/2c378af2-e833-4ade-a1c8-571b544a4ef2-kube-api-access-gqhzq\") pod \"barbican-db-create-wl4s2\" (UID: \"2c378af2-e833-4ade-a1c8-571b544a4ef2\") " pod="openstack/barbican-db-create-wl4s2" Oct 08 08:07:20 crc kubenswrapper[4958]: I1008 08:07:20.167970 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqhzq\" (UniqueName: \"kubernetes.io/projected/2c378af2-e833-4ade-a1c8-571b544a4ef2-kube-api-access-gqhzq\") pod \"barbican-db-create-wl4s2\" (UID: \"2c378af2-e833-4ade-a1c8-571b544a4ef2\") " pod="openstack/barbican-db-create-wl4s2" Oct 08 08:07:20 crc kubenswrapper[4958]: I1008 08:07:20.246447 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-wl4s2" Oct 08 08:07:20 crc kubenswrapper[4958]: I1008 08:07:20.694180 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-wl4s2"] Oct 08 08:07:20 crc kubenswrapper[4958]: W1008 08:07:20.706815 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c378af2_e833_4ade_a1c8_571b544a4ef2.slice/crio-fcff42908d1874ab4635502dca1d50151342694ac69f29f64c1740e2c7af2c51 WatchSource:0}: Error finding container fcff42908d1874ab4635502dca1d50151342694ac69f29f64c1740e2c7af2c51: Status 404 returned error can't find the container with id fcff42908d1874ab4635502dca1d50151342694ac69f29f64c1740e2c7af2c51 Oct 08 08:07:20 crc kubenswrapper[4958]: I1008 08:07:20.992320 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wl4s2" event={"ID":"2c378af2-e833-4ade-a1c8-571b544a4ef2","Type":"ContainerStarted","Data":"1b902b7874c3e6f551ba898c38c294c446cc7bd5e04191a5fc50d9182e183e74"} Oct 08 08:07:20 crc kubenswrapper[4958]: I1008 08:07:20.992411 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wl4s2" event={"ID":"2c378af2-e833-4ade-a1c8-571b544a4ef2","Type":"ContainerStarted","Data":"fcff42908d1874ab4635502dca1d50151342694ac69f29f64c1740e2c7af2c51"} Oct 08 08:07:21 crc kubenswrapper[4958]: I1008 08:07:21.023837 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-wl4s2" podStartSLOduration=2.02360691 podStartE2EDuration="2.02360691s" podCreationTimestamp="2025-10-08 08:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:07:21.013085275 +0000 UTC m=+5584.142777886" watchObservedRunningTime="2025-10-08 08:07:21.02360691 +0000 UTC m=+5584.153299551" Oct 08 08:07:22 crc kubenswrapper[4958]: I1008 
08:07:22.006500 4958 generic.go:334] "Generic (PLEG): container finished" podID="2c378af2-e833-4ade-a1c8-571b544a4ef2" containerID="1b902b7874c3e6f551ba898c38c294c446cc7bd5e04191a5fc50d9182e183e74" exitCode=0 Oct 08 08:07:22 crc kubenswrapper[4958]: I1008 08:07:22.006580 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wl4s2" event={"ID":"2c378af2-e833-4ade-a1c8-571b544a4ef2","Type":"ContainerDied","Data":"1b902b7874c3e6f551ba898c38c294c446cc7bd5e04191a5fc50d9182e183e74"} Oct 08 08:07:23 crc kubenswrapper[4958]: I1008 08:07:23.378438 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-wl4s2" Oct 08 08:07:23 crc kubenswrapper[4958]: I1008 08:07:23.506570 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqhzq\" (UniqueName: \"kubernetes.io/projected/2c378af2-e833-4ade-a1c8-571b544a4ef2-kube-api-access-gqhzq\") pod \"2c378af2-e833-4ade-a1c8-571b544a4ef2\" (UID: \"2c378af2-e833-4ade-a1c8-571b544a4ef2\") " Oct 08 08:07:23 crc kubenswrapper[4958]: I1008 08:07:23.516280 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c378af2-e833-4ade-a1c8-571b544a4ef2-kube-api-access-gqhzq" (OuterVolumeSpecName: "kube-api-access-gqhzq") pod "2c378af2-e833-4ade-a1c8-571b544a4ef2" (UID: "2c378af2-e833-4ade-a1c8-571b544a4ef2"). InnerVolumeSpecName "kube-api-access-gqhzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:07:23 crc kubenswrapper[4958]: I1008 08:07:23.609108 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqhzq\" (UniqueName: \"kubernetes.io/projected/2c378af2-e833-4ade-a1c8-571b544a4ef2-kube-api-access-gqhzq\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:24 crc kubenswrapper[4958]: I1008 08:07:24.021398 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wl4s2" event={"ID":"2c378af2-e833-4ade-a1c8-571b544a4ef2","Type":"ContainerDied","Data":"fcff42908d1874ab4635502dca1d50151342694ac69f29f64c1740e2c7af2c51"} Oct 08 08:07:24 crc kubenswrapper[4958]: I1008 08:07:24.021436 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcff42908d1874ab4635502dca1d50151342694ac69f29f64c1740e2c7af2c51" Oct 08 08:07:24 crc kubenswrapper[4958]: I1008 08:07:24.021448 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-wl4s2" Oct 08 08:07:27 crc kubenswrapper[4958]: I1008 08:07:27.591359 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:07:27 crc kubenswrapper[4958]: E1008 08:07:27.591989 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:07:29 crc kubenswrapper[4958]: I1008 08:07:29.926129 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-52fd-account-create-h7bcj"] Oct 08 08:07:29 crc kubenswrapper[4958]: E1008 08:07:29.927230 4958 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2c378af2-e833-4ade-a1c8-571b544a4ef2" containerName="mariadb-database-create" Oct 08 08:07:29 crc kubenswrapper[4958]: I1008 08:07:29.927260 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c378af2-e833-4ade-a1c8-571b544a4ef2" containerName="mariadb-database-create" Oct 08 08:07:29 crc kubenswrapper[4958]: I1008 08:07:29.927621 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c378af2-e833-4ade-a1c8-571b544a4ef2" containerName="mariadb-database-create" Oct 08 08:07:29 crc kubenswrapper[4958]: I1008 08:07:29.928660 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-52fd-account-create-h7bcj" Oct 08 08:07:29 crc kubenswrapper[4958]: I1008 08:07:29.932124 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 08 08:07:29 crc kubenswrapper[4958]: I1008 08:07:29.932640 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-52fd-account-create-h7bcj"] Oct 08 08:07:30 crc kubenswrapper[4958]: I1008 08:07:30.033983 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrx7z\" (UniqueName: \"kubernetes.io/projected/486d7734-8fa0-4291-9ceb-c70f95467f5e-kube-api-access-jrx7z\") pod \"barbican-52fd-account-create-h7bcj\" (UID: \"486d7734-8fa0-4291-9ceb-c70f95467f5e\") " pod="openstack/barbican-52fd-account-create-h7bcj" Oct 08 08:07:30 crc kubenswrapper[4958]: I1008 08:07:30.143716 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrx7z\" (UniqueName: \"kubernetes.io/projected/486d7734-8fa0-4291-9ceb-c70f95467f5e-kube-api-access-jrx7z\") pod \"barbican-52fd-account-create-h7bcj\" (UID: \"486d7734-8fa0-4291-9ceb-c70f95467f5e\") " pod="openstack/barbican-52fd-account-create-h7bcj" Oct 08 08:07:30 crc kubenswrapper[4958]: I1008 08:07:30.174841 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jrx7z\" (UniqueName: \"kubernetes.io/projected/486d7734-8fa0-4291-9ceb-c70f95467f5e-kube-api-access-jrx7z\") pod \"barbican-52fd-account-create-h7bcj\" (UID: \"486d7734-8fa0-4291-9ceb-c70f95467f5e\") " pod="openstack/barbican-52fd-account-create-h7bcj" Oct 08 08:07:30 crc kubenswrapper[4958]: I1008 08:07:30.259737 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-52fd-account-create-h7bcj" Oct 08 08:07:30 crc kubenswrapper[4958]: I1008 08:07:30.576547 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-52fd-account-create-h7bcj"] Oct 08 08:07:31 crc kubenswrapper[4958]: I1008 08:07:31.100911 4958 generic.go:334] "Generic (PLEG): container finished" podID="486d7734-8fa0-4291-9ceb-c70f95467f5e" containerID="d7ee794a1cf80d22f4614ea9d3dab28733e1c80299641c3ed549f02039691c15" exitCode=0 Oct 08 08:07:31 crc kubenswrapper[4958]: I1008 08:07:31.101163 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-52fd-account-create-h7bcj" event={"ID":"486d7734-8fa0-4291-9ceb-c70f95467f5e","Type":"ContainerDied","Data":"d7ee794a1cf80d22f4614ea9d3dab28733e1c80299641c3ed549f02039691c15"} Oct 08 08:07:31 crc kubenswrapper[4958]: I1008 08:07:31.101280 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-52fd-account-create-h7bcj" event={"ID":"486d7734-8fa0-4291-9ceb-c70f95467f5e","Type":"ContainerStarted","Data":"7d110779539358f7a9c4adf5caa2d25063c380f23c321a3e4f403cecbf08fd6b"} Oct 08 08:07:32 crc kubenswrapper[4958]: I1008 08:07:32.536940 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-52fd-account-create-h7bcj" Oct 08 08:07:32 crc kubenswrapper[4958]: I1008 08:07:32.690991 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrx7z\" (UniqueName: \"kubernetes.io/projected/486d7734-8fa0-4291-9ceb-c70f95467f5e-kube-api-access-jrx7z\") pod \"486d7734-8fa0-4291-9ceb-c70f95467f5e\" (UID: \"486d7734-8fa0-4291-9ceb-c70f95467f5e\") " Oct 08 08:07:32 crc kubenswrapper[4958]: I1008 08:07:32.699882 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486d7734-8fa0-4291-9ceb-c70f95467f5e-kube-api-access-jrx7z" (OuterVolumeSpecName: "kube-api-access-jrx7z") pod "486d7734-8fa0-4291-9ceb-c70f95467f5e" (UID: "486d7734-8fa0-4291-9ceb-c70f95467f5e"). InnerVolumeSpecName "kube-api-access-jrx7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:07:32 crc kubenswrapper[4958]: I1008 08:07:32.792926 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrx7z\" (UniqueName: \"kubernetes.io/projected/486d7734-8fa0-4291-9ceb-c70f95467f5e-kube-api-access-jrx7z\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:33 crc kubenswrapper[4958]: I1008 08:07:33.118466 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-52fd-account-create-h7bcj" event={"ID":"486d7734-8fa0-4291-9ceb-c70f95467f5e","Type":"ContainerDied","Data":"7d110779539358f7a9c4adf5caa2d25063c380f23c321a3e4f403cecbf08fd6b"} Oct 08 08:07:33 crc kubenswrapper[4958]: I1008 08:07:33.118791 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d110779539358f7a9c4adf5caa2d25063c380f23c321a3e4f403cecbf08fd6b" Oct 08 08:07:33 crc kubenswrapper[4958]: I1008 08:07:33.118488 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-52fd-account-create-h7bcj" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.018888 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w26mf"] Oct 08 08:07:35 crc kubenswrapper[4958]: E1008 08:07:35.019501 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486d7734-8fa0-4291-9ceb-c70f95467f5e" containerName="mariadb-account-create" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.019522 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="486d7734-8fa0-4291-9ceb-c70f95467f5e" containerName="mariadb-account-create" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.020019 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="486d7734-8fa0-4291-9ceb-c70f95467f5e" containerName="mariadb-account-create" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.022072 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.030984 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w26mf"] Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.138320 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-utilities\") pod \"community-operators-w26mf\" (UID: \"1eafd6ec-7218-4b0e-b9c0-acbb59438d34\") " pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.138397 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-catalog-content\") pod \"community-operators-w26mf\" (UID: \"1eafd6ec-7218-4b0e-b9c0-acbb59438d34\") " 
pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.138803 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb5dg\" (UniqueName: \"kubernetes.io/projected/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-kube-api-access-wb5dg\") pod \"community-operators-w26mf\" (UID: \"1eafd6ec-7218-4b0e-b9c0-acbb59438d34\") " pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.169797 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-g6jkj"] Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.171479 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g6jkj" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.173895 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6jhws" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.174163 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.177238 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g6jkj"] Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.240904 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024fc63d-9b6c-4340-bdff-a77564c3e311-combined-ca-bundle\") pod \"barbican-db-sync-g6jkj\" (UID: \"024fc63d-9b6c-4340-bdff-a77564c3e311\") " pod="openstack/barbican-db-sync-g6jkj" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.241290 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/024fc63d-9b6c-4340-bdff-a77564c3e311-db-sync-config-data\") 
pod \"barbican-db-sync-g6jkj\" (UID: \"024fc63d-9b6c-4340-bdff-a77564c3e311\") " pod="openstack/barbican-db-sync-g6jkj" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.241361 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb5dg\" (UniqueName: \"kubernetes.io/projected/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-kube-api-access-wb5dg\") pod \"community-operators-w26mf\" (UID: \"1eafd6ec-7218-4b0e-b9c0-acbb59438d34\") " pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.241430 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjxpj\" (UniqueName: \"kubernetes.io/projected/024fc63d-9b6c-4340-bdff-a77564c3e311-kube-api-access-mjxpj\") pod \"barbican-db-sync-g6jkj\" (UID: \"024fc63d-9b6c-4340-bdff-a77564c3e311\") " pod="openstack/barbican-db-sync-g6jkj" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.241474 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-utilities\") pod \"community-operators-w26mf\" (UID: \"1eafd6ec-7218-4b0e-b9c0-acbb59438d34\") " pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.241499 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-catalog-content\") pod \"community-operators-w26mf\" (UID: \"1eafd6ec-7218-4b0e-b9c0-acbb59438d34\") " pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.242070 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-catalog-content\") pod 
\"community-operators-w26mf\" (UID: \"1eafd6ec-7218-4b0e-b9c0-acbb59438d34\") " pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.242711 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-utilities\") pod \"community-operators-w26mf\" (UID: \"1eafd6ec-7218-4b0e-b9c0-acbb59438d34\") " pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.274250 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb5dg\" (UniqueName: \"kubernetes.io/projected/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-kube-api-access-wb5dg\") pod \"community-operators-w26mf\" (UID: \"1eafd6ec-7218-4b0e-b9c0-acbb59438d34\") " pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.342517 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjxpj\" (UniqueName: \"kubernetes.io/projected/024fc63d-9b6c-4340-bdff-a77564c3e311-kube-api-access-mjxpj\") pod \"barbican-db-sync-g6jkj\" (UID: \"024fc63d-9b6c-4340-bdff-a77564c3e311\") " pod="openstack/barbican-db-sync-g6jkj" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.342630 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024fc63d-9b6c-4340-bdff-a77564c3e311-combined-ca-bundle\") pod \"barbican-db-sync-g6jkj\" (UID: \"024fc63d-9b6c-4340-bdff-a77564c3e311\") " pod="openstack/barbican-db-sync-g6jkj" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.342654 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/024fc63d-9b6c-4340-bdff-a77564c3e311-db-sync-config-data\") pod \"barbican-db-sync-g6jkj\" (UID: 
\"024fc63d-9b6c-4340-bdff-a77564c3e311\") " pod="openstack/barbican-db-sync-g6jkj" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.345743 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024fc63d-9b6c-4340-bdff-a77564c3e311-combined-ca-bundle\") pod \"barbican-db-sync-g6jkj\" (UID: \"024fc63d-9b6c-4340-bdff-a77564c3e311\") " pod="openstack/barbican-db-sync-g6jkj" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.346771 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.347342 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/024fc63d-9b6c-4340-bdff-a77564c3e311-db-sync-config-data\") pod \"barbican-db-sync-g6jkj\" (UID: \"024fc63d-9b6c-4340-bdff-a77564c3e311\") " pod="openstack/barbican-db-sync-g6jkj" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.369133 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjxpj\" (UniqueName: \"kubernetes.io/projected/024fc63d-9b6c-4340-bdff-a77564c3e311-kube-api-access-mjxpj\") pod \"barbican-db-sync-g6jkj\" (UID: \"024fc63d-9b6c-4340-bdff-a77564c3e311\") " pod="openstack/barbican-db-sync-g6jkj" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.489376 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-g6jkj" Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.848985 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w26mf"] Oct 08 08:07:35 crc kubenswrapper[4958]: I1008 08:07:35.949113 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g6jkj"] Oct 08 08:07:35 crc kubenswrapper[4958]: W1008 08:07:35.953149 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod024fc63d_9b6c_4340_bdff_a77564c3e311.slice/crio-1c0000f5a1d4f817b4962c8a5fb3c1093d033cac01b12999f081f4f3ba368779 WatchSource:0}: Error finding container 1c0000f5a1d4f817b4962c8a5fb3c1093d033cac01b12999f081f4f3ba368779: Status 404 returned error can't find the container with id 1c0000f5a1d4f817b4962c8a5fb3c1093d033cac01b12999f081f4f3ba368779 Oct 08 08:07:36 crc kubenswrapper[4958]: I1008 08:07:36.149594 4958 generic.go:334] "Generic (PLEG): container finished" podID="1eafd6ec-7218-4b0e-b9c0-acbb59438d34" containerID="7b5ce803f16425c291a7c040c76b7e673acc204fd4696332a683686bd9b8c915" exitCode=0 Oct 08 08:07:36 crc kubenswrapper[4958]: I1008 08:07:36.149716 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w26mf" event={"ID":"1eafd6ec-7218-4b0e-b9c0-acbb59438d34","Type":"ContainerDied","Data":"7b5ce803f16425c291a7c040c76b7e673acc204fd4696332a683686bd9b8c915"} Oct 08 08:07:36 crc kubenswrapper[4958]: I1008 08:07:36.151077 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w26mf" event={"ID":"1eafd6ec-7218-4b0e-b9c0-acbb59438d34","Type":"ContainerStarted","Data":"5744ac94afb9bde1135b0b59fe975b50c6c2b01edfa77e3e111d0aa55bb906ab"} Oct 08 08:07:36 crc kubenswrapper[4958]: I1008 08:07:36.153379 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6jkj" 
event={"ID":"024fc63d-9b6c-4340-bdff-a77564c3e311","Type":"ContainerStarted","Data":"39e7e9af11fbb8bf5c57151bd4afd0b942fb3d2b3a0282fea792099932cdfe7b"} Oct 08 08:07:36 crc kubenswrapper[4958]: I1008 08:07:36.153421 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6jkj" event={"ID":"024fc63d-9b6c-4340-bdff-a77564c3e311","Type":"ContainerStarted","Data":"1c0000f5a1d4f817b4962c8a5fb3c1093d033cac01b12999f081f4f3ba368779"} Oct 08 08:07:36 crc kubenswrapper[4958]: I1008 08:07:36.195430 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-g6jkj" podStartSLOduration=1.195409171 podStartE2EDuration="1.195409171s" podCreationTimestamp="2025-10-08 08:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:07:36.18727648 +0000 UTC m=+5599.316969081" watchObservedRunningTime="2025-10-08 08:07:36.195409171 +0000 UTC m=+5599.325101772" Oct 08 08:07:37 crc kubenswrapper[4958]: I1008 08:07:37.170457 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w26mf" event={"ID":"1eafd6ec-7218-4b0e-b9c0-acbb59438d34","Type":"ContainerStarted","Data":"95d83abefbfca341d81b01eafc3fb26042520a66fe782288efcc52ae9c32a64b"} Oct 08 08:07:38 crc kubenswrapper[4958]: I1008 08:07:38.182354 4958 generic.go:334] "Generic (PLEG): container finished" podID="1eafd6ec-7218-4b0e-b9c0-acbb59438d34" containerID="95d83abefbfca341d81b01eafc3fb26042520a66fe782288efcc52ae9c32a64b" exitCode=0 Oct 08 08:07:38 crc kubenswrapper[4958]: I1008 08:07:38.182492 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w26mf" event={"ID":"1eafd6ec-7218-4b0e-b9c0-acbb59438d34","Type":"ContainerDied","Data":"95d83abefbfca341d81b01eafc3fb26042520a66fe782288efcc52ae9c32a64b"} Oct 08 08:07:38 crc kubenswrapper[4958]: I1008 08:07:38.186241 4958 
generic.go:334] "Generic (PLEG): container finished" podID="024fc63d-9b6c-4340-bdff-a77564c3e311" containerID="39e7e9af11fbb8bf5c57151bd4afd0b942fb3d2b3a0282fea792099932cdfe7b" exitCode=0 Oct 08 08:07:38 crc kubenswrapper[4958]: I1008 08:07:38.186305 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6jkj" event={"ID":"024fc63d-9b6c-4340-bdff-a77564c3e311","Type":"ContainerDied","Data":"39e7e9af11fbb8bf5c57151bd4afd0b942fb3d2b3a0282fea792099932cdfe7b"} Oct 08 08:07:39 crc kubenswrapper[4958]: I1008 08:07:39.201078 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w26mf" event={"ID":"1eafd6ec-7218-4b0e-b9c0-acbb59438d34","Type":"ContainerStarted","Data":"2797d8f7e30b26ec1b1f43035e4bc681bd5e3f067797a1fcc3a0b2d5a6945e63"} Oct 08 08:07:39 crc kubenswrapper[4958]: I1008 08:07:39.235022 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w26mf" podStartSLOduration=2.703730166 podStartE2EDuration="5.235002777s" podCreationTimestamp="2025-10-08 08:07:34 +0000 UTC" firstStartedPulling="2025-10-08 08:07:36.15220916 +0000 UTC m=+5599.281901771" lastFinishedPulling="2025-10-08 08:07:38.683481741 +0000 UTC m=+5601.813174382" observedRunningTime="2025-10-08 08:07:39.233699391 +0000 UTC m=+5602.363392002" watchObservedRunningTime="2025-10-08 08:07:39.235002777 +0000 UTC m=+5602.364695388" Oct 08 08:07:39 crc kubenswrapper[4958]: I1008 08:07:39.580258 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-g6jkj" Oct 08 08:07:39 crc kubenswrapper[4958]: I1008 08:07:39.733417 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/024fc63d-9b6c-4340-bdff-a77564c3e311-db-sync-config-data\") pod \"024fc63d-9b6c-4340-bdff-a77564c3e311\" (UID: \"024fc63d-9b6c-4340-bdff-a77564c3e311\") " Oct 08 08:07:39 crc kubenswrapper[4958]: I1008 08:07:39.735318 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjxpj\" (UniqueName: \"kubernetes.io/projected/024fc63d-9b6c-4340-bdff-a77564c3e311-kube-api-access-mjxpj\") pod \"024fc63d-9b6c-4340-bdff-a77564c3e311\" (UID: \"024fc63d-9b6c-4340-bdff-a77564c3e311\") " Oct 08 08:07:39 crc kubenswrapper[4958]: I1008 08:07:39.735493 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024fc63d-9b6c-4340-bdff-a77564c3e311-combined-ca-bundle\") pod \"024fc63d-9b6c-4340-bdff-a77564c3e311\" (UID: \"024fc63d-9b6c-4340-bdff-a77564c3e311\") " Oct 08 08:07:39 crc kubenswrapper[4958]: I1008 08:07:39.741356 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/024fc63d-9b6c-4340-bdff-a77564c3e311-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "024fc63d-9b6c-4340-bdff-a77564c3e311" (UID: "024fc63d-9b6c-4340-bdff-a77564c3e311"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:07:39 crc kubenswrapper[4958]: I1008 08:07:39.741531 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/024fc63d-9b6c-4340-bdff-a77564c3e311-kube-api-access-mjxpj" (OuterVolumeSpecName: "kube-api-access-mjxpj") pod "024fc63d-9b6c-4340-bdff-a77564c3e311" (UID: "024fc63d-9b6c-4340-bdff-a77564c3e311"). 
InnerVolumeSpecName "kube-api-access-mjxpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:07:39 crc kubenswrapper[4958]: I1008 08:07:39.776649 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/024fc63d-9b6c-4340-bdff-a77564c3e311-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "024fc63d-9b6c-4340-bdff-a77564c3e311" (UID: "024fc63d-9b6c-4340-bdff-a77564c3e311"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:07:39 crc kubenswrapper[4958]: I1008 08:07:39.839105 4958 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/024fc63d-9b6c-4340-bdff-a77564c3e311-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:39 crc kubenswrapper[4958]: I1008 08:07:39.839159 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjxpj\" (UniqueName: \"kubernetes.io/projected/024fc63d-9b6c-4340-bdff-a77564c3e311-kube-api-access-mjxpj\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:39 crc kubenswrapper[4958]: I1008 08:07:39.839179 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/024fc63d-9b6c-4340-bdff-a77564c3e311-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.211283 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-g6jkj" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.211290 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6jkj" event={"ID":"024fc63d-9b6c-4340-bdff-a77564c3e311","Type":"ContainerDied","Data":"1c0000f5a1d4f817b4962c8a5fb3c1093d033cac01b12999f081f4f3ba368779"} Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.211348 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c0000f5a1d4f817b4962c8a5fb3c1093d033cac01b12999f081f4f3ba368779" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.511081 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7bf5f9d487-99dwt"] Oct 08 08:07:40 crc kubenswrapper[4958]: E1008 08:07:40.511541 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="024fc63d-9b6c-4340-bdff-a77564c3e311" containerName="barbican-db-sync" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.511558 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="024fc63d-9b6c-4340-bdff-a77564c3e311" containerName="barbican-db-sync" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.511754 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="024fc63d-9b6c-4340-bdff-a77564c3e311" containerName="barbican-db-sync" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.512856 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.519519 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.519687 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6jhws" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.519774 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.538132 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6b48b74b7d-8p6dz"] Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.546382 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.546587 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b48b74b7d-8p6dz"] Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.554346 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.567608 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7bf5f9d487-99dwt"] Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.588006 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86656fcf55-b7clc"] Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.589631 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.644423 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86656fcf55-b7clc"] Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.685352 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-dns-svc\") pod \"dnsmasq-dns-86656fcf55-b7clc\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.685395 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/267d0d72-2044-4c4e-81bd-2f555587acad-config-data-custom\") pod \"barbican-keystone-listener-6b48b74b7d-8p6dz\" (UID: \"267d0d72-2044-4c4e-81bd-2f555587acad\") " pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.685460 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b37c31c2-c6b4-462f-ba30-7431cacff0dc-logs\") pod \"barbican-worker-7bf5f9d487-99dwt\" (UID: \"b37c31c2-c6b4-462f-ba30-7431cacff0dc\") " pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.685494 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dj9k\" (UniqueName: \"kubernetes.io/projected/267d0d72-2044-4c4e-81bd-2f555587acad-kube-api-access-6dj9k\") pod \"barbican-keystone-listener-6b48b74b7d-8p6dz\" (UID: \"267d0d72-2044-4c4e-81bd-2f555587acad\") " pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.685509 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-ovsdbserver-nb\") pod \"dnsmasq-dns-86656fcf55-b7clc\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.685524 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/267d0d72-2044-4c4e-81bd-2f555587acad-config-data\") pod \"barbican-keystone-listener-6b48b74b7d-8p6dz\" (UID: \"267d0d72-2044-4c4e-81bd-2f555587acad\") " pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.685540 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z77nw\" (UniqueName: \"kubernetes.io/projected/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-kube-api-access-z77nw\") pod \"dnsmasq-dns-86656fcf55-b7clc\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.685575 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37c31c2-c6b4-462f-ba30-7431cacff0dc-combined-ca-bundle\") pod \"barbican-worker-7bf5f9d487-99dwt\" (UID: \"b37c31c2-c6b4-462f-ba30-7431cacff0dc\") " pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.685601 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37c31c2-c6b4-462f-ba30-7431cacff0dc-config-data\") pod \"barbican-worker-7bf5f9d487-99dwt\" (UID: \"b37c31c2-c6b4-462f-ba30-7431cacff0dc\") " 
pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.685621 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-config\") pod \"dnsmasq-dns-86656fcf55-b7clc\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.685645 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/267d0d72-2044-4c4e-81bd-2f555587acad-combined-ca-bundle\") pod \"barbican-keystone-listener-6b48b74b7d-8p6dz\" (UID: \"267d0d72-2044-4c4e-81bd-2f555587acad\") " pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.685667 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7t6m\" (UniqueName: \"kubernetes.io/projected/b37c31c2-c6b4-462f-ba30-7431cacff0dc-kube-api-access-b7t6m\") pod \"barbican-worker-7bf5f9d487-99dwt\" (UID: \"b37c31c2-c6b4-462f-ba30-7431cacff0dc\") " pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.685682 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-ovsdbserver-sb\") pod \"dnsmasq-dns-86656fcf55-b7clc\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.685726 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/267d0d72-2044-4c4e-81bd-2f555587acad-logs\") pod 
\"barbican-keystone-listener-6b48b74b7d-8p6dz\" (UID: \"267d0d72-2044-4c4e-81bd-2f555587acad\") " pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.685744 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b37c31c2-c6b4-462f-ba30-7431cacff0dc-config-data-custom\") pod \"barbican-worker-7bf5f9d487-99dwt\" (UID: \"b37c31c2-c6b4-462f-ba30-7431cacff0dc\") " pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.702217 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-54b75c7988-wn8qx"] Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.703903 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.706059 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.713833 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-54b75c7988-wn8qx"] Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.786666 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-dns-svc\") pod \"dnsmasq-dns-86656fcf55-b7clc\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.786711 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/267d0d72-2044-4c4e-81bd-2f555587acad-config-data-custom\") pod \"barbican-keystone-listener-6b48b74b7d-8p6dz\" (UID: \"267d0d72-2044-4c4e-81bd-2f555587acad\") " 
pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.786759 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b37c31c2-c6b4-462f-ba30-7431cacff0dc-logs\") pod \"barbican-worker-7bf5f9d487-99dwt\" (UID: \"b37c31c2-c6b4-462f-ba30-7431cacff0dc\") " pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.786788 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dj9k\" (UniqueName: \"kubernetes.io/projected/267d0d72-2044-4c4e-81bd-2f555587acad-kube-api-access-6dj9k\") pod \"barbican-keystone-listener-6b48b74b7d-8p6dz\" (UID: \"267d0d72-2044-4c4e-81bd-2f555587acad\") " pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.786803 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-ovsdbserver-nb\") pod \"dnsmasq-dns-86656fcf55-b7clc\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.786818 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/267d0d72-2044-4c4e-81bd-2f555587acad-config-data\") pod \"barbican-keystone-listener-6b48b74b7d-8p6dz\" (UID: \"267d0d72-2044-4c4e-81bd-2f555587acad\") " pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.786836 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z77nw\" (UniqueName: \"kubernetes.io/projected/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-kube-api-access-z77nw\") pod \"dnsmasq-dns-86656fcf55-b7clc\" (UID: 
\"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.786865 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37c31c2-c6b4-462f-ba30-7431cacff0dc-combined-ca-bundle\") pod \"barbican-worker-7bf5f9d487-99dwt\" (UID: \"b37c31c2-c6b4-462f-ba30-7431cacff0dc\") " pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.786884 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37c31c2-c6b4-462f-ba30-7431cacff0dc-config-data\") pod \"barbican-worker-7bf5f9d487-99dwt\" (UID: \"b37c31c2-c6b4-462f-ba30-7431cacff0dc\") " pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.786904 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-config\") pod \"dnsmasq-dns-86656fcf55-b7clc\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.786927 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/267d0d72-2044-4c4e-81bd-2f555587acad-combined-ca-bundle\") pod \"barbican-keystone-listener-6b48b74b7d-8p6dz\" (UID: \"267d0d72-2044-4c4e-81bd-2f555587acad\") " pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.786962 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7t6m\" (UniqueName: \"kubernetes.io/projected/b37c31c2-c6b4-462f-ba30-7431cacff0dc-kube-api-access-b7t6m\") pod \"barbican-worker-7bf5f9d487-99dwt\" (UID: 
\"b37c31c2-c6b4-462f-ba30-7431cacff0dc\") " pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.786979 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-ovsdbserver-sb\") pod \"dnsmasq-dns-86656fcf55-b7clc\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.787023 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/267d0d72-2044-4c4e-81bd-2f555587acad-logs\") pod \"barbican-keystone-listener-6b48b74b7d-8p6dz\" (UID: \"267d0d72-2044-4c4e-81bd-2f555587acad\") " pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.787040 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b37c31c2-c6b4-462f-ba30-7431cacff0dc-config-data-custom\") pod \"barbican-worker-7bf5f9d487-99dwt\" (UID: \"b37c31c2-c6b4-462f-ba30-7431cacff0dc\") " pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.787536 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-dns-svc\") pod \"dnsmasq-dns-86656fcf55-b7clc\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.788913 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/267d0d72-2044-4c4e-81bd-2f555587acad-logs\") pod \"barbican-keystone-listener-6b48b74b7d-8p6dz\" (UID: \"267d0d72-2044-4c4e-81bd-2f555587acad\") " 
pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.789216 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b37c31c2-c6b4-462f-ba30-7431cacff0dc-logs\") pod \"barbican-worker-7bf5f9d487-99dwt\" (UID: \"b37c31c2-c6b4-462f-ba30-7431cacff0dc\") " pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.789431 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-ovsdbserver-sb\") pod \"dnsmasq-dns-86656fcf55-b7clc\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.792033 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-config\") pod \"dnsmasq-dns-86656fcf55-b7clc\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.792053 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-ovsdbserver-nb\") pod \"dnsmasq-dns-86656fcf55-b7clc\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.792303 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/267d0d72-2044-4c4e-81bd-2f555587acad-config-data-custom\") pod \"barbican-keystone-listener-6b48b74b7d-8p6dz\" (UID: \"267d0d72-2044-4c4e-81bd-2f555587acad\") " pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc 
kubenswrapper[4958]: I1008 08:07:40.797177 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/267d0d72-2044-4c4e-81bd-2f555587acad-combined-ca-bundle\") pod \"barbican-keystone-listener-6b48b74b7d-8p6dz\" (UID: \"267d0d72-2044-4c4e-81bd-2f555587acad\") " pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.797792 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/267d0d72-2044-4c4e-81bd-2f555587acad-config-data\") pod \"barbican-keystone-listener-6b48b74b7d-8p6dz\" (UID: \"267d0d72-2044-4c4e-81bd-2f555587acad\") " pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.802607 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b37c31c2-c6b4-462f-ba30-7431cacff0dc-config-data-custom\") pod \"barbican-worker-7bf5f9d487-99dwt\" (UID: \"b37c31c2-c6b4-462f-ba30-7431cacff0dc\") " pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.806080 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7t6m\" (UniqueName: \"kubernetes.io/projected/b37c31c2-c6b4-462f-ba30-7431cacff0dc-kube-api-access-b7t6m\") pod \"barbican-worker-7bf5f9d487-99dwt\" (UID: \"b37c31c2-c6b4-462f-ba30-7431cacff0dc\") " pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.808156 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37c31c2-c6b4-462f-ba30-7431cacff0dc-config-data\") pod \"barbican-worker-7bf5f9d487-99dwt\" (UID: \"b37c31c2-c6b4-462f-ba30-7431cacff0dc\") " pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc 
kubenswrapper[4958]: I1008 08:07:40.808658 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37c31c2-c6b4-462f-ba30-7431cacff0dc-combined-ca-bundle\") pod \"barbican-worker-7bf5f9d487-99dwt\" (UID: \"b37c31c2-c6b4-462f-ba30-7431cacff0dc\") " pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.814490 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dj9k\" (UniqueName: \"kubernetes.io/projected/267d0d72-2044-4c4e-81bd-2f555587acad-kube-api-access-6dj9k\") pod \"barbican-keystone-listener-6b48b74b7d-8p6dz\" (UID: \"267d0d72-2044-4c4e-81bd-2f555587acad\") " pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.814653 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z77nw\" (UniqueName: \"kubernetes.io/projected/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-kube-api-access-z77nw\") pod \"dnsmasq-dns-86656fcf55-b7clc\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.841628 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7bf5f9d487-99dwt" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.870531 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.889801 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-combined-ca-bundle\") pod \"barbican-api-54b75c7988-wn8qx\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.889877 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-logs\") pod \"barbican-api-54b75c7988-wn8qx\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.889907 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs2hw\" (UniqueName: \"kubernetes.io/projected/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-kube-api-access-gs2hw\") pod \"barbican-api-54b75c7988-wn8qx\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.890048 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-config-data-custom\") pod \"barbican-api-54b75c7988-wn8qx\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.890073 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-config-data\") pod 
\"barbican-api-54b75c7988-wn8qx\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.931194 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.992049 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-combined-ca-bundle\") pod \"barbican-api-54b75c7988-wn8qx\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.992091 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-logs\") pod \"barbican-api-54b75c7988-wn8qx\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.992140 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs2hw\" (UniqueName: \"kubernetes.io/projected/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-kube-api-access-gs2hw\") pod \"barbican-api-54b75c7988-wn8qx\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.992247 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-config-data-custom\") pod \"barbican-api-54b75c7988-wn8qx\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.992293 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-config-data\") pod \"barbican-api-54b75c7988-wn8qx\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.994607 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-logs\") pod \"barbican-api-54b75c7988-wn8qx\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.996721 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-config-data\") pod \"barbican-api-54b75c7988-wn8qx\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:40 crc kubenswrapper[4958]: I1008 08:07:40.997287 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-combined-ca-bundle\") pod \"barbican-api-54b75c7988-wn8qx\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:41 crc kubenswrapper[4958]: I1008 08:07:41.008910 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs2hw\" (UniqueName: \"kubernetes.io/projected/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-kube-api-access-gs2hw\") pod \"barbican-api-54b75c7988-wn8qx\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:41 crc kubenswrapper[4958]: I1008 08:07:41.021609 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-config-data-custom\") pod \"barbican-api-54b75c7988-wn8qx\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:41 crc kubenswrapper[4958]: I1008 08:07:41.031413 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:41 crc kubenswrapper[4958]: I1008 08:07:41.305406 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7bf5f9d487-99dwt"] Oct 08 08:07:41 crc kubenswrapper[4958]: I1008 08:07:41.449414 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b48b74b7d-8p6dz"] Oct 08 08:07:41 crc kubenswrapper[4958]: I1008 08:07:41.456672 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86656fcf55-b7clc"] Oct 08 08:07:41 crc kubenswrapper[4958]: I1008 08:07:41.576753 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:07:41 crc kubenswrapper[4958]: I1008 08:07:41.637881 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-54b75c7988-wn8qx"] Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.228222 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7bf5f9d487-99dwt" event={"ID":"b37c31c2-c6b4-462f-ba30-7431cacff0dc","Type":"ContainerStarted","Data":"76675adb9f2a15be2c43fe9fab51080b353d9af0a8db972529d80c118f91f70d"} Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.228696 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7bf5f9d487-99dwt" event={"ID":"b37c31c2-c6b4-462f-ba30-7431cacff0dc","Type":"ContainerStarted","Data":"35a1217b87a4b17a4bea2d52b7a4a9172093015817998d65b5b426814d92650e"} Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.228707 4958 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-worker-7bf5f9d487-99dwt" event={"ID":"b37c31c2-c6b4-462f-ba30-7431cacff0dc","Type":"ContainerStarted","Data":"da3500f14c192d70280b19af3cb79024b85401be86f9df4f650b69a684cf5af6"} Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.234566 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"99e2aedb896ded61b3f3ce708c02241cc82327087174ed41010302ce866d005a"} Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.237084 4958 generic.go:334] "Generic (PLEG): container finished" podID="e1e2cbaf-2358-4004-a8d1-69843a18c2ec" containerID="22cd048cfba0ccc551cd44215849490f5d23037d4322ad0549e03abc674e60da" exitCode=0 Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.237145 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86656fcf55-b7clc" event={"ID":"e1e2cbaf-2358-4004-a8d1-69843a18c2ec","Type":"ContainerDied","Data":"22cd048cfba0ccc551cd44215849490f5d23037d4322ad0549e03abc674e60da"} Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.237163 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86656fcf55-b7clc" event={"ID":"e1e2cbaf-2358-4004-a8d1-69843a18c2ec","Type":"ContainerStarted","Data":"160442ef7ac1b0e089e74cd16500141f8c2267790fa3f2eccce54517773143d9"} Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.241261 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54b75c7988-wn8qx" event={"ID":"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd","Type":"ContainerStarted","Data":"6c9e99c62db897451872832f80ea59edd4c6a3b6cf5173b41386aff0ea45a32b"} Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.241306 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54b75c7988-wn8qx" 
event={"ID":"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd","Type":"ContainerStarted","Data":"f2dce0ee5d5c79d123c5c2c47a0f897d57f5a38aac7193246ced13fa6ce464dd"} Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.241316 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54b75c7988-wn8qx" event={"ID":"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd","Type":"ContainerStarted","Data":"94d31e87f8f15be04a5be407d0d4b1e682b613ae36236bdea722c5dc5885c0fc"} Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.241455 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.241474 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.245854 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7bf5f9d487-99dwt" podStartSLOduration=2.245836573 podStartE2EDuration="2.245836573s" podCreationTimestamp="2025-10-08 08:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:07:42.243932471 +0000 UTC m=+5605.373625062" watchObservedRunningTime="2025-10-08 08:07:42.245836573 +0000 UTC m=+5605.375529174" Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.253209 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" event={"ID":"267d0d72-2044-4c4e-81bd-2f555587acad","Type":"ContainerStarted","Data":"6686ce2cdddbc6a0077b2129fab772c4c9d1ba9daaffbd8524472e0277f90b08"} Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.253266 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" 
event={"ID":"267d0d72-2044-4c4e-81bd-2f555587acad","Type":"ContainerStarted","Data":"4ebc9766000bf117b3e832bed5599bfc450692b9b56b436fc62ae5f8a9a8635c"} Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.253277 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" event={"ID":"267d0d72-2044-4c4e-81bd-2f555587acad","Type":"ContainerStarted","Data":"bbe75f3f1fa713b4f1283457ef82523cb0946ed249dc7169c0bc7e741197f1b7"} Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.331725 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-54b75c7988-wn8qx" podStartSLOduration=2.3317088 podStartE2EDuration="2.3317088s" podCreationTimestamp="2025-10-08 08:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:07:42.308269895 +0000 UTC m=+5605.437962496" watchObservedRunningTime="2025-10-08 08:07:42.3317088 +0000 UTC m=+5605.461401401" Oct 08 08:07:42 crc kubenswrapper[4958]: I1008 08:07:42.356758 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6b48b74b7d-8p6dz" podStartSLOduration=2.356742598 podStartE2EDuration="2.356742598s" podCreationTimestamp="2025-10-08 08:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:07:42.351367333 +0000 UTC m=+5605.481059934" watchObservedRunningTime="2025-10-08 08:07:42.356742598 +0000 UTC m=+5605.486435199" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.019467 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6987778d5b-jzqd7"] Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.021478 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.024971 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.033132 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.049808 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6987778d5b-jzqd7"] Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.137255 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/678666be-bd10-482f-81e0-d809c4ce5862-internal-tls-certs\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.137314 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/678666be-bd10-482f-81e0-d809c4ce5862-config-data-custom\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.137329 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678666be-bd10-482f-81e0-d809c4ce5862-combined-ca-bundle\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.137345 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/678666be-bd10-482f-81e0-d809c4ce5862-logs\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.137365 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mxwv\" (UniqueName: \"kubernetes.io/projected/678666be-bd10-482f-81e0-d809c4ce5862-kube-api-access-2mxwv\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.137458 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/678666be-bd10-482f-81e0-d809c4ce5862-public-tls-certs\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.137482 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678666be-bd10-482f-81e0-d809c4ce5862-config-data\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.239526 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/678666be-bd10-482f-81e0-d809c4ce5862-internal-tls-certs\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.239591 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/678666be-bd10-482f-81e0-d809c4ce5862-config-data-custom\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.239615 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678666be-bd10-482f-81e0-d809c4ce5862-combined-ca-bundle\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.239639 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/678666be-bd10-482f-81e0-d809c4ce5862-logs\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.239661 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mxwv\" (UniqueName: \"kubernetes.io/projected/678666be-bd10-482f-81e0-d809c4ce5862-kube-api-access-2mxwv\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.239719 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/678666be-bd10-482f-81e0-d809c4ce5862-public-tls-certs\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.239753 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/678666be-bd10-482f-81e0-d809c4ce5862-config-data\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.240204 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/678666be-bd10-482f-81e0-d809c4ce5862-logs\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.245884 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678666be-bd10-482f-81e0-d809c4ce5862-combined-ca-bundle\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.246046 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/678666be-bd10-482f-81e0-d809c4ce5862-config-data-custom\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.248923 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/678666be-bd10-482f-81e0-d809c4ce5862-public-tls-certs\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.258880 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678666be-bd10-482f-81e0-d809c4ce5862-config-data\") pod 
\"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.262561 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/678666be-bd10-482f-81e0-d809c4ce5862-internal-tls-certs\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.266410 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86656fcf55-b7clc" event={"ID":"e1e2cbaf-2358-4004-a8d1-69843a18c2ec","Type":"ContainerStarted","Data":"373656d0afcb581836e50525e4c65f22c67c40b9d2be8f98b9920e6c81f353b9"} Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.266855 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.275197 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mxwv\" (UniqueName: \"kubernetes.io/projected/678666be-bd10-482f-81e0-d809c4ce5862-kube-api-access-2mxwv\") pod \"barbican-api-6987778d5b-jzqd7\" (UID: \"678666be-bd10-482f-81e0-d809c4ce5862\") " pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.293418 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86656fcf55-b7clc" podStartSLOduration=3.2933959 podStartE2EDuration="3.2933959s" podCreationTimestamp="2025-10-08 08:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:07:43.28640353 +0000 UTC m=+5606.416096191" watchObservedRunningTime="2025-10-08 08:07:43.2933959 +0000 UTC m=+5606.423088511" Oct 08 08:07:43 crc 
kubenswrapper[4958]: I1008 08:07:43.350259 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:43 crc kubenswrapper[4958]: I1008 08:07:43.862334 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6987778d5b-jzqd7"] Oct 08 08:07:44 crc kubenswrapper[4958]: I1008 08:07:44.276341 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6987778d5b-jzqd7" event={"ID":"678666be-bd10-482f-81e0-d809c4ce5862","Type":"ContainerStarted","Data":"da6901872214238d5293a700c871ca76acd7255eb4c889b685e75a43bf059f49"} Oct 08 08:07:44 crc kubenswrapper[4958]: I1008 08:07:44.276929 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6987778d5b-jzqd7" event={"ID":"678666be-bd10-482f-81e0-d809c4ce5862","Type":"ContainerStarted","Data":"7c30240b7f1f5f084a768af5fccc6ac778ead69de30259646d23d08a2746a4a8"} Oct 08 08:07:44 crc kubenswrapper[4958]: I1008 08:07:44.277037 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6987778d5b-jzqd7" event={"ID":"678666be-bd10-482f-81e0-d809c4ce5862","Type":"ContainerStarted","Data":"f18124f67013be3992b2241cf85e0c3ec054a9b34c14aa61951dd2d1cc23c9ef"} Oct 08 08:07:44 crc kubenswrapper[4958]: I1008 08:07:44.277068 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:44 crc kubenswrapper[4958]: I1008 08:07:44.277085 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:44 crc kubenswrapper[4958]: I1008 08:07:44.296929 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6987778d5b-jzqd7" podStartSLOduration=2.296914943 podStartE2EDuration="2.296914943s" podCreationTimestamp="2025-10-08 08:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:07:44.294569379 +0000 UTC m=+5607.424261980" watchObservedRunningTime="2025-10-08 08:07:44.296914943 +0000 UTC m=+5607.426607534" Oct 08 08:07:45 crc kubenswrapper[4958]: I1008 08:07:45.348659 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:45 crc kubenswrapper[4958]: I1008 08:07:45.348745 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:45 crc kubenswrapper[4958]: I1008 08:07:45.425346 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:46 crc kubenswrapper[4958]: I1008 08:07:46.353565 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:46 crc kubenswrapper[4958]: I1008 08:07:46.425671 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w26mf"] Oct 08 08:07:47 crc kubenswrapper[4958]: I1008 08:07:47.375421 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:48 crc kubenswrapper[4958]: I1008 08:07:48.321103 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w26mf" podUID="1eafd6ec-7218-4b0e-b9c0-acbb59438d34" containerName="registry-server" containerID="cri-o://2797d8f7e30b26ec1b1f43035e4bc681bd5e3f067797a1fcc3a0b2d5a6945e63" gracePeriod=2 Oct 08 08:07:48 crc kubenswrapper[4958]: I1008 08:07:48.704821 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:48 crc kubenswrapper[4958]: I1008 08:07:48.881807 4958 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:48 crc kubenswrapper[4958]: I1008 08:07:48.962237 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb5dg\" (UniqueName: \"kubernetes.io/projected/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-kube-api-access-wb5dg\") pod \"1eafd6ec-7218-4b0e-b9c0-acbb59438d34\" (UID: \"1eafd6ec-7218-4b0e-b9c0-acbb59438d34\") " Oct 08 08:07:48 crc kubenswrapper[4958]: I1008 08:07:48.962433 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-catalog-content\") pod \"1eafd6ec-7218-4b0e-b9c0-acbb59438d34\" (UID: \"1eafd6ec-7218-4b0e-b9c0-acbb59438d34\") " Oct 08 08:07:48 crc kubenswrapper[4958]: I1008 08:07:48.962608 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-utilities\") pod \"1eafd6ec-7218-4b0e-b9c0-acbb59438d34\" (UID: \"1eafd6ec-7218-4b0e-b9c0-acbb59438d34\") " Oct 08 08:07:48 crc kubenswrapper[4958]: I1008 08:07:48.963529 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-utilities" (OuterVolumeSpecName: "utilities") pod "1eafd6ec-7218-4b0e-b9c0-acbb59438d34" (UID: "1eafd6ec-7218-4b0e-b9c0-acbb59438d34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:07:48 crc kubenswrapper[4958]: I1008 08:07:48.971212 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-kube-api-access-wb5dg" (OuterVolumeSpecName: "kube-api-access-wb5dg") pod "1eafd6ec-7218-4b0e-b9c0-acbb59438d34" (UID: "1eafd6ec-7218-4b0e-b9c0-acbb59438d34"). InnerVolumeSpecName "kube-api-access-wb5dg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.019521 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1eafd6ec-7218-4b0e-b9c0-acbb59438d34" (UID: "1eafd6ec-7218-4b0e-b9c0-acbb59438d34"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.064909 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb5dg\" (UniqueName: \"kubernetes.io/projected/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-kube-api-access-wb5dg\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.064961 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.064972 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eafd6ec-7218-4b0e-b9c0-acbb59438d34-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.365095 4958 generic.go:334] "Generic (PLEG): container finished" podID="1eafd6ec-7218-4b0e-b9c0-acbb59438d34" containerID="2797d8f7e30b26ec1b1f43035e4bc681bd5e3f067797a1fcc3a0b2d5a6945e63" exitCode=0 Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.365375 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w26mf" Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.365447 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w26mf" event={"ID":"1eafd6ec-7218-4b0e-b9c0-acbb59438d34","Type":"ContainerDied","Data":"2797d8f7e30b26ec1b1f43035e4bc681bd5e3f067797a1fcc3a0b2d5a6945e63"} Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.365824 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w26mf" event={"ID":"1eafd6ec-7218-4b0e-b9c0-acbb59438d34","Type":"ContainerDied","Data":"5744ac94afb9bde1135b0b59fe975b50c6c2b01edfa77e3e111d0aa55bb906ab"} Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.365875 4958 scope.go:117] "RemoveContainer" containerID="2797d8f7e30b26ec1b1f43035e4bc681bd5e3f067797a1fcc3a0b2d5a6945e63" Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.418705 4958 scope.go:117] "RemoveContainer" containerID="95d83abefbfca341d81b01eafc3fb26042520a66fe782288efcc52ae9c32a64b" Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.451359 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w26mf"] Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.468359 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w26mf"] Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.479286 4958 scope.go:117] "RemoveContainer" containerID="7b5ce803f16425c291a7c040c76b7e673acc204fd4696332a683686bd9b8c915" Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.537403 4958 scope.go:117] "RemoveContainer" containerID="2797d8f7e30b26ec1b1f43035e4bc681bd5e3f067797a1fcc3a0b2d5a6945e63" Oct 08 08:07:49 crc kubenswrapper[4958]: E1008 08:07:49.538120 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2797d8f7e30b26ec1b1f43035e4bc681bd5e3f067797a1fcc3a0b2d5a6945e63\": container with ID starting with 2797d8f7e30b26ec1b1f43035e4bc681bd5e3f067797a1fcc3a0b2d5a6945e63 not found: ID does not exist" containerID="2797d8f7e30b26ec1b1f43035e4bc681bd5e3f067797a1fcc3a0b2d5a6945e63" Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.538169 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2797d8f7e30b26ec1b1f43035e4bc681bd5e3f067797a1fcc3a0b2d5a6945e63"} err="failed to get container status \"2797d8f7e30b26ec1b1f43035e4bc681bd5e3f067797a1fcc3a0b2d5a6945e63\": rpc error: code = NotFound desc = could not find container \"2797d8f7e30b26ec1b1f43035e4bc681bd5e3f067797a1fcc3a0b2d5a6945e63\": container with ID starting with 2797d8f7e30b26ec1b1f43035e4bc681bd5e3f067797a1fcc3a0b2d5a6945e63 not found: ID does not exist" Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.538206 4958 scope.go:117] "RemoveContainer" containerID="95d83abefbfca341d81b01eafc3fb26042520a66fe782288efcc52ae9c32a64b" Oct 08 08:07:49 crc kubenswrapper[4958]: E1008 08:07:49.538725 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95d83abefbfca341d81b01eafc3fb26042520a66fe782288efcc52ae9c32a64b\": container with ID starting with 95d83abefbfca341d81b01eafc3fb26042520a66fe782288efcc52ae9c32a64b not found: ID does not exist" containerID="95d83abefbfca341d81b01eafc3fb26042520a66fe782288efcc52ae9c32a64b" Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.538796 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d83abefbfca341d81b01eafc3fb26042520a66fe782288efcc52ae9c32a64b"} err="failed to get container status \"95d83abefbfca341d81b01eafc3fb26042520a66fe782288efcc52ae9c32a64b\": rpc error: code = NotFound desc = could not find container \"95d83abefbfca341d81b01eafc3fb26042520a66fe782288efcc52ae9c32a64b\": container with ID 
starting with 95d83abefbfca341d81b01eafc3fb26042520a66fe782288efcc52ae9c32a64b not found: ID does not exist" Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.538842 4958 scope.go:117] "RemoveContainer" containerID="7b5ce803f16425c291a7c040c76b7e673acc204fd4696332a683686bd9b8c915" Oct 08 08:07:49 crc kubenswrapper[4958]: E1008 08:07:49.539442 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b5ce803f16425c291a7c040c76b7e673acc204fd4696332a683686bd9b8c915\": container with ID starting with 7b5ce803f16425c291a7c040c76b7e673acc204fd4696332a683686bd9b8c915 not found: ID does not exist" containerID="7b5ce803f16425c291a7c040c76b7e673acc204fd4696332a683686bd9b8c915" Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.539481 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b5ce803f16425c291a7c040c76b7e673acc204fd4696332a683686bd9b8c915"} err="failed to get container status \"7b5ce803f16425c291a7c040c76b7e673acc204fd4696332a683686bd9b8c915\": rpc error: code = NotFound desc = could not find container \"7b5ce803f16425c291a7c040c76b7e673acc204fd4696332a683686bd9b8c915\": container with ID starting with 7b5ce803f16425c291a7c040c76b7e673acc204fd4696332a683686bd9b8c915 not found: ID does not exist" Oct 08 08:07:49 crc kubenswrapper[4958]: I1008 08:07:49.588776 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eafd6ec-7218-4b0e-b9c0-acbb59438d34" path="/var/lib/kubelet/pods/1eafd6ec-7218-4b0e-b9c0-acbb59438d34/volumes" Oct 08 08:07:50 crc kubenswrapper[4958]: I1008 08:07:50.934117 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.015281 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6849ffbbb9-5hd2h"] Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 
08:07:51.016762 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" podUID="d1047855-329a-4c72-84b5-9a689472bcc6" containerName="dnsmasq-dns" containerID="cri-o://611ea312a9b989057e7b81c78f2917572f1d374e83d13f917c5b841cbc867fc9" gracePeriod=10 Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.384513 4958 generic.go:334] "Generic (PLEG): container finished" podID="d1047855-329a-4c72-84b5-9a689472bcc6" containerID="611ea312a9b989057e7b81c78f2917572f1d374e83d13f917c5b841cbc867fc9" exitCode=0 Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.384767 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" event={"ID":"d1047855-329a-4c72-84b5-9a689472bcc6","Type":"ContainerDied","Data":"611ea312a9b989057e7b81c78f2917572f1d374e83d13f917c5b841cbc867fc9"} Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.547485 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.616585 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-config\") pod \"d1047855-329a-4c72-84b5-9a689472bcc6\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.616648 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-ovsdbserver-sb\") pod \"d1047855-329a-4c72-84b5-9a689472bcc6\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.616822 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-dns-svc\") pod \"d1047855-329a-4c72-84b5-9a689472bcc6\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.616971 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tcqd\" (UniqueName: \"kubernetes.io/projected/d1047855-329a-4c72-84b5-9a689472bcc6-kube-api-access-9tcqd\") pod \"d1047855-329a-4c72-84b5-9a689472bcc6\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.617028 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-ovsdbserver-nb\") pod \"d1047855-329a-4c72-84b5-9a689472bcc6\" (UID: \"d1047855-329a-4c72-84b5-9a689472bcc6\") " Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.627437 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1047855-329a-4c72-84b5-9a689472bcc6-kube-api-access-9tcqd" (OuterVolumeSpecName: "kube-api-access-9tcqd") pod "d1047855-329a-4c72-84b5-9a689472bcc6" (UID: "d1047855-329a-4c72-84b5-9a689472bcc6"). InnerVolumeSpecName "kube-api-access-9tcqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.660826 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-config" (OuterVolumeSpecName: "config") pod "d1047855-329a-4c72-84b5-9a689472bcc6" (UID: "d1047855-329a-4c72-84b5-9a689472bcc6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.664512 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d1047855-329a-4c72-84b5-9a689472bcc6" (UID: "d1047855-329a-4c72-84b5-9a689472bcc6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.664567 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d1047855-329a-4c72-84b5-9a689472bcc6" (UID: "d1047855-329a-4c72-84b5-9a689472bcc6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.668324 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1047855-329a-4c72-84b5-9a689472bcc6" (UID: "d1047855-329a-4c72-84b5-9a689472bcc6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.719725 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tcqd\" (UniqueName: \"kubernetes.io/projected/d1047855-329a-4c72-84b5-9a689472bcc6-kube-api-access-9tcqd\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.719765 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.719777 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.719786 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:51 crc kubenswrapper[4958]: I1008 08:07:51.719797 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1047855-329a-4c72-84b5-9a689472bcc6-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:52 crc kubenswrapper[4958]: I1008 08:07:52.400210 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" event={"ID":"d1047855-329a-4c72-84b5-9a689472bcc6","Type":"ContainerDied","Data":"0d69ef7b3f91be9f5d60014ab984746e785fbc09d7a5572f54ab45e52628b963"} Oct 08 08:07:52 crc kubenswrapper[4958]: I1008 08:07:52.400335 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6849ffbbb9-5hd2h" Oct 08 08:07:52 crc kubenswrapper[4958]: I1008 08:07:52.400612 4958 scope.go:117] "RemoveContainer" containerID="611ea312a9b989057e7b81c78f2917572f1d374e83d13f917c5b841cbc867fc9" Oct 08 08:07:52 crc kubenswrapper[4958]: I1008 08:07:52.450327 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6849ffbbb9-5hd2h"] Oct 08 08:07:52 crc kubenswrapper[4958]: I1008 08:07:52.452096 4958 scope.go:117] "RemoveContainer" containerID="e06e014c2d9acd4aeeb7790e4862890ec0678829fab5f3d13af4c49264109db3" Oct 08 08:07:52 crc kubenswrapper[4958]: I1008 08:07:52.459403 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6849ffbbb9-5hd2h"] Oct 08 08:07:53 crc kubenswrapper[4958]: I1008 08:07:53.594616 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1047855-329a-4c72-84b5-9a689472bcc6" path="/var/lib/kubelet/pods/d1047855-329a-4c72-84b5-9a689472bcc6/volumes" Oct 08 08:07:54 crc kubenswrapper[4958]: I1008 08:07:54.682159 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:54 crc kubenswrapper[4958]: I1008 08:07:54.729718 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6987778d5b-jzqd7" Oct 08 08:07:54 crc kubenswrapper[4958]: I1008 08:07:54.797303 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-54b75c7988-wn8qx"] Oct 08 08:07:54 crc kubenswrapper[4958]: I1008 08:07:54.797579 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-54b75c7988-wn8qx" podUID="4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" containerName="barbican-api-log" containerID="cri-o://f2dce0ee5d5c79d123c5c2c47a0f897d57f5a38aac7193246ced13fa6ce464dd" gracePeriod=30 Oct 08 08:07:54 crc kubenswrapper[4958]: I1008 08:07:54.797755 4958 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/barbican-api-54b75c7988-wn8qx" podUID="4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" containerName="barbican-api" containerID="cri-o://6c9e99c62db897451872832f80ea59edd4c6a3b6cf5173b41386aff0ea45a32b" gracePeriod=30 Oct 08 08:07:55 crc kubenswrapper[4958]: I1008 08:07:55.454494 4958 generic.go:334] "Generic (PLEG): container finished" podID="4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" containerID="f2dce0ee5d5c79d123c5c2c47a0f897d57f5a38aac7193246ced13fa6ce464dd" exitCode=143 Oct 08 08:07:55 crc kubenswrapper[4958]: I1008 08:07:55.454622 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54b75c7988-wn8qx" event={"ID":"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd","Type":"ContainerDied","Data":"f2dce0ee5d5c79d123c5c2c47a0f897d57f5a38aac7193246ced13fa6ce464dd"} Oct 08 08:07:57 crc kubenswrapper[4958]: I1008 08:07:57.963190 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-54b75c7988-wn8qx" podUID="4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.36:9311/healthcheck\": read tcp 10.217.0.2:53794->10.217.1.36:9311: read: connection reset by peer" Oct 08 08:07:57 crc kubenswrapper[4958]: I1008 08:07:57.963223 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-54b75c7988-wn8qx" podUID="4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.36:9311/healthcheck\": read tcp 10.217.0.2:53798->10.217.1.36:9311: read: connection reset by peer" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.386216 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.485672 4958 generic.go:334] "Generic (PLEG): container finished" podID="4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" containerID="6c9e99c62db897451872832f80ea59edd4c6a3b6cf5173b41386aff0ea45a32b" exitCode=0 Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.485737 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-54b75c7988-wn8qx" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.485728 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54b75c7988-wn8qx" event={"ID":"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd","Type":"ContainerDied","Data":"6c9e99c62db897451872832f80ea59edd4c6a3b6cf5173b41386aff0ea45a32b"} Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.486208 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54b75c7988-wn8qx" event={"ID":"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd","Type":"ContainerDied","Data":"94d31e87f8f15be04a5be407d0d4b1e682b613ae36236bdea722c5dc5885c0fc"} Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.486253 4958 scope.go:117] "RemoveContainer" containerID="6c9e99c62db897451872832f80ea59edd4c6a3b6cf5173b41386aff0ea45a32b" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.514122 4958 scope.go:117] "RemoveContainer" containerID="f2dce0ee5d5c79d123c5c2c47a0f897d57f5a38aac7193246ced13fa6ce464dd" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.535586 4958 scope.go:117] "RemoveContainer" containerID="6c9e99c62db897451872832f80ea59edd4c6a3b6cf5173b41386aff0ea45a32b" Oct 08 08:07:58 crc kubenswrapper[4958]: E1008 08:07:58.535976 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c9e99c62db897451872832f80ea59edd4c6a3b6cf5173b41386aff0ea45a32b\": container with ID starting with 
6c9e99c62db897451872832f80ea59edd4c6a3b6cf5173b41386aff0ea45a32b not found: ID does not exist" containerID="6c9e99c62db897451872832f80ea59edd4c6a3b6cf5173b41386aff0ea45a32b" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.536029 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c9e99c62db897451872832f80ea59edd4c6a3b6cf5173b41386aff0ea45a32b"} err="failed to get container status \"6c9e99c62db897451872832f80ea59edd4c6a3b6cf5173b41386aff0ea45a32b\": rpc error: code = NotFound desc = could not find container \"6c9e99c62db897451872832f80ea59edd4c6a3b6cf5173b41386aff0ea45a32b\": container with ID starting with 6c9e99c62db897451872832f80ea59edd4c6a3b6cf5173b41386aff0ea45a32b not found: ID does not exist" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.536066 4958 scope.go:117] "RemoveContainer" containerID="f2dce0ee5d5c79d123c5c2c47a0f897d57f5a38aac7193246ced13fa6ce464dd" Oct 08 08:07:58 crc kubenswrapper[4958]: E1008 08:07:58.536480 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2dce0ee5d5c79d123c5c2c47a0f897d57f5a38aac7193246ced13fa6ce464dd\": container with ID starting with f2dce0ee5d5c79d123c5c2c47a0f897d57f5a38aac7193246ced13fa6ce464dd not found: ID does not exist" containerID="f2dce0ee5d5c79d123c5c2c47a0f897d57f5a38aac7193246ced13fa6ce464dd" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.536518 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2dce0ee5d5c79d123c5c2c47a0f897d57f5a38aac7193246ced13fa6ce464dd"} err="failed to get container status \"f2dce0ee5d5c79d123c5c2c47a0f897d57f5a38aac7193246ced13fa6ce464dd\": rpc error: code = NotFound desc = could not find container \"f2dce0ee5d5c79d123c5c2c47a0f897d57f5a38aac7193246ced13fa6ce464dd\": container with ID starting with f2dce0ee5d5c79d123c5c2c47a0f897d57f5a38aac7193246ced13fa6ce464dd not found: ID does not 
exist" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.554671 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-combined-ca-bundle\") pod \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.554974 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-config-data\") pod \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.555029 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs2hw\" (UniqueName: \"kubernetes.io/projected/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-kube-api-access-gs2hw\") pod \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.555097 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-logs\") pod \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.555172 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-config-data-custom\") pod \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\" (UID: \"4b527fa1-7cfa-4f43-b533-0f7206e6c0cd\") " Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.555725 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-logs" 
(OuterVolumeSpecName: "logs") pod "4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" (UID: "4b527fa1-7cfa-4f43-b533-0f7206e6c0cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.561631 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-kube-api-access-gs2hw" (OuterVolumeSpecName: "kube-api-access-gs2hw") pod "4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" (UID: "4b527fa1-7cfa-4f43-b533-0f7206e6c0cd"). InnerVolumeSpecName "kube-api-access-gs2hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.561988 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" (UID: "4b527fa1-7cfa-4f43-b533-0f7206e6c0cd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.583592 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" (UID: "4b527fa1-7cfa-4f43-b533-0f7206e6c0cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.633187 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-config-data" (OuterVolumeSpecName: "config-data") pod "4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" (UID: "4b527fa1-7cfa-4f43-b533-0f7206e6c0cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.660426 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.660480 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs2hw\" (UniqueName: \"kubernetes.io/projected/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-kube-api-access-gs2hw\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.660506 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.660528 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.660551 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.847099 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-54b75c7988-wn8qx"] Oct 08 08:07:58 crc kubenswrapper[4958]: I1008 08:07:58.862863 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-54b75c7988-wn8qx"] Oct 08 08:07:59 crc kubenswrapper[4958]: I1008 08:07:59.601437 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" path="/var/lib/kubelet/pods/4b527fa1-7cfa-4f43-b533-0f7206e6c0cd/volumes" Oct 08 08:08:06 crc 
kubenswrapper[4958]: I1008 08:08:06.614737 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-hpm46"] Oct 08 08:08:06 crc kubenswrapper[4958]: E1008 08:08:06.615433 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1047855-329a-4c72-84b5-9a689472bcc6" containerName="init" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.615449 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1047855-329a-4c72-84b5-9a689472bcc6" containerName="init" Oct 08 08:08:06 crc kubenswrapper[4958]: E1008 08:08:06.615465 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eafd6ec-7218-4b0e-b9c0-acbb59438d34" containerName="registry-server" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.615471 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eafd6ec-7218-4b0e-b9c0-acbb59438d34" containerName="registry-server" Oct 08 08:08:06 crc kubenswrapper[4958]: E1008 08:08:06.615488 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eafd6ec-7218-4b0e-b9c0-acbb59438d34" containerName="extract-content" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.615494 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eafd6ec-7218-4b0e-b9c0-acbb59438d34" containerName="extract-content" Oct 08 08:08:06 crc kubenswrapper[4958]: E1008 08:08:06.615503 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eafd6ec-7218-4b0e-b9c0-acbb59438d34" containerName="extract-utilities" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.615508 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eafd6ec-7218-4b0e-b9c0-acbb59438d34" containerName="extract-utilities" Oct 08 08:08:06 crc kubenswrapper[4958]: E1008 08:08:06.615518 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1047855-329a-4c72-84b5-9a689472bcc6" containerName="dnsmasq-dns" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.615524 4958 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d1047855-329a-4c72-84b5-9a689472bcc6" containerName="dnsmasq-dns" Oct 08 08:08:06 crc kubenswrapper[4958]: E1008 08:08:06.615535 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" containerName="barbican-api-log" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.615541 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" containerName="barbican-api-log" Oct 08 08:08:06 crc kubenswrapper[4958]: E1008 08:08:06.615551 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" containerName="barbican-api" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.615557 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" containerName="barbican-api" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.615753 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" containerName="barbican-api-log" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.615769 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b527fa1-7cfa-4f43-b533-0f7206e6c0cd" containerName="barbican-api" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.615780 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1047855-329a-4c72-84b5-9a689472bcc6" containerName="dnsmasq-dns" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.615799 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eafd6ec-7218-4b0e-b9c0-acbb59438d34" containerName="registry-server" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.616446 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-hpm46" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.627558 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-hpm46"] Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.726763 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skpgt\" (UniqueName: \"kubernetes.io/projected/8cf7b6cf-6c29-48f9-b221-572a5dfd3411-kube-api-access-skpgt\") pod \"neutron-db-create-hpm46\" (UID: \"8cf7b6cf-6c29-48f9-b221-572a5dfd3411\") " pod="openstack/neutron-db-create-hpm46" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.828669 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skpgt\" (UniqueName: \"kubernetes.io/projected/8cf7b6cf-6c29-48f9-b221-572a5dfd3411-kube-api-access-skpgt\") pod \"neutron-db-create-hpm46\" (UID: \"8cf7b6cf-6c29-48f9-b221-572a5dfd3411\") " pod="openstack/neutron-db-create-hpm46" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.859324 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skpgt\" (UniqueName: \"kubernetes.io/projected/8cf7b6cf-6c29-48f9-b221-572a5dfd3411-kube-api-access-skpgt\") pod \"neutron-db-create-hpm46\" (UID: \"8cf7b6cf-6c29-48f9-b221-572a5dfd3411\") " pod="openstack/neutron-db-create-hpm46" Oct 08 08:08:06 crc kubenswrapper[4958]: I1008 08:08:06.951017 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-hpm46" Oct 08 08:08:07 crc kubenswrapper[4958]: I1008 08:08:07.433257 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-hpm46"] Oct 08 08:08:07 crc kubenswrapper[4958]: I1008 08:08:07.605071 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hpm46" event={"ID":"8cf7b6cf-6c29-48f9-b221-572a5dfd3411","Type":"ContainerStarted","Data":"d20593902791de04bb5ece230c46ae929109bfd93b59254c222ecd94ea3e853f"} Oct 08 08:08:08 crc kubenswrapper[4958]: I1008 08:08:08.599491 4958 generic.go:334] "Generic (PLEG): container finished" podID="8cf7b6cf-6c29-48f9-b221-572a5dfd3411" containerID="e9294a77a0841f8fb76d29ba84f0382c66a834df8da5b48c39e6c842a9a2eba8" exitCode=0 Oct 08 08:08:08 crc kubenswrapper[4958]: I1008 08:08:08.599550 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hpm46" event={"ID":"8cf7b6cf-6c29-48f9-b221-572a5dfd3411","Type":"ContainerDied","Data":"e9294a77a0841f8fb76d29ba84f0382c66a834df8da5b48c39e6c842a9a2eba8"} Oct 08 08:08:10 crc kubenswrapper[4958]: I1008 08:08:10.035101 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-hpm46" Oct 08 08:08:10 crc kubenswrapper[4958]: I1008 08:08:10.198498 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skpgt\" (UniqueName: \"kubernetes.io/projected/8cf7b6cf-6c29-48f9-b221-572a5dfd3411-kube-api-access-skpgt\") pod \"8cf7b6cf-6c29-48f9-b221-572a5dfd3411\" (UID: \"8cf7b6cf-6c29-48f9-b221-572a5dfd3411\") " Oct 08 08:08:10 crc kubenswrapper[4958]: I1008 08:08:10.208075 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf7b6cf-6c29-48f9-b221-572a5dfd3411-kube-api-access-skpgt" (OuterVolumeSpecName: "kube-api-access-skpgt") pod "8cf7b6cf-6c29-48f9-b221-572a5dfd3411" (UID: "8cf7b6cf-6c29-48f9-b221-572a5dfd3411"). InnerVolumeSpecName "kube-api-access-skpgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:08:10 crc kubenswrapper[4958]: I1008 08:08:10.301182 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skpgt\" (UniqueName: \"kubernetes.io/projected/8cf7b6cf-6c29-48f9-b221-572a5dfd3411-kube-api-access-skpgt\") on node \"crc\" DevicePath \"\"" Oct 08 08:08:10 crc kubenswrapper[4958]: I1008 08:08:10.628181 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hpm46" event={"ID":"8cf7b6cf-6c29-48f9-b221-572a5dfd3411","Type":"ContainerDied","Data":"d20593902791de04bb5ece230c46ae929109bfd93b59254c222ecd94ea3e853f"} Oct 08 08:08:10 crc kubenswrapper[4958]: I1008 08:08:10.628231 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d20593902791de04bb5ece230c46ae929109bfd93b59254c222ecd94ea3e853f" Oct 08 08:08:10 crc kubenswrapper[4958]: I1008 08:08:10.628275 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-hpm46" Oct 08 08:08:16 crc kubenswrapper[4958]: I1008 08:08:16.731919 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-180c-account-create-tpz8x"] Oct 08 08:08:16 crc kubenswrapper[4958]: E1008 08:08:16.733692 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf7b6cf-6c29-48f9-b221-572a5dfd3411" containerName="mariadb-database-create" Oct 08 08:08:16 crc kubenswrapper[4958]: I1008 08:08:16.733709 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf7b6cf-6c29-48f9-b221-572a5dfd3411" containerName="mariadb-database-create" Oct 08 08:08:16 crc kubenswrapper[4958]: I1008 08:08:16.733900 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf7b6cf-6c29-48f9-b221-572a5dfd3411" containerName="mariadb-database-create" Oct 08 08:08:16 crc kubenswrapper[4958]: I1008 08:08:16.734908 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-180c-account-create-tpz8x" Oct 08 08:08:16 crc kubenswrapper[4958]: I1008 08:08:16.740821 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 08 08:08:16 crc kubenswrapper[4958]: I1008 08:08:16.748587 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-180c-account-create-tpz8x"] Oct 08 08:08:16 crc kubenswrapper[4958]: I1008 08:08:16.840045 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s8qw\" (UniqueName: \"kubernetes.io/projected/f8e8bf5d-380f-471d-8a08-85b41730a075-kube-api-access-8s8qw\") pod \"neutron-180c-account-create-tpz8x\" (UID: \"f8e8bf5d-380f-471d-8a08-85b41730a075\") " pod="openstack/neutron-180c-account-create-tpz8x" Oct 08 08:08:16 crc kubenswrapper[4958]: I1008 08:08:16.942515 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s8qw\" (UniqueName: 
\"kubernetes.io/projected/f8e8bf5d-380f-471d-8a08-85b41730a075-kube-api-access-8s8qw\") pod \"neutron-180c-account-create-tpz8x\" (UID: \"f8e8bf5d-380f-471d-8a08-85b41730a075\") " pod="openstack/neutron-180c-account-create-tpz8x" Oct 08 08:08:16 crc kubenswrapper[4958]: I1008 08:08:16.976442 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s8qw\" (UniqueName: \"kubernetes.io/projected/f8e8bf5d-380f-471d-8a08-85b41730a075-kube-api-access-8s8qw\") pod \"neutron-180c-account-create-tpz8x\" (UID: \"f8e8bf5d-380f-471d-8a08-85b41730a075\") " pod="openstack/neutron-180c-account-create-tpz8x" Oct 08 08:08:17 crc kubenswrapper[4958]: I1008 08:08:17.071499 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-180c-account-create-tpz8x" Oct 08 08:08:17 crc kubenswrapper[4958]: I1008 08:08:17.531726 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-180c-account-create-tpz8x"] Oct 08 08:08:17 crc kubenswrapper[4958]: I1008 08:08:17.707697 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-180c-account-create-tpz8x" event={"ID":"f8e8bf5d-380f-471d-8a08-85b41730a075","Type":"ContainerStarted","Data":"b046bdb69f5c2539f03e57f84eab7731f2de107d70db778999d44386a95556e0"} Oct 08 08:08:18 crc kubenswrapper[4958]: I1008 08:08:18.719691 4958 generic.go:334] "Generic (PLEG): container finished" podID="f8e8bf5d-380f-471d-8a08-85b41730a075" containerID="36c0053045ab46b874c5d29275df331798910d25638fe62a43a11775961e571a" exitCode=0 Oct 08 08:08:18 crc kubenswrapper[4958]: I1008 08:08:18.719799 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-180c-account-create-tpz8x" event={"ID":"f8e8bf5d-380f-471d-8a08-85b41730a075","Type":"ContainerDied","Data":"36c0053045ab46b874c5d29275df331798910d25638fe62a43a11775961e571a"} Oct 08 08:08:20 crc kubenswrapper[4958]: I1008 08:08:20.177366 4958 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/neutron-180c-account-create-tpz8x" Oct 08 08:08:20 crc kubenswrapper[4958]: I1008 08:08:20.362822 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s8qw\" (UniqueName: \"kubernetes.io/projected/f8e8bf5d-380f-471d-8a08-85b41730a075-kube-api-access-8s8qw\") pod \"f8e8bf5d-380f-471d-8a08-85b41730a075\" (UID: \"f8e8bf5d-380f-471d-8a08-85b41730a075\") " Oct 08 08:08:20 crc kubenswrapper[4958]: I1008 08:08:20.369174 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e8bf5d-380f-471d-8a08-85b41730a075-kube-api-access-8s8qw" (OuterVolumeSpecName: "kube-api-access-8s8qw") pod "f8e8bf5d-380f-471d-8a08-85b41730a075" (UID: "f8e8bf5d-380f-471d-8a08-85b41730a075"). InnerVolumeSpecName "kube-api-access-8s8qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:08:20 crc kubenswrapper[4958]: I1008 08:08:20.465918 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s8qw\" (UniqueName: \"kubernetes.io/projected/f8e8bf5d-380f-471d-8a08-85b41730a075-kube-api-access-8s8qw\") on node \"crc\" DevicePath \"\"" Oct 08 08:08:20 crc kubenswrapper[4958]: I1008 08:08:20.742176 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-180c-account-create-tpz8x" event={"ID":"f8e8bf5d-380f-471d-8a08-85b41730a075","Type":"ContainerDied","Data":"b046bdb69f5c2539f03e57f84eab7731f2de107d70db778999d44386a95556e0"} Oct 08 08:08:20 crc kubenswrapper[4958]: I1008 08:08:20.742293 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-180c-account-create-tpz8x" Oct 08 08:08:20 crc kubenswrapper[4958]: I1008 08:08:20.742263 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b046bdb69f5c2539f03e57f84eab7731f2de107d70db778999d44386a95556e0" Oct 08 08:08:21 crc kubenswrapper[4958]: I1008 08:08:21.987897 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-57cqr"] Oct 08 08:08:21 crc kubenswrapper[4958]: E1008 08:08:21.988469 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e8bf5d-380f-471d-8a08-85b41730a075" containerName="mariadb-account-create" Oct 08 08:08:21 crc kubenswrapper[4958]: I1008 08:08:21.988493 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e8bf5d-380f-471d-8a08-85b41730a075" containerName="mariadb-account-create" Oct 08 08:08:21 crc kubenswrapper[4958]: I1008 08:08:21.988800 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e8bf5d-380f-471d-8a08-85b41730a075" containerName="mariadb-account-create" Oct 08 08:08:21 crc kubenswrapper[4958]: I1008 08:08:21.989756 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-57cqr" Oct 08 08:08:21 crc kubenswrapper[4958]: I1008 08:08:21.992048 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 08 08:08:21 crc kubenswrapper[4958]: I1008 08:08:21.992341 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-m9jjq" Oct 08 08:08:21 crc kubenswrapper[4958]: I1008 08:08:21.992472 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 08 08:08:21 crc kubenswrapper[4958]: I1008 08:08:21.999741 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-57cqr"] Oct 08 08:08:22 crc kubenswrapper[4958]: I1008 08:08:22.095657 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ed71f6e-5973-44b5-ab2b-34be01d43eef-config\") pod \"neutron-db-sync-57cqr\" (UID: \"3ed71f6e-5973-44b5-ab2b-34be01d43eef\") " pod="openstack/neutron-db-sync-57cqr" Oct 08 08:08:22 crc kubenswrapper[4958]: I1008 08:08:22.096133 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed71f6e-5973-44b5-ab2b-34be01d43eef-combined-ca-bundle\") pod \"neutron-db-sync-57cqr\" (UID: \"3ed71f6e-5973-44b5-ab2b-34be01d43eef\") " pod="openstack/neutron-db-sync-57cqr" Oct 08 08:08:22 crc kubenswrapper[4958]: I1008 08:08:22.096293 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrt9q\" (UniqueName: \"kubernetes.io/projected/3ed71f6e-5973-44b5-ab2b-34be01d43eef-kube-api-access-nrt9q\") pod \"neutron-db-sync-57cqr\" (UID: \"3ed71f6e-5973-44b5-ab2b-34be01d43eef\") " pod="openstack/neutron-db-sync-57cqr" Oct 08 08:08:22 crc kubenswrapper[4958]: I1008 08:08:22.198528 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nrt9q\" (UniqueName: \"kubernetes.io/projected/3ed71f6e-5973-44b5-ab2b-34be01d43eef-kube-api-access-nrt9q\") pod \"neutron-db-sync-57cqr\" (UID: \"3ed71f6e-5973-44b5-ab2b-34be01d43eef\") " pod="openstack/neutron-db-sync-57cqr" Oct 08 08:08:22 crc kubenswrapper[4958]: I1008 08:08:22.198702 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ed71f6e-5973-44b5-ab2b-34be01d43eef-config\") pod \"neutron-db-sync-57cqr\" (UID: \"3ed71f6e-5973-44b5-ab2b-34be01d43eef\") " pod="openstack/neutron-db-sync-57cqr" Oct 08 08:08:22 crc kubenswrapper[4958]: I1008 08:08:22.198725 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed71f6e-5973-44b5-ab2b-34be01d43eef-combined-ca-bundle\") pod \"neutron-db-sync-57cqr\" (UID: \"3ed71f6e-5973-44b5-ab2b-34be01d43eef\") " pod="openstack/neutron-db-sync-57cqr" Oct 08 08:08:22 crc kubenswrapper[4958]: I1008 08:08:22.205799 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed71f6e-5973-44b5-ab2b-34be01d43eef-combined-ca-bundle\") pod \"neutron-db-sync-57cqr\" (UID: \"3ed71f6e-5973-44b5-ab2b-34be01d43eef\") " pod="openstack/neutron-db-sync-57cqr" Oct 08 08:08:22 crc kubenswrapper[4958]: I1008 08:08:22.205931 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ed71f6e-5973-44b5-ab2b-34be01d43eef-config\") pod \"neutron-db-sync-57cqr\" (UID: \"3ed71f6e-5973-44b5-ab2b-34be01d43eef\") " pod="openstack/neutron-db-sync-57cqr" Oct 08 08:08:22 crc kubenswrapper[4958]: I1008 08:08:22.228375 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrt9q\" (UniqueName: 
\"kubernetes.io/projected/3ed71f6e-5973-44b5-ab2b-34be01d43eef-kube-api-access-nrt9q\") pod \"neutron-db-sync-57cqr\" (UID: \"3ed71f6e-5973-44b5-ab2b-34be01d43eef\") " pod="openstack/neutron-db-sync-57cqr" Oct 08 08:08:22 crc kubenswrapper[4958]: I1008 08:08:22.314825 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-57cqr" Oct 08 08:08:22 crc kubenswrapper[4958]: I1008 08:08:22.834229 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-57cqr"] Oct 08 08:08:23 crc kubenswrapper[4958]: I1008 08:08:23.771156 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-57cqr" event={"ID":"3ed71f6e-5973-44b5-ab2b-34be01d43eef","Type":"ContainerStarted","Data":"cdadd2ccafa80ae4a5c4e3e827f7319432b72cc9b9eaa25a4eaa51377a6b14c1"} Oct 08 08:08:23 crc kubenswrapper[4958]: I1008 08:08:23.771610 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-57cqr" event={"ID":"3ed71f6e-5973-44b5-ab2b-34be01d43eef","Type":"ContainerStarted","Data":"baf263e0d1ff2e1b85023966cfb28f9bbf20e002808a4423ad038770d554f6e7"} Oct 08 08:08:23 crc kubenswrapper[4958]: I1008 08:08:23.802146 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-57cqr" podStartSLOduration=2.802115416 podStartE2EDuration="2.802115416s" podCreationTimestamp="2025-10-08 08:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:08:23.795095185 +0000 UTC m=+5646.924787866" watchObservedRunningTime="2025-10-08 08:08:23.802115416 +0000 UTC m=+5646.931808047" Oct 08 08:08:27 crc kubenswrapper[4958]: I1008 08:08:27.812725 4958 generic.go:334] "Generic (PLEG): container finished" podID="3ed71f6e-5973-44b5-ab2b-34be01d43eef" containerID="cdadd2ccafa80ae4a5c4e3e827f7319432b72cc9b9eaa25a4eaa51377a6b14c1" exitCode=0 Oct 08 08:08:27 crc 
kubenswrapper[4958]: I1008 08:08:27.812867 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-57cqr" event={"ID":"3ed71f6e-5973-44b5-ab2b-34be01d43eef","Type":"ContainerDied","Data":"cdadd2ccafa80ae4a5c4e3e827f7319432b72cc9b9eaa25a4eaa51377a6b14c1"} Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.250002 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-57cqr" Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.437751 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrt9q\" (UniqueName: \"kubernetes.io/projected/3ed71f6e-5973-44b5-ab2b-34be01d43eef-kube-api-access-nrt9q\") pod \"3ed71f6e-5973-44b5-ab2b-34be01d43eef\" (UID: \"3ed71f6e-5973-44b5-ab2b-34be01d43eef\") " Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.438103 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ed71f6e-5973-44b5-ab2b-34be01d43eef-config\") pod \"3ed71f6e-5973-44b5-ab2b-34be01d43eef\" (UID: \"3ed71f6e-5973-44b5-ab2b-34be01d43eef\") " Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.438347 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed71f6e-5973-44b5-ab2b-34be01d43eef-combined-ca-bundle\") pod \"3ed71f6e-5973-44b5-ab2b-34be01d43eef\" (UID: \"3ed71f6e-5973-44b5-ab2b-34be01d43eef\") " Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.445584 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed71f6e-5973-44b5-ab2b-34be01d43eef-kube-api-access-nrt9q" (OuterVolumeSpecName: "kube-api-access-nrt9q") pod "3ed71f6e-5973-44b5-ab2b-34be01d43eef" (UID: "3ed71f6e-5973-44b5-ab2b-34be01d43eef"). InnerVolumeSpecName "kube-api-access-nrt9q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.485476 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed71f6e-5973-44b5-ab2b-34be01d43eef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ed71f6e-5973-44b5-ab2b-34be01d43eef" (UID: "3ed71f6e-5973-44b5-ab2b-34be01d43eef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.486193 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed71f6e-5973-44b5-ab2b-34be01d43eef-config" (OuterVolumeSpecName: "config") pod "3ed71f6e-5973-44b5-ab2b-34be01d43eef" (UID: "3ed71f6e-5973-44b5-ab2b-34be01d43eef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.540455 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ed71f6e-5973-44b5-ab2b-34be01d43eef-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.540541 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed71f6e-5973-44b5-ab2b-34be01d43eef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.540574 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrt9q\" (UniqueName: \"kubernetes.io/projected/3ed71f6e-5973-44b5-ab2b-34be01d43eef-kube-api-access-nrt9q\") on node \"crc\" DevicePath \"\"" Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.837312 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-57cqr" 
event={"ID":"3ed71f6e-5973-44b5-ab2b-34be01d43eef","Type":"ContainerDied","Data":"baf263e0d1ff2e1b85023966cfb28f9bbf20e002808a4423ad038770d554f6e7"} Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.837698 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baf263e0d1ff2e1b85023966cfb28f9bbf20e002808a4423ad038770d554f6e7" Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.837419 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-57cqr" Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.991421 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-87d5d4c4c-cl4tc"] Oct 08 08:08:29 crc kubenswrapper[4958]: E1008 08:08:29.991925 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed71f6e-5973-44b5-ab2b-34be01d43eef" containerName="neutron-db-sync" Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.991969 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed71f6e-5973-44b5-ab2b-34be01d43eef" containerName="neutron-db-sync" Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.992258 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ed71f6e-5973-44b5-ab2b-34be01d43eef" containerName="neutron-db-sync" Oct 08 08:08:29 crc kubenswrapper[4958]: I1008 08:08:29.994028 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.004742 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-87d5d4c4c-cl4tc"] Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.059807 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9bdc48756-ghfb4"] Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.061165 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.069623 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.069802 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.069828 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-m9jjq" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.069981 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.071438 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9bdc48756-ghfb4"] Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.152681 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-config\") pod \"dnsmasq-dns-87d5d4c4c-cl4tc\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.152731 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjm85\" (UniqueName: \"kubernetes.io/projected/27d27214-48f5-454b-9d3f-dce07dea05bd-kube-api-access-cjm85\") pod \"dnsmasq-dns-87d5d4c4c-cl4tc\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.152800 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-dns-svc\") pod \"dnsmasq-dns-87d5d4c4c-cl4tc\" 
(UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.152831 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-ovsdbserver-nb\") pod \"dnsmasq-dns-87d5d4c4c-cl4tc\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.152984 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-ovsdbserver-sb\") pod \"dnsmasq-dns-87d5d4c4c-cl4tc\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.254233 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-httpd-config\") pod \"neutron-9bdc48756-ghfb4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.254307 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-config\") pod \"dnsmasq-dns-87d5d4c4c-cl4tc\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.254343 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjm85\" (UniqueName: \"kubernetes.io/projected/27d27214-48f5-454b-9d3f-dce07dea05bd-kube-api-access-cjm85\") pod \"dnsmasq-dns-87d5d4c4c-cl4tc\" (UID: 
\"27d27214-48f5-454b-9d3f-dce07dea05bd\") " pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.254385 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-ovndb-tls-certs\") pod \"neutron-9bdc48756-ghfb4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.254405 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-dns-svc\") pod \"dnsmasq-dns-87d5d4c4c-cl4tc\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.254421 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-config\") pod \"neutron-9bdc48756-ghfb4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.254452 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-ovsdbserver-nb\") pod \"dnsmasq-dns-87d5d4c4c-cl4tc\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.254478 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt62x\" (UniqueName: \"kubernetes.io/projected/7b120d24-1a2c-4c38-a2c4-ab6890064be4-kube-api-access-pt62x\") pod \"neutron-9bdc48756-ghfb4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " 
pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.254498 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-combined-ca-bundle\") pod \"neutron-9bdc48756-ghfb4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.254522 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-ovsdbserver-sb\") pod \"dnsmasq-dns-87d5d4c4c-cl4tc\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.255388 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-dns-svc\") pod \"dnsmasq-dns-87d5d4c4c-cl4tc\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.255595 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-ovsdbserver-nb\") pod \"dnsmasq-dns-87d5d4c4c-cl4tc\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.255648 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-ovsdbserver-sb\") pod \"dnsmasq-dns-87d5d4c4c-cl4tc\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 
08:08:30.255767 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-config\") pod \"dnsmasq-dns-87d5d4c4c-cl4tc\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.276714 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjm85\" (UniqueName: \"kubernetes.io/projected/27d27214-48f5-454b-9d3f-dce07dea05bd-kube-api-access-cjm85\") pod \"dnsmasq-dns-87d5d4c4c-cl4tc\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.313618 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.356064 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt62x\" (UniqueName: \"kubernetes.io/projected/7b120d24-1a2c-4c38-a2c4-ab6890064be4-kube-api-access-pt62x\") pod \"neutron-9bdc48756-ghfb4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.356113 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-combined-ca-bundle\") pod \"neutron-9bdc48756-ghfb4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.356183 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-httpd-config\") pod \"neutron-9bdc48756-ghfb4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") 
" pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.356245 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-ovndb-tls-certs\") pod \"neutron-9bdc48756-ghfb4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.356265 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-config\") pod \"neutron-9bdc48756-ghfb4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.360246 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-ovndb-tls-certs\") pod \"neutron-9bdc48756-ghfb4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.360922 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-httpd-config\") pod \"neutron-9bdc48756-ghfb4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.361862 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-combined-ca-bundle\") pod \"neutron-9bdc48756-ghfb4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.362239 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-config\") pod \"neutron-9bdc48756-ghfb4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.373556 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt62x\" (UniqueName: \"kubernetes.io/projected/7b120d24-1a2c-4c38-a2c4-ab6890064be4-kube-api-access-pt62x\") pod \"neutron-9bdc48756-ghfb4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.397454 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.752183 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-87d5d4c4c-cl4tc"] Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.852809 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" event={"ID":"27d27214-48f5-454b-9d3f-dce07dea05bd","Type":"ContainerStarted","Data":"0c88b833cebc043c743ccbd0673b6440bff7a3f04712dfc80aa7d8f84836e6bb"} Oct 08 08:08:30 crc kubenswrapper[4958]: I1008 08:08:30.989120 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9bdc48756-ghfb4"] Oct 08 08:08:31 crc kubenswrapper[4958]: W1008 08:08:31.039628 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b120d24_1a2c_4c38_a2c4_ab6890064be4.slice/crio-e85a64c096b1ff53872cc515793e67ea29f144325cbdfa3dd38e4f537f6b776c WatchSource:0}: Error finding container e85a64c096b1ff53872cc515793e67ea29f144325cbdfa3dd38e4f537f6b776c: Status 404 returned error can't find the container with id e85a64c096b1ff53872cc515793e67ea29f144325cbdfa3dd38e4f537f6b776c 
Oct 08 08:08:31 crc kubenswrapper[4958]: I1008 08:08:31.861252 4958 generic.go:334] "Generic (PLEG): container finished" podID="27d27214-48f5-454b-9d3f-dce07dea05bd" containerID="53886495ad0cb91781d085a39cf13c214c0bcf99c8b401bd8a8f147cc7b94df7" exitCode=0 Oct 08 08:08:31 crc kubenswrapper[4958]: I1008 08:08:31.861464 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" event={"ID":"27d27214-48f5-454b-9d3f-dce07dea05bd","Type":"ContainerDied","Data":"53886495ad0cb91781d085a39cf13c214c0bcf99c8b401bd8a8f147cc7b94df7"} Oct 08 08:08:31 crc kubenswrapper[4958]: I1008 08:08:31.866068 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9bdc48756-ghfb4" event={"ID":"7b120d24-1a2c-4c38-a2c4-ab6890064be4","Type":"ContainerStarted","Data":"4ef70dcb43b2c2306838de3ee77972b8a03b19e9aa117741e8289cfff4009702"} Oct 08 08:08:31 crc kubenswrapper[4958]: I1008 08:08:31.866215 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9bdc48756-ghfb4" event={"ID":"7b120d24-1a2c-4c38-a2c4-ab6890064be4","Type":"ContainerStarted","Data":"544899fcaff4f93ff0c430ba6ca655bcf6efd07eecf91d430e50c70c564d316e"} Oct 08 08:08:31 crc kubenswrapper[4958]: I1008 08:08:31.866286 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9bdc48756-ghfb4" event={"ID":"7b120d24-1a2c-4c38-a2c4-ab6890064be4","Type":"ContainerStarted","Data":"e85a64c096b1ff53872cc515793e67ea29f144325cbdfa3dd38e4f537f6b776c"} Oct 08 08:08:31 crc kubenswrapper[4958]: I1008 08:08:31.866532 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:08:31 crc kubenswrapper[4958]: I1008 08:08:31.901174 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-9bdc48756-ghfb4" podStartSLOduration=1.9011572399999999 podStartE2EDuration="1.90115724s" podCreationTimestamp="2025-10-08 08:08:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:08:31.896056902 +0000 UTC m=+5655.025749513" watchObservedRunningTime="2025-10-08 08:08:31.90115724 +0000 UTC m=+5655.030849841" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.478679 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-695f697977-q27xm"] Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.480663 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.483335 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.483694 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.518452 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-config\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.518827 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-public-tls-certs\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.518923 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-httpd-config\") pod 
\"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.519014 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g9dm\" (UniqueName: \"kubernetes.io/projected/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-kube-api-access-7g9dm\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.519101 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-combined-ca-bundle\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.519196 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-ovndb-tls-certs\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.519291 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-internal-tls-certs\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.543540 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-695f697977-q27xm"] Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.621141 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-config\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.621230 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-public-tls-certs\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.621265 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-httpd-config\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.621283 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g9dm\" (UniqueName: \"kubernetes.io/projected/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-kube-api-access-7g9dm\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.621312 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-combined-ca-bundle\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.621340 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-ovndb-tls-certs\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.621375 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-internal-tls-certs\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.626131 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-ovndb-tls-certs\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.626322 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-internal-tls-certs\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.626343 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-public-tls-certs\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.626213 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-httpd-config\") pod \"neutron-695f697977-q27xm\" (UID: 
\"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.627638 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-combined-ca-bundle\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.628182 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-config\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.637167 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g9dm\" (UniqueName: \"kubernetes.io/projected/8a5c8c02-dcde-4254-8285-2e88e7ba9e6b-kube-api-access-7g9dm\") pod \"neutron-695f697977-q27xm\" (UID: \"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b\") " pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.841033 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.877191 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" event={"ID":"27d27214-48f5-454b-9d3f-dce07dea05bd","Type":"ContainerStarted","Data":"f7cba77ec8fc2f7e1650a641dc53fa7f66191c99acd5c9a2b275382686fb379d"} Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.877786 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:32 crc kubenswrapper[4958]: I1008 08:08:32.894896 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" podStartSLOduration=3.894873758 podStartE2EDuration="3.894873758s" podCreationTimestamp="2025-10-08 08:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:08:32.891097195 +0000 UTC m=+5656.020789816" watchObservedRunningTime="2025-10-08 08:08:32.894873758 +0000 UTC m=+5656.024566379" Oct 08 08:08:33 crc kubenswrapper[4958]: I1008 08:08:33.378684 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-695f697977-q27xm"] Oct 08 08:08:33 crc kubenswrapper[4958]: I1008 08:08:33.888701 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-695f697977-q27xm" event={"ID":"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b","Type":"ContainerStarted","Data":"8c5e2c27e16fb47f02053be30509ccb6f9d4c715463cd9847b73d9ce6de2779e"} Oct 08 08:08:33 crc kubenswrapper[4958]: I1008 08:08:33.889120 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-695f697977-q27xm" event={"ID":"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b","Type":"ContainerStarted","Data":"99db622f224617e6ef6009abc1dd190c4680454e55ba8834b49303b5adb49698"} Oct 08 08:08:33 crc kubenswrapper[4958]: I1008 08:08:33.889139 4958 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-695f697977-q27xm" event={"ID":"8a5c8c02-dcde-4254-8285-2e88e7ba9e6b","Type":"ContainerStarted","Data":"fe3edb8dd094bb08cecefdde6f7364245f7c9aa051d83eaf91fb96b42bd76c07"} Oct 08 08:08:33 crc kubenswrapper[4958]: I1008 08:08:33.914409 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-695f697977-q27xm" podStartSLOduration=1.9143881139999999 podStartE2EDuration="1.914388114s" podCreationTimestamp="2025-10-08 08:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:08:33.909700667 +0000 UTC m=+5657.039393268" watchObservedRunningTime="2025-10-08 08:08:33.914388114 +0000 UTC m=+5657.044080715" Oct 08 08:08:34 crc kubenswrapper[4958]: I1008 08:08:34.899647 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-695f697977-q27xm" Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.316279 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.420509 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86656fcf55-b7clc"] Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.420855 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86656fcf55-b7clc" podUID="e1e2cbaf-2358-4004-a8d1-69843a18c2ec" containerName="dnsmasq-dns" containerID="cri-o://373656d0afcb581836e50525e4c65f22c67c40b9d2be8f98b9920e6c81f353b9" gracePeriod=10 Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.897424 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.952205 4958 generic.go:334] "Generic (PLEG): container finished" podID="e1e2cbaf-2358-4004-a8d1-69843a18c2ec" containerID="373656d0afcb581836e50525e4c65f22c67c40b9d2be8f98b9920e6c81f353b9" exitCode=0 Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.952245 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86656fcf55-b7clc" event={"ID":"e1e2cbaf-2358-4004-a8d1-69843a18c2ec","Type":"ContainerDied","Data":"373656d0afcb581836e50525e4c65f22c67c40b9d2be8f98b9920e6c81f353b9"} Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.952271 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86656fcf55-b7clc" event={"ID":"e1e2cbaf-2358-4004-a8d1-69843a18c2ec","Type":"ContainerDied","Data":"160442ef7ac1b0e089e74cd16500141f8c2267790fa3f2eccce54517773143d9"} Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.952278 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86656fcf55-b7clc" Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.952286 4958 scope.go:117] "RemoveContainer" containerID="373656d0afcb581836e50525e4c65f22c67c40b9d2be8f98b9920e6c81f353b9" Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.967973 4958 scope.go:117] "RemoveContainer" containerID="22cd048cfba0ccc551cd44215849490f5d23037d4322ad0549e03abc674e60da" Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.979687 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-ovsdbserver-nb\") pod \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.979757 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-config\") pod \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.979853 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-ovsdbserver-sb\") pod \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.979892 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z77nw\" (UniqueName: \"kubernetes.io/projected/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-kube-api-access-z77nw\") pod \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.979926 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-dns-svc\") pod \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\" (UID: \"e1e2cbaf-2358-4004-a8d1-69843a18c2ec\") " Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.985195 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-kube-api-access-z77nw" (OuterVolumeSpecName: "kube-api-access-z77nw") pod "e1e2cbaf-2358-4004-a8d1-69843a18c2ec" (UID: "e1e2cbaf-2358-4004-a8d1-69843a18c2ec"). InnerVolumeSpecName "kube-api-access-z77nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.986487 4958 scope.go:117] "RemoveContainer" containerID="373656d0afcb581836e50525e4c65f22c67c40b9d2be8f98b9920e6c81f353b9" Oct 08 08:08:40 crc kubenswrapper[4958]: E1008 08:08:40.986807 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"373656d0afcb581836e50525e4c65f22c67c40b9d2be8f98b9920e6c81f353b9\": container with ID starting with 373656d0afcb581836e50525e4c65f22c67c40b9d2be8f98b9920e6c81f353b9 not found: ID does not exist" containerID="373656d0afcb581836e50525e4c65f22c67c40b9d2be8f98b9920e6c81f353b9" Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.986840 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373656d0afcb581836e50525e4c65f22c67c40b9d2be8f98b9920e6c81f353b9"} err="failed to get container status \"373656d0afcb581836e50525e4c65f22c67c40b9d2be8f98b9920e6c81f353b9\": rpc error: code = NotFound desc = could not find container \"373656d0afcb581836e50525e4c65f22c67c40b9d2be8f98b9920e6c81f353b9\": container with ID starting with 373656d0afcb581836e50525e4c65f22c67c40b9d2be8f98b9920e6c81f353b9 not found: ID does not exist" Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.986862 4958 scope.go:117] "RemoveContainer" 
containerID="22cd048cfba0ccc551cd44215849490f5d23037d4322ad0549e03abc674e60da" Oct 08 08:08:40 crc kubenswrapper[4958]: E1008 08:08:40.987090 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22cd048cfba0ccc551cd44215849490f5d23037d4322ad0549e03abc674e60da\": container with ID starting with 22cd048cfba0ccc551cd44215849490f5d23037d4322ad0549e03abc674e60da not found: ID does not exist" containerID="22cd048cfba0ccc551cd44215849490f5d23037d4322ad0549e03abc674e60da" Oct 08 08:08:40 crc kubenswrapper[4958]: I1008 08:08:40.987129 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22cd048cfba0ccc551cd44215849490f5d23037d4322ad0549e03abc674e60da"} err="failed to get container status \"22cd048cfba0ccc551cd44215849490f5d23037d4322ad0549e03abc674e60da\": rpc error: code = NotFound desc = could not find container \"22cd048cfba0ccc551cd44215849490f5d23037d4322ad0549e03abc674e60da\": container with ID starting with 22cd048cfba0ccc551cd44215849490f5d23037d4322ad0549e03abc674e60da not found: ID does not exist" Oct 08 08:08:41 crc kubenswrapper[4958]: I1008 08:08:41.022220 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-config" (OuterVolumeSpecName: "config") pod "e1e2cbaf-2358-4004-a8d1-69843a18c2ec" (UID: "e1e2cbaf-2358-4004-a8d1-69843a18c2ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:08:41 crc kubenswrapper[4958]: I1008 08:08:41.025636 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1e2cbaf-2358-4004-a8d1-69843a18c2ec" (UID: "e1e2cbaf-2358-4004-a8d1-69843a18c2ec"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:08:41 crc kubenswrapper[4958]: I1008 08:08:41.025764 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1e2cbaf-2358-4004-a8d1-69843a18c2ec" (UID: "e1e2cbaf-2358-4004-a8d1-69843a18c2ec"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:08:41 crc kubenswrapper[4958]: I1008 08:08:41.027516 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1e2cbaf-2358-4004-a8d1-69843a18c2ec" (UID: "e1e2cbaf-2358-4004-a8d1-69843a18c2ec"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:08:41 crc kubenswrapper[4958]: I1008 08:08:41.087665 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 08:08:41 crc kubenswrapper[4958]: I1008 08:08:41.087705 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z77nw\" (UniqueName: \"kubernetes.io/projected/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-kube-api-access-z77nw\") on node \"crc\" DevicePath \"\"" Oct 08 08:08:41 crc kubenswrapper[4958]: I1008 08:08:41.087718 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 08:08:41 crc kubenswrapper[4958]: I1008 08:08:41.087728 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 
08:08:41 crc kubenswrapper[4958]: I1008 08:08:41.087739 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1e2cbaf-2358-4004-a8d1-69843a18c2ec-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:08:41 crc kubenswrapper[4958]: I1008 08:08:41.286676 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86656fcf55-b7clc"] Oct 08 08:08:41 crc kubenswrapper[4958]: I1008 08:08:41.293494 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86656fcf55-b7clc"] Oct 08 08:08:41 crc kubenswrapper[4958]: I1008 08:08:41.589688 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e2cbaf-2358-4004-a8d1-69843a18c2ec" path="/var/lib/kubelet/pods/e1e2cbaf-2358-4004-a8d1-69843a18c2ec/volumes" Oct 08 08:09:00 crc kubenswrapper[4958]: I1008 08:09:00.409168 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:09:02 crc kubenswrapper[4958]: I1008 08:09:02.868654 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-695f697977-q27xm" Oct 08 08:09:02 crc kubenswrapper[4958]: I1008 08:09:02.979224 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9bdc48756-ghfb4"] Oct 08 08:09:02 crc kubenswrapper[4958]: I1008 08:09:02.979493 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-9bdc48756-ghfb4" podUID="7b120d24-1a2c-4c38-a2c4-ab6890064be4" containerName="neutron-api" containerID="cri-o://544899fcaff4f93ff0c430ba6ca655bcf6efd07eecf91d430e50c70c564d316e" gracePeriod=30 Oct 08 08:09:02 crc kubenswrapper[4958]: I1008 08:09:02.979911 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-9bdc48756-ghfb4" podUID="7b120d24-1a2c-4c38-a2c4-ab6890064be4" containerName="neutron-httpd" 
containerID="cri-o://4ef70dcb43b2c2306838de3ee77972b8a03b19e9aa117741e8289cfff4009702" gracePeriod=30 Oct 08 08:09:04 crc kubenswrapper[4958]: I1008 08:09:04.211876 4958 generic.go:334] "Generic (PLEG): container finished" podID="7b120d24-1a2c-4c38-a2c4-ab6890064be4" containerID="4ef70dcb43b2c2306838de3ee77972b8a03b19e9aa117741e8289cfff4009702" exitCode=0 Oct 08 08:09:04 crc kubenswrapper[4958]: I1008 08:09:04.211972 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9bdc48756-ghfb4" event={"ID":"7b120d24-1a2c-4c38-a2c4-ab6890064be4","Type":"ContainerDied","Data":"4ef70dcb43b2c2306838de3ee77972b8a03b19e9aa117741e8289cfff4009702"} Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.246615 4958 generic.go:334] "Generic (PLEG): container finished" podID="7b120d24-1a2c-4c38-a2c4-ab6890064be4" containerID="544899fcaff4f93ff0c430ba6ca655bcf6efd07eecf91d430e50c70c564d316e" exitCode=0 Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.247016 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9bdc48756-ghfb4" event={"ID":"7b120d24-1a2c-4c38-a2c4-ab6890064be4","Type":"ContainerDied","Data":"544899fcaff4f93ff0c430ba6ca655bcf6efd07eecf91d430e50c70c564d316e"} Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.522560 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.591992 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-httpd-config\") pod \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.592418 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-config\") pod \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.592555 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt62x\" (UniqueName: \"kubernetes.io/projected/7b120d24-1a2c-4c38-a2c4-ab6890064be4-kube-api-access-pt62x\") pod \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.592585 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-ovndb-tls-certs\") pod \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.592626 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-combined-ca-bundle\") pod \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\" (UID: \"7b120d24-1a2c-4c38-a2c4-ab6890064be4\") " Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.606584 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7b120d24-1a2c-4c38-a2c4-ab6890064be4" (UID: "7b120d24-1a2c-4c38-a2c4-ab6890064be4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.624934 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b120d24-1a2c-4c38-a2c4-ab6890064be4-kube-api-access-pt62x" (OuterVolumeSpecName: "kube-api-access-pt62x") pod "7b120d24-1a2c-4c38-a2c4-ab6890064be4" (UID: "7b120d24-1a2c-4c38-a2c4-ab6890064be4"). InnerVolumeSpecName "kube-api-access-pt62x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.655301 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-config" (OuterVolumeSpecName: "config") pod "7b120d24-1a2c-4c38-a2c4-ab6890064be4" (UID: "7b120d24-1a2c-4c38-a2c4-ab6890064be4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.684338 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b120d24-1a2c-4c38-a2c4-ab6890064be4" (UID: "7b120d24-1a2c-4c38-a2c4-ab6890064be4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.695098 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt62x\" (UniqueName: \"kubernetes.io/projected/7b120d24-1a2c-4c38-a2c4-ab6890064be4-kube-api-access-pt62x\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.695124 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.695133 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.695141 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.702596 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7b120d24-1a2c-4c38-a2c4-ab6890064be4" (UID: "7b120d24-1a2c-4c38-a2c4-ab6890064be4"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:09:06 crc kubenswrapper[4958]: I1008 08:09:06.796764 4958 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b120d24-1a2c-4c38-a2c4-ab6890064be4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:07 crc kubenswrapper[4958]: I1008 08:09:07.292445 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9bdc48756-ghfb4" event={"ID":"7b120d24-1a2c-4c38-a2c4-ab6890064be4","Type":"ContainerDied","Data":"e85a64c096b1ff53872cc515793e67ea29f144325cbdfa3dd38e4f537f6b776c"} Oct 08 08:09:07 crc kubenswrapper[4958]: I1008 08:09:07.292515 4958 scope.go:117] "RemoveContainer" containerID="4ef70dcb43b2c2306838de3ee77972b8a03b19e9aa117741e8289cfff4009702" Oct 08 08:09:07 crc kubenswrapper[4958]: I1008 08:09:07.292530 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9bdc48756-ghfb4" Oct 08 08:09:07 crc kubenswrapper[4958]: I1008 08:09:07.331915 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9bdc48756-ghfb4"] Oct 08 08:09:07 crc kubenswrapper[4958]: I1008 08:09:07.333290 4958 scope.go:117] "RemoveContainer" containerID="544899fcaff4f93ff0c430ba6ca655bcf6efd07eecf91d430e50c70c564d316e" Oct 08 08:09:07 crc kubenswrapper[4958]: I1008 08:09:07.344934 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9bdc48756-ghfb4"] Oct 08 08:09:07 crc kubenswrapper[4958]: I1008 08:09:07.591789 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b120d24-1a2c-4c38-a2c4-ab6890064be4" path="/var/lib/kubelet/pods/7b120d24-1a2c-4c38-a2c4-ab6890064be4/volumes" Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.498861 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-rlb4v"] Oct 08 08:09:28 crc kubenswrapper[4958]: E1008 08:09:28.499874 4958 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7b120d24-1a2c-4c38-a2c4-ab6890064be4" containerName="neutron-api" Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.499894 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b120d24-1a2c-4c38-a2c4-ab6890064be4" containerName="neutron-api" Oct 08 08:09:28 crc kubenswrapper[4958]: E1008 08:09:28.499917 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b120d24-1a2c-4c38-a2c4-ab6890064be4" containerName="neutron-httpd" Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.499924 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b120d24-1a2c-4c38-a2c4-ab6890064be4" containerName="neutron-httpd" Oct 08 08:09:28 crc kubenswrapper[4958]: E1008 08:09:28.499940 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e2cbaf-2358-4004-a8d1-69843a18c2ec" containerName="init" Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.499970 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e2cbaf-2358-4004-a8d1-69843a18c2ec" containerName="init" Oct 08 08:09:28 crc kubenswrapper[4958]: E1008 08:09:28.499994 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e2cbaf-2358-4004-a8d1-69843a18c2ec" containerName="dnsmasq-dns" Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.500002 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e2cbaf-2358-4004-a8d1-69843a18c2ec" containerName="dnsmasq-dns" Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.500182 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b120d24-1a2c-4c38-a2c4-ab6890064be4" containerName="neutron-api" Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.500207 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e2cbaf-2358-4004-a8d1-69843a18c2ec" containerName="dnsmasq-dns" Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.500222 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7b120d24-1a2c-4c38-a2c4-ab6890064be4" containerName="neutron-httpd"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.500875 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.506078 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.506078 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.506380 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-kfq8j"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.506637 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.506770 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.514744 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rlb4v"]
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.539068 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-zz6vr"]
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.540162 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.580040 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-rlb4v"]
Oct 08 08:09:28 crc kubenswrapper[4958]: E1008 08:09:28.584940 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-47b4g ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-47b4g ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-rlb4v" podUID="521a9eb5-7575-4203-bd34-9e6091319703"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.594779 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zz6vr"]
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.637379 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f6bcf6c4c-jb9md"]
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.645291 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.658868 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-ring-data-devices\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.658929 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/521a9eb5-7575-4203-bd34-9e6091319703-ring-data-devices\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.659047 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47b4g\" (UniqueName: \"kubernetes.io/projected/521a9eb5-7575-4203-bd34-9e6091319703-kube-api-access-47b4g\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.659090 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-dispersionconf\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.659109 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqldb\" (UniqueName: \"kubernetes.io/projected/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-kube-api-access-zqldb\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.659144 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/521a9eb5-7575-4203-bd34-9e6091319703-scripts\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.659234 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-etc-swift\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.659253 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-dispersionconf\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.659287 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/521a9eb5-7575-4203-bd34-9e6091319703-etc-swift\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.659791 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-combined-ca-bundle\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.659841 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-scripts\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.659975 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-swiftconf\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.660064 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-swiftconf\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.660091 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-combined-ca-bundle\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.661223 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6bcf6c4c-jb9md"]
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.761876 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bcf6c4c-jb9md\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.761931 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-etc-swift\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.761987 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-dispersionconf\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762017 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/521a9eb5-7575-4203-bd34-9e6091319703-etc-swift\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762059 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-combined-ca-bundle\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762080 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-scripts\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762099 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-swiftconf\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762127 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-swiftconf\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762152 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-combined-ca-bundle\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762183 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnvqf\" (UniqueName: \"kubernetes.io/projected/55533c86-b6a5-4edb-a85f-c370730e8c4c-kube-api-access-bnvqf\") pod \"dnsmasq-dns-f6bcf6c4c-jb9md\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762233 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-ring-data-devices\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762259 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/521a9eb5-7575-4203-bd34-9e6091319703-ring-data-devices\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762284 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-config\") pod \"dnsmasq-dns-f6bcf6c4c-jb9md\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762308 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47b4g\" (UniqueName: \"kubernetes.io/projected/521a9eb5-7575-4203-bd34-9e6091319703-kube-api-access-47b4g\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762340 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-dns-svc\") pod \"dnsmasq-dns-f6bcf6c4c-jb9md\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762369 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bcf6c4c-jb9md\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762394 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-dispersionconf\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762420 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqldb\" (UniqueName: \"kubernetes.io/projected/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-kube-api-access-zqldb\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762426 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-etc-swift\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762462 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/521a9eb5-7575-4203-bd34-9e6091319703-scripts\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762525 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/521a9eb5-7575-4203-bd34-9e6091319703-etc-swift\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.762813 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-scripts\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.763274 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-ring-data-devices\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.763416 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/521a9eb5-7575-4203-bd34-9e6091319703-ring-data-devices\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.763540 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/521a9eb5-7575-4203-bd34-9e6091319703-scripts\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.769212 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-combined-ca-bundle\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.769525 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-swiftconf\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.769539 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-dispersionconf\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.772578 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-swiftconf\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.772579 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-dispersionconf\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.787859 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47b4g\" (UniqueName: \"kubernetes.io/projected/521a9eb5-7575-4203-bd34-9e6091319703-kube-api-access-47b4g\") pod \"swift-ring-rebalance-rlb4v\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") " pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.793034 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-combined-ca-bundle\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.800345 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqldb\" (UniqueName: \"kubernetes.io/projected/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-kube-api-access-zqldb\") pod \"swift-ring-rebalance-zz6vr\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.864335 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-config\") pod \"dnsmasq-dns-f6bcf6c4c-jb9md\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.864394 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-dns-svc\") pod \"dnsmasq-dns-f6bcf6c4c-jb9md\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.864411 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bcf6c4c-jb9md\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.864466 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bcf6c4c-jb9md\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.864533 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnvqf\" (UniqueName: \"kubernetes.io/projected/55533c86-b6a5-4edb-a85f-c370730e8c4c-kube-api-access-bnvqf\") pod \"dnsmasq-dns-f6bcf6c4c-jb9md\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.865425 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-ovsdbserver-nb\") pod \"dnsmasq-dns-f6bcf6c4c-jb9md\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.865475 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-config\") pod \"dnsmasq-dns-f6bcf6c4c-jb9md\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.865699 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-ovsdbserver-sb\") pod \"dnsmasq-dns-f6bcf6c4c-jb9md\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.866038 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-dns-svc\") pod \"dnsmasq-dns-f6bcf6c4c-jb9md\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.888690 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnvqf\" (UniqueName: \"kubernetes.io/projected/55533c86-b6a5-4edb-a85f-c370730e8c4c-kube-api-access-bnvqf\") pod \"dnsmasq-dns-f6bcf6c4c-jb9md\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.893299 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zz6vr"
Oct 08 08:09:28 crc kubenswrapper[4958]: I1008 08:09:28.965647 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md"
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.379523 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zz6vr"]
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.476518 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f6bcf6c4c-jb9md"]
Oct 08 08:09:29 crc kubenswrapper[4958]: W1008 08:09:29.481025 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55533c86_b6a5_4edb_a85f_c370730e8c4c.slice/crio-42aa5bcc7d7e669b7a22c2b11e774e16394dfbffcc55708c78e0d33d88c57d91 WatchSource:0}: Error finding container 42aa5bcc7d7e669b7a22c2b11e774e16394dfbffcc55708c78e0d33d88c57d91: Status 404 returned error can't find the container with id 42aa5bcc7d7e669b7a22c2b11e774e16394dfbffcc55708c78e0d33d88c57d91
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.556890 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md" event={"ID":"55533c86-b6a5-4edb-a85f-c370730e8c4c","Type":"ContainerStarted","Data":"42aa5bcc7d7e669b7a22c2b11e774e16394dfbffcc55708c78e0d33d88c57d91"}
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.560413 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zz6vr" event={"ID":"2f483059-1c2c-445a-ad16-0ba2c7c6abc4","Type":"ContainerStarted","Data":"bf91af2392b101c6c821bc90752ed2fadfaf7354aae87833a753a3200ac2a3d4"}
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.560438 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.577425 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.675642 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/521a9eb5-7575-4203-bd34-9e6091319703-ring-data-devices\") pod \"521a9eb5-7575-4203-bd34-9e6091319703\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") "
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.675699 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/521a9eb5-7575-4203-bd34-9e6091319703-etc-swift\") pod \"521a9eb5-7575-4203-bd34-9e6091319703\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") "
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.675776 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/521a9eb5-7575-4203-bd34-9e6091319703-scripts\") pod \"521a9eb5-7575-4203-bd34-9e6091319703\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") "
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.675834 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-combined-ca-bundle\") pod \"521a9eb5-7575-4203-bd34-9e6091319703\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") "
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.675862 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-dispersionconf\") pod \"521a9eb5-7575-4203-bd34-9e6091319703\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") "
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.676028 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47b4g\" (UniqueName: \"kubernetes.io/projected/521a9eb5-7575-4203-bd34-9e6091319703-kube-api-access-47b4g\") pod \"521a9eb5-7575-4203-bd34-9e6091319703\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") "
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.676112 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-swiftconf\") pod \"521a9eb5-7575-4203-bd34-9e6091319703\" (UID: \"521a9eb5-7575-4203-bd34-9e6091319703\") "
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.676200 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/521a9eb5-7575-4203-bd34-9e6091319703-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "521a9eb5-7575-4203-bd34-9e6091319703" (UID: "521a9eb5-7575-4203-bd34-9e6091319703"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.676429 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/521a9eb5-7575-4203-bd34-9e6091319703-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "521a9eb5-7575-4203-bd34-9e6091319703" (UID: "521a9eb5-7575-4203-bd34-9e6091319703"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.676766 4958 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/521a9eb5-7575-4203-bd34-9e6091319703-etc-swift\") on node \"crc\" DevicePath \"\""
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.676794 4958 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/521a9eb5-7575-4203-bd34-9e6091319703-ring-data-devices\") on node \"crc\" DevicePath \"\""
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.676933 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/521a9eb5-7575-4203-bd34-9e6091319703-scripts" (OuterVolumeSpecName: "scripts") pod "521a9eb5-7575-4203-bd34-9e6091319703" (UID: "521a9eb5-7575-4203-bd34-9e6091319703"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.680980 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "521a9eb5-7575-4203-bd34-9e6091319703" (UID: "521a9eb5-7575-4203-bd34-9e6091319703"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.682152 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521a9eb5-7575-4203-bd34-9e6091319703-kube-api-access-47b4g" (OuterVolumeSpecName: "kube-api-access-47b4g") pod "521a9eb5-7575-4203-bd34-9e6091319703" (UID: "521a9eb5-7575-4203-bd34-9e6091319703"). InnerVolumeSpecName "kube-api-access-47b4g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.683295 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "521a9eb5-7575-4203-bd34-9e6091319703" (UID: "521a9eb5-7575-4203-bd34-9e6091319703"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.685322 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "521a9eb5-7575-4203-bd34-9e6091319703" (UID: "521a9eb5-7575-4203-bd34-9e6091319703"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.778498 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47b4g\" (UniqueName: \"kubernetes.io/projected/521a9eb5-7575-4203-bd34-9e6091319703-kube-api-access-47b4g\") on node \"crc\" DevicePath \"\""
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.779104 4958 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-swiftconf\") on node \"crc\" DevicePath \"\""
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.779173 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/521a9eb5-7575-4203-bd34-9e6091319703-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.779235 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 08:09:29 crc kubenswrapper[4958]: I1008 08:09:29.779299 4958 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/521a9eb5-7575-4203-bd34-9e6091319703-dispersionconf\") on node \"crc\" DevicePath \"\""
Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.481349 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-57f8789489-59kr8"]
Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.483767 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-57f8789489-59kr8"
Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.486224 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.501592 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-57f8789489-59kr8"]
Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.573375 4958 generic.go:334] "Generic (PLEG): container finished" podID="55533c86-b6a5-4edb-a85f-c370730e8c4c" containerID="880f10414aad22e70e942a128a84957d78afa2889a6719d73010a4edbca2f17b" exitCode=0
Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.573505 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md" event={"ID":"55533c86-b6a5-4edb-a85f-c370730e8c4c","Type":"ContainerDied","Data":"880f10414aad22e70e942a128a84957d78afa2889a6719d73010a4edbca2f17b"}
Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.578849 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rlb4v"
Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.579214 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zz6vr" event={"ID":"2f483059-1c2c-445a-ad16-0ba2c7c6abc4","Type":"ContainerStarted","Data":"d771a19566bedfcecbb899d3256f2195f14e99b00049015c9a8ee712d64cb723"}
Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.591459 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dc2c\" (UniqueName: \"kubernetes.io/projected/70b4ba1e-a38e-421a-aa06-bf03b692df11-kube-api-access-2dc2c\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8"
Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.591729 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70b4ba1e-a38e-421a-aa06-bf03b692df11-combined-ca-bundle\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8"
Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.591841 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b4ba1e-a38e-421a-aa06-bf03b692df11-log-httpd\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8"
Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.592089 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b4ba1e-a38e-421a-aa06-bf03b692df11-run-httpd\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") "
pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.592190 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70b4ba1e-a38e-421a-aa06-bf03b692df11-etc-swift\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.592281 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70b4ba1e-a38e-421a-aa06-bf03b692df11-config-data\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.626238 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-zz6vr" podStartSLOduration=2.626222624 podStartE2EDuration="2.626222624s" podCreationTimestamp="2025-10-08 08:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:09:30.624194199 +0000 UTC m=+5713.753886790" watchObservedRunningTime="2025-10-08 08:09:30.626222624 +0000 UTC m=+5713.755915225" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.693969 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70b4ba1e-a38e-421a-aa06-bf03b692df11-etc-swift\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.694091 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/70b4ba1e-a38e-421a-aa06-bf03b692df11-config-data\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.694214 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dc2c\" (UniqueName: \"kubernetes.io/projected/70b4ba1e-a38e-421a-aa06-bf03b692df11-kube-api-access-2dc2c\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.694355 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70b4ba1e-a38e-421a-aa06-bf03b692df11-combined-ca-bundle\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.694401 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b4ba1e-a38e-421a-aa06-bf03b692df11-log-httpd\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.694541 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b4ba1e-a38e-421a-aa06-bf03b692df11-run-httpd\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.694992 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/70b4ba1e-a38e-421a-aa06-bf03b692df11-run-httpd\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.696744 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b4ba1e-a38e-421a-aa06-bf03b692df11-log-httpd\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.699395 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70b4ba1e-a38e-421a-aa06-bf03b692df11-combined-ca-bundle\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.699729 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70b4ba1e-a38e-421a-aa06-bf03b692df11-config-data\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.700216 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70b4ba1e-a38e-421a-aa06-bf03b692df11-etc-swift\") pod \"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.715363 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dc2c\" (UniqueName: \"kubernetes.io/projected/70b4ba1e-a38e-421a-aa06-bf03b692df11-kube-api-access-2dc2c\") pod 
\"swift-proxy-57f8789489-59kr8\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.785224 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-rlb4v"] Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.790821 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-rlb4v"] Oct 08 08:09:30 crc kubenswrapper[4958]: I1008 08:09:30.815351 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:31 crc kubenswrapper[4958]: I1008 08:09:31.339525 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-57f8789489-59kr8"] Oct 08 08:09:31 crc kubenswrapper[4958]: W1008 08:09:31.342826 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70b4ba1e_a38e_421a_aa06_bf03b692df11.slice/crio-3d91b93f7badc47346fe48a4bb487c4638bbb4e373c935eaf2af8071884afd3e WatchSource:0}: Error finding container 3d91b93f7badc47346fe48a4bb487c4638bbb4e373c935eaf2af8071884afd3e: Status 404 returned error can't find the container with id 3d91b93f7badc47346fe48a4bb487c4638bbb4e373c935eaf2af8071884afd3e Oct 08 08:09:31 crc kubenswrapper[4958]: I1008 08:09:31.587327 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="521a9eb5-7575-4203-bd34-9e6091319703" path="/var/lib/kubelet/pods/521a9eb5-7575-4203-bd34-9e6091319703/volumes" Oct 08 08:09:31 crc kubenswrapper[4958]: I1008 08:09:31.592223 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md" event={"ID":"55533c86-b6a5-4edb-a85f-c370730e8c4c","Type":"ContainerStarted","Data":"2ddded6f892dd3aa1824daedf3993f373a052152864a0830ad2deadbab944233"} Oct 08 08:09:31 crc kubenswrapper[4958]: I1008 08:09:31.592361 4958 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md" Oct 08 08:09:31 crc kubenswrapper[4958]: I1008 08:09:31.593967 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57f8789489-59kr8" event={"ID":"70b4ba1e-a38e-421a-aa06-bf03b692df11","Type":"ContainerStarted","Data":"5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761"} Oct 08 08:09:31 crc kubenswrapper[4958]: I1008 08:09:31.593996 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57f8789489-59kr8" event={"ID":"70b4ba1e-a38e-421a-aa06-bf03b692df11","Type":"ContainerStarted","Data":"3d91b93f7badc47346fe48a4bb487c4638bbb4e373c935eaf2af8071884afd3e"} Oct 08 08:09:31 crc kubenswrapper[4958]: I1008 08:09:31.616607 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md" podStartSLOduration=3.61658644 podStartE2EDuration="3.61658644s" podCreationTimestamp="2025-10-08 08:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:09:31.611197424 +0000 UTC m=+5714.740890045" watchObservedRunningTime="2025-10-08 08:09:31.61658644 +0000 UTC m=+5714.746279042" Oct 08 08:09:32 crc kubenswrapper[4958]: I1008 08:09:32.603914 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57f8789489-59kr8" event={"ID":"70b4ba1e-a38e-421a-aa06-bf03b692df11","Type":"ContainerStarted","Data":"eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe"} Oct 08 08:09:32 crc kubenswrapper[4958]: I1008 08:09:32.633637 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-57f8789489-59kr8" podStartSLOduration=2.633618769 podStartE2EDuration="2.633618769s" podCreationTimestamp="2025-10-08 08:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-08 08:09:32.627549494 +0000 UTC m=+5715.757242105" watchObservedRunningTime="2025-10-08 08:09:32.633618769 +0000 UTC m=+5715.763311370" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.122503 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6cc64ff97c-4bpv7"] Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.124652 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.126861 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.135416 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.146226 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6cc64ff97c-4bpv7"] Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.266339 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-internal-tls-certs\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.266570 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-etc-swift\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.266707 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5g45t\" (UniqueName: \"kubernetes.io/projected/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-kube-api-access-5g45t\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.266788 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-log-httpd\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.266888 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-public-tls-certs\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.266988 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-config-data\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.267140 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-combined-ca-bundle\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.267210 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-run-httpd\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.368653 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-public-tls-certs\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.368695 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-config-data\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.368793 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-combined-ca-bundle\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.368810 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-run-httpd\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.368832 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-internal-tls-certs\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.368853 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-etc-swift\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.368890 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g45t\" (UniqueName: \"kubernetes.io/projected/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-kube-api-access-5g45t\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.368910 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-log-httpd\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.369414 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-log-httpd\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.372446 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-run-httpd\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.376639 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-public-tls-certs\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.377335 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-combined-ca-bundle\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.377852 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-etc-swift\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.386700 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-internal-tls-certs\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.387579 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-config-data\") pod 
\"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.400756 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g45t\" (UniqueName: \"kubernetes.io/projected/1522b4e2-e26c-43ce-ab2c-eb31043b4da7-kube-api-access-5g45t\") pod \"swift-proxy-6cc64ff97c-4bpv7\" (UID: \"1522b4e2-e26c-43ce-ab2c-eb31043b4da7\") " pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.456990 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.616612 4958 generic.go:334] "Generic (PLEG): container finished" podID="2f483059-1c2c-445a-ad16-0ba2c7c6abc4" containerID="d771a19566bedfcecbb899d3256f2195f14e99b00049015c9a8ee712d64cb723" exitCode=0 Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.616699 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zz6vr" event={"ID":"2f483059-1c2c-445a-ad16-0ba2c7c6abc4","Type":"ContainerDied","Data":"d771a19566bedfcecbb899d3256f2195f14e99b00049015c9a8ee712d64cb723"} Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.617161 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:33 crc kubenswrapper[4958]: I1008 08:09:33.617199 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:34 crc kubenswrapper[4958]: I1008 08:09:34.008590 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6cc64ff97c-4bpv7"] Oct 08 08:09:34 crc kubenswrapper[4958]: W1008 08:09:34.015188 4958 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1522b4e2_e26c_43ce_ab2c_eb31043b4da7.slice/crio-5fea8922a909ed68dd98217c2203171ec7e0c63244c50fc32996dd904659987a WatchSource:0}: Error finding container 5fea8922a909ed68dd98217c2203171ec7e0c63244c50fc32996dd904659987a: Status 404 returned error can't find the container with id 5fea8922a909ed68dd98217c2203171ec7e0c63244c50fc32996dd904659987a Oct 08 08:09:34 crc kubenswrapper[4958]: I1008 08:09:34.631997 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6cc64ff97c-4bpv7" event={"ID":"1522b4e2-e26c-43ce-ab2c-eb31043b4da7","Type":"ContainerStarted","Data":"c28fe260f6a404f52bb8f96f789dc8b26f9925fca8bd7b444065f396bc7df667"} Oct 08 08:09:34 crc kubenswrapper[4958]: I1008 08:09:34.632387 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6cc64ff97c-4bpv7" event={"ID":"1522b4e2-e26c-43ce-ab2c-eb31043b4da7","Type":"ContainerStarted","Data":"6e09064e87d10d34e555d93d4086027eb7618fccb9a8959786b541c3de574684"} Oct 08 08:09:34 crc kubenswrapper[4958]: I1008 08:09:34.632401 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6cc64ff97c-4bpv7" event={"ID":"1522b4e2-e26c-43ce-ab2c-eb31043b4da7","Type":"ContainerStarted","Data":"5fea8922a909ed68dd98217c2203171ec7e0c63244c50fc32996dd904659987a"} Oct 08 08:09:34 crc kubenswrapper[4958]: I1008 08:09:34.665099 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6cc64ff97c-4bpv7" podStartSLOduration=1.665075307 podStartE2EDuration="1.665075307s" podCreationTimestamp="2025-10-08 08:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:09:34.663775572 +0000 UTC m=+5717.793468183" watchObservedRunningTime="2025-10-08 08:09:34.665075307 +0000 UTC m=+5717.794767918" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.009643 4958 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zz6vr" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.100442 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-scripts\") pod \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.100605 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqldb\" (UniqueName: \"kubernetes.io/projected/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-kube-api-access-zqldb\") pod \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.100659 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-swiftconf\") pod \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.100697 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-dispersionconf\") pod \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.100821 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-ring-data-devices\") pod \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.100873 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-etc-swift\") pod \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.100898 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-combined-ca-bundle\") pod \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\" (UID: \"2f483059-1c2c-445a-ad16-0ba2c7c6abc4\") " Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.101884 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2f483059-1c2c-445a-ad16-0ba2c7c6abc4" (UID: "2f483059-1c2c-445a-ad16-0ba2c7c6abc4"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.102098 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2f483059-1c2c-445a-ad16-0ba2c7c6abc4" (UID: "2f483059-1c2c-445a-ad16-0ba2c7c6abc4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.118290 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2f483059-1c2c-445a-ad16-0ba2c7c6abc4" (UID: "2f483059-1c2c-445a-ad16-0ba2c7c6abc4"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.118490 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-kube-api-access-zqldb" (OuterVolumeSpecName: "kube-api-access-zqldb") pod "2f483059-1c2c-445a-ad16-0ba2c7c6abc4" (UID: "2f483059-1c2c-445a-ad16-0ba2c7c6abc4"). InnerVolumeSpecName "kube-api-access-zqldb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.129580 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-scripts" (OuterVolumeSpecName: "scripts") pod "2f483059-1c2c-445a-ad16-0ba2c7c6abc4" (UID: "2f483059-1c2c-445a-ad16-0ba2c7c6abc4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.140486 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2f483059-1c2c-445a-ad16-0ba2c7c6abc4" (UID: "2f483059-1c2c-445a-ad16-0ba2c7c6abc4"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.144776 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f483059-1c2c-445a-ad16-0ba2c7c6abc4" (UID: "2f483059-1c2c-445a-ad16-0ba2c7c6abc4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.202838 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.202875 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqldb\" (UniqueName: \"kubernetes.io/projected/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-kube-api-access-zqldb\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.202891 4958 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.202905 4958 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.202918 4958 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.202930 4958 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.202943 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f483059-1c2c-445a-ad16-0ba2c7c6abc4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.641247 4958 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zz6vr" event={"ID":"2f483059-1c2c-445a-ad16-0ba2c7c6abc4","Type":"ContainerDied","Data":"bf91af2392b101c6c821bc90752ed2fadfaf7354aae87833a753a3200ac2a3d4"} Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.641285 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zz6vr" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.641308 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf91af2392b101c6c821bc90752ed2fadfaf7354aae87833a753a3200ac2a3d4" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.641425 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:35 crc kubenswrapper[4958]: I1008 08:09:35.641452 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:38 crc kubenswrapper[4958]: I1008 08:09:38.890246 4958 scope.go:117] "RemoveContainer" containerID="96e8ef716080c678f2d4de0df920363c6271e43e7c96c7ed6563e5966bc2cd68" Oct 08 08:09:38 crc kubenswrapper[4958]: I1008 08:09:38.924335 4958 scope.go:117] "RemoveContainer" containerID="7b4385c1c5360b02cf9e4426bb24a407bc7efebb04f0c8630b2b0b076bb3c14c" Oct 08 08:09:38 crc kubenswrapper[4958]: I1008 08:09:38.967987 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.062755 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-87d5d4c4c-cl4tc"] Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.063278 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" podUID="27d27214-48f5-454b-9d3f-dce07dea05bd" containerName="dnsmasq-dns" 
containerID="cri-o://f7cba77ec8fc2f7e1650a641dc53fa7f66191c99acd5c9a2b275382686fb379d" gracePeriod=10 Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.568977 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.688396 4958 generic.go:334] "Generic (PLEG): container finished" podID="27d27214-48f5-454b-9d3f-dce07dea05bd" containerID="f7cba77ec8fc2f7e1650a641dc53fa7f66191c99acd5c9a2b275382686fb379d" exitCode=0 Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.688441 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" event={"ID":"27d27214-48f5-454b-9d3f-dce07dea05bd","Type":"ContainerDied","Data":"f7cba77ec8fc2f7e1650a641dc53fa7f66191c99acd5c9a2b275382686fb379d"} Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.688469 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" event={"ID":"27d27214-48f5-454b-9d3f-dce07dea05bd","Type":"ContainerDied","Data":"0c88b833cebc043c743ccbd0673b6440bff7a3f04712dfc80aa7d8f84836e6bb"} Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.688480 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-87d5d4c4c-cl4tc" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.688491 4958 scope.go:117] "RemoveContainer" containerID="f7cba77ec8fc2f7e1650a641dc53fa7f66191c99acd5c9a2b275382686fb379d" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.698255 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-ovsdbserver-nb\") pod \"27d27214-48f5-454b-9d3f-dce07dea05bd\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.698423 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjm85\" (UniqueName: \"kubernetes.io/projected/27d27214-48f5-454b-9d3f-dce07dea05bd-kube-api-access-cjm85\") pod \"27d27214-48f5-454b-9d3f-dce07dea05bd\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.698456 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-dns-svc\") pod \"27d27214-48f5-454b-9d3f-dce07dea05bd\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.698521 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-config\") pod \"27d27214-48f5-454b-9d3f-dce07dea05bd\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") " Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.698614 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-ovsdbserver-sb\") pod \"27d27214-48f5-454b-9d3f-dce07dea05bd\" (UID: \"27d27214-48f5-454b-9d3f-dce07dea05bd\") 
" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.706377 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27d27214-48f5-454b-9d3f-dce07dea05bd-kube-api-access-cjm85" (OuterVolumeSpecName: "kube-api-access-cjm85") pod "27d27214-48f5-454b-9d3f-dce07dea05bd" (UID: "27d27214-48f5-454b-9d3f-dce07dea05bd"). InnerVolumeSpecName "kube-api-access-cjm85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.709886 4958 scope.go:117] "RemoveContainer" containerID="53886495ad0cb91781d085a39cf13c214c0bcf99c8b401bd8a8f147cc7b94df7" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.751258 4958 scope.go:117] "RemoveContainer" containerID="f7cba77ec8fc2f7e1650a641dc53fa7f66191c99acd5c9a2b275382686fb379d" Oct 08 08:09:39 crc kubenswrapper[4958]: E1008 08:09:39.751975 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7cba77ec8fc2f7e1650a641dc53fa7f66191c99acd5c9a2b275382686fb379d\": container with ID starting with f7cba77ec8fc2f7e1650a641dc53fa7f66191c99acd5c9a2b275382686fb379d not found: ID does not exist" containerID="f7cba77ec8fc2f7e1650a641dc53fa7f66191c99acd5c9a2b275382686fb379d" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.752034 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7cba77ec8fc2f7e1650a641dc53fa7f66191c99acd5c9a2b275382686fb379d"} err="failed to get container status \"f7cba77ec8fc2f7e1650a641dc53fa7f66191c99acd5c9a2b275382686fb379d\": rpc error: code = NotFound desc = could not find container \"f7cba77ec8fc2f7e1650a641dc53fa7f66191c99acd5c9a2b275382686fb379d\": container with ID starting with f7cba77ec8fc2f7e1650a641dc53fa7f66191c99acd5c9a2b275382686fb379d not found: ID does not exist" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.752295 4958 scope.go:117] "RemoveContainer" 
containerID="53886495ad0cb91781d085a39cf13c214c0bcf99c8b401bd8a8f147cc7b94df7" Oct 08 08:09:39 crc kubenswrapper[4958]: E1008 08:09:39.752614 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53886495ad0cb91781d085a39cf13c214c0bcf99c8b401bd8a8f147cc7b94df7\": container with ID starting with 53886495ad0cb91781d085a39cf13c214c0bcf99c8b401bd8a8f147cc7b94df7 not found: ID does not exist" containerID="53886495ad0cb91781d085a39cf13c214c0bcf99c8b401bd8a8f147cc7b94df7" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.752708 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53886495ad0cb91781d085a39cf13c214c0bcf99c8b401bd8a8f147cc7b94df7"} err="failed to get container status \"53886495ad0cb91781d085a39cf13c214c0bcf99c8b401bd8a8f147cc7b94df7\": rpc error: code = NotFound desc = could not find container \"53886495ad0cb91781d085a39cf13c214c0bcf99c8b401bd8a8f147cc7b94df7\": container with ID starting with 53886495ad0cb91781d085a39cf13c214c0bcf99c8b401bd8a8f147cc7b94df7 not found: ID does not exist" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.761443 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "27d27214-48f5-454b-9d3f-dce07dea05bd" (UID: "27d27214-48f5-454b-9d3f-dce07dea05bd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.779316 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-config" (OuterVolumeSpecName: "config") pod "27d27214-48f5-454b-9d3f-dce07dea05bd" (UID: "27d27214-48f5-454b-9d3f-dce07dea05bd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.780959 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "27d27214-48f5-454b-9d3f-dce07dea05bd" (UID: "27d27214-48f5-454b-9d3f-dce07dea05bd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.785747 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "27d27214-48f5-454b-9d3f-dce07dea05bd" (UID: "27d27214-48f5-454b-9d3f-dce07dea05bd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.800806 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.800838 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjm85\" (UniqueName: \"kubernetes.io/projected/27d27214-48f5-454b-9d3f-dce07dea05bd-kube-api-access-cjm85\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.800851 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.800860 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-config\") on node \"crc\" DevicePath \"\"" Oct 
08 08:09:39 crc kubenswrapper[4958]: I1008 08:09:39.800870 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d27214-48f5-454b-9d3f-dce07dea05bd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:40 crc kubenswrapper[4958]: I1008 08:09:40.025999 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-87d5d4c4c-cl4tc"] Oct 08 08:09:40 crc kubenswrapper[4958]: I1008 08:09:40.034676 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-87d5d4c4c-cl4tc"] Oct 08 08:09:40 crc kubenswrapper[4958]: I1008 08:09:40.817683 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:40 crc kubenswrapper[4958]: I1008 08:09:40.822396 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.224205 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-65h2d"] Oct 08 08:09:41 crc kubenswrapper[4958]: E1008 08:09:41.224635 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27d27214-48f5-454b-9d3f-dce07dea05bd" containerName="dnsmasq-dns" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.224654 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d27214-48f5-454b-9d3f-dce07dea05bd" containerName="dnsmasq-dns" Oct 08 08:09:41 crc kubenswrapper[4958]: E1008 08:09:41.224678 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27d27214-48f5-454b-9d3f-dce07dea05bd" containerName="init" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.224685 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d27214-48f5-454b-9d3f-dce07dea05bd" containerName="init" Oct 08 08:09:41 crc kubenswrapper[4958]: E1008 08:09:41.224716 4958 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2f483059-1c2c-445a-ad16-0ba2c7c6abc4" containerName="swift-ring-rebalance" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.224724 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f483059-1c2c-445a-ad16-0ba2c7c6abc4" containerName="swift-ring-rebalance" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.224917 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f483059-1c2c-445a-ad16-0ba2c7c6abc4" containerName="swift-ring-rebalance" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.224935 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="27d27214-48f5-454b-9d3f-dce07dea05bd" containerName="dnsmasq-dns" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.227108 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.241546 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-65h2d"] Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.329027 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-utilities\") pod \"redhat-operators-65h2d\" (UID: \"58afe4a3-4576-41ab-b798-62f8ebd2cf5b\") " pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.329532 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-catalog-content\") pod \"redhat-operators-65h2d\" (UID: \"58afe4a3-4576-41ab-b798-62f8ebd2cf5b\") " pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.329576 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmd2r\" (UniqueName: \"kubernetes.io/projected/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-kube-api-access-mmd2r\") pod \"redhat-operators-65h2d\" (UID: \"58afe4a3-4576-41ab-b798-62f8ebd2cf5b\") " pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.431062 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-catalog-content\") pod \"redhat-operators-65h2d\" (UID: \"58afe4a3-4576-41ab-b798-62f8ebd2cf5b\") " pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.431111 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmd2r\" (UniqueName: \"kubernetes.io/projected/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-kube-api-access-mmd2r\") pod \"redhat-operators-65h2d\" (UID: \"58afe4a3-4576-41ab-b798-62f8ebd2cf5b\") " pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.431169 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-utilities\") pod \"redhat-operators-65h2d\" (UID: \"58afe4a3-4576-41ab-b798-62f8ebd2cf5b\") " pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.431635 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-utilities\") pod \"redhat-operators-65h2d\" (UID: \"58afe4a3-4576-41ab-b798-62f8ebd2cf5b\") " pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.431791 4958 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-catalog-content\") pod \"redhat-operators-65h2d\" (UID: \"58afe4a3-4576-41ab-b798-62f8ebd2cf5b\") " pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.450045 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmd2r\" (UniqueName: \"kubernetes.io/projected/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-kube-api-access-mmd2r\") pod \"redhat-operators-65h2d\" (UID: \"58afe4a3-4576-41ab-b798-62f8ebd2cf5b\") " pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.552546 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.589374 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27d27214-48f5-454b-9d3f-dce07dea05bd" path="/var/lib/kubelet/pods/27d27214-48f5-454b-9d3f-dce07dea05bd/volumes" Oct 08 08:09:41 crc kubenswrapper[4958]: I1008 08:09:41.979353 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-65h2d"] Oct 08 08:09:42 crc kubenswrapper[4958]: I1008 08:09:42.730852 4958 generic.go:334] "Generic (PLEG): container finished" podID="58afe4a3-4576-41ab-b798-62f8ebd2cf5b" containerID="29e4334c21c80b74b78b9f008f9861d865f18e3648e7478aa1c50040fdbf3a85" exitCode=0 Oct 08 08:09:42 crc kubenswrapper[4958]: I1008 08:09:42.731026 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65h2d" event={"ID":"58afe4a3-4576-41ab-b798-62f8ebd2cf5b","Type":"ContainerDied","Data":"29e4334c21c80b74b78b9f008f9861d865f18e3648e7478aa1c50040fdbf3a85"} Oct 08 08:09:42 crc kubenswrapper[4958]: I1008 08:09:42.732042 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-65h2d" event={"ID":"58afe4a3-4576-41ab-b798-62f8ebd2cf5b","Type":"ContainerStarted","Data":"59e4b918d58bdee602faa938be86ee70f614c43dd4692f5725894957fb394c27"} Oct 08 08:09:42 crc kubenswrapper[4958]: I1008 08:09:42.733108 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 08:09:43 crc kubenswrapper[4958]: I1008 08:09:43.485265 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:43 crc kubenswrapper[4958]: I1008 08:09:43.506348 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6cc64ff97c-4bpv7" Oct 08 08:09:43 crc kubenswrapper[4958]: I1008 08:09:43.613875 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-57f8789489-59kr8"] Oct 08 08:09:43 crc kubenswrapper[4958]: I1008 08:09:43.614101 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-57f8789489-59kr8" podUID="70b4ba1e-a38e-421a-aa06-bf03b692df11" containerName="proxy-httpd" containerID="cri-o://5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761" gracePeriod=30 Oct 08 08:09:43 crc kubenswrapper[4958]: I1008 08:09:43.614547 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-57f8789489-59kr8" podUID="70b4ba1e-a38e-421a-aa06-bf03b692df11" containerName="proxy-server" containerID="cri-o://eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe" gracePeriod=30 Oct 08 08:09:43 crc kubenswrapper[4958]: I1008 08:09:43.773637 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65h2d" event={"ID":"58afe4a3-4576-41ab-b798-62f8ebd2cf5b","Type":"ContainerStarted","Data":"5d0ab401ed4e04143744261440e20fb1df8af1f4707070acc5b55e6f9a9cd410"} Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.641298 
4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.785021 4958 generic.go:334] "Generic (PLEG): container finished" podID="70b4ba1e-a38e-421a-aa06-bf03b692df11" containerID="eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe" exitCode=0 Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.785082 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57f8789489-59kr8" event={"ID":"70b4ba1e-a38e-421a-aa06-bf03b692df11","Type":"ContainerDied","Data":"eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe"} Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.785123 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-57f8789489-59kr8" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.785144 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57f8789489-59kr8" event={"ID":"70b4ba1e-a38e-421a-aa06-bf03b692df11","Type":"ContainerDied","Data":"5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761"} Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.785177 4958 scope.go:117] "RemoveContainer" containerID="eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.785093 4958 generic.go:334] "Generic (PLEG): container finished" podID="70b4ba1e-a38e-421a-aa06-bf03b692df11" containerID="5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761" exitCode=0 Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.785314 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-57f8789489-59kr8" event={"ID":"70b4ba1e-a38e-421a-aa06-bf03b692df11","Type":"ContainerDied","Data":"3d91b93f7badc47346fe48a4bb487c4638bbb4e373c935eaf2af8071884afd3e"} Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 
08:09:44.788650 4958 generic.go:334] "Generic (PLEG): container finished" podID="58afe4a3-4576-41ab-b798-62f8ebd2cf5b" containerID="5d0ab401ed4e04143744261440e20fb1df8af1f4707070acc5b55e6f9a9cd410" exitCode=0 Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.788695 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65h2d" event={"ID":"58afe4a3-4576-41ab-b798-62f8ebd2cf5b","Type":"ContainerDied","Data":"5d0ab401ed4e04143744261440e20fb1df8af1f4707070acc5b55e6f9a9cd410"} Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.797046 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b4ba1e-a38e-421a-aa06-bf03b692df11-log-httpd\") pod \"70b4ba1e-a38e-421a-aa06-bf03b692df11\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.797232 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70b4ba1e-a38e-421a-aa06-bf03b692df11-combined-ca-bundle\") pod \"70b4ba1e-a38e-421a-aa06-bf03b692df11\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.797578 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70b4ba1e-a38e-421a-aa06-bf03b692df11-config-data\") pod \"70b4ba1e-a38e-421a-aa06-bf03b692df11\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.798118 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70b4ba1e-a38e-421a-aa06-bf03b692df11-etc-swift\") pod \"70b4ba1e-a38e-421a-aa06-bf03b692df11\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.798158 4958 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70b4ba1e-a38e-421a-aa06-bf03b692df11-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "70b4ba1e-a38e-421a-aa06-bf03b692df11" (UID: "70b4ba1e-a38e-421a-aa06-bf03b692df11"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.798198 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dc2c\" (UniqueName: \"kubernetes.io/projected/70b4ba1e-a38e-421a-aa06-bf03b692df11-kube-api-access-2dc2c\") pod \"70b4ba1e-a38e-421a-aa06-bf03b692df11\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.798256 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b4ba1e-a38e-421a-aa06-bf03b692df11-run-httpd\") pod \"70b4ba1e-a38e-421a-aa06-bf03b692df11\" (UID: \"70b4ba1e-a38e-421a-aa06-bf03b692df11\") " Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.802133 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b4ba1e-a38e-421a-aa06-bf03b692df11-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.802401 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70b4ba1e-a38e-421a-aa06-bf03b692df11-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "70b4ba1e-a38e-421a-aa06-bf03b692df11" (UID: "70b4ba1e-a38e-421a-aa06-bf03b692df11"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.814216 4958 scope.go:117] "RemoveContainer" containerID="5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.837839 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70b4ba1e-a38e-421a-aa06-bf03b692df11-kube-api-access-2dc2c" (OuterVolumeSpecName: "kube-api-access-2dc2c") pod "70b4ba1e-a38e-421a-aa06-bf03b692df11" (UID: "70b4ba1e-a38e-421a-aa06-bf03b692df11"). InnerVolumeSpecName "kube-api-access-2dc2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.838134 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70b4ba1e-a38e-421a-aa06-bf03b692df11-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "70b4ba1e-a38e-421a-aa06-bf03b692df11" (UID: "70b4ba1e-a38e-421a-aa06-bf03b692df11"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.870002 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b4ba1e-a38e-421a-aa06-bf03b692df11-config-data" (OuterVolumeSpecName: "config-data") pod "70b4ba1e-a38e-421a-aa06-bf03b692df11" (UID: "70b4ba1e-a38e-421a-aa06-bf03b692df11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.872386 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b4ba1e-a38e-421a-aa06-bf03b692df11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70b4ba1e-a38e-421a-aa06-bf03b692df11" (UID: "70b4ba1e-a38e-421a-aa06-bf03b692df11"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.904025 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70b4ba1e-a38e-421a-aa06-bf03b692df11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.904053 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70b4ba1e-a38e-421a-aa06-bf03b692df11-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.904199 4958 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70b4ba1e-a38e-421a-aa06-bf03b692df11-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.904219 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dc2c\" (UniqueName: \"kubernetes.io/projected/70b4ba1e-a38e-421a-aa06-bf03b692df11-kube-api-access-2dc2c\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.904231 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70b4ba1e-a38e-421a-aa06-bf03b692df11-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.944602 4958 scope.go:117] "RemoveContainer" containerID="eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe" Oct 08 08:09:44 crc kubenswrapper[4958]: E1008 08:09:44.945253 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe\": container with ID starting with eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe not found: ID does not exist" 
containerID="eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.945291 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe"} err="failed to get container status \"eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe\": rpc error: code = NotFound desc = could not find container \"eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe\": container with ID starting with eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe not found: ID does not exist" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.945315 4958 scope.go:117] "RemoveContainer" containerID="5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761" Oct 08 08:09:44 crc kubenswrapper[4958]: E1008 08:09:44.946009 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761\": container with ID starting with 5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761 not found: ID does not exist" containerID="5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.946044 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761"} err="failed to get container status \"5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761\": rpc error: code = NotFound desc = could not find container \"5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761\": container with ID starting with 5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761 not found: ID does not exist" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.946063 4958 scope.go:117] 
"RemoveContainer" containerID="eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.948047 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe"} err="failed to get container status \"eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe\": rpc error: code = NotFound desc = could not find container \"eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe\": container with ID starting with eaddeae99eb5e97607b32cb47988595cd3f589f2114852dc4612547811f520fe not found: ID does not exist" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.948082 4958 scope.go:117] "RemoveContainer" containerID="5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761" Oct 08 08:09:44 crc kubenswrapper[4958]: I1008 08:09:44.948484 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761"} err="failed to get container status \"5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761\": rpc error: code = NotFound desc = could not find container \"5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761\": container with ID starting with 5d2a2277aeeb53775eebd10f2d91567ea860b065179e37c526c6b3ee7481f761 not found: ID does not exist" Oct 08 08:09:45 crc kubenswrapper[4958]: I1008 08:09:45.119972 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-57f8789489-59kr8"] Oct 08 08:09:45 crc kubenswrapper[4958]: I1008 08:09:45.126024 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-57f8789489-59kr8"] Oct 08 08:09:45 crc kubenswrapper[4958]: I1008 08:09:45.594303 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70b4ba1e-a38e-421a-aa06-bf03b692df11" 
path="/var/lib/kubelet/pods/70b4ba1e-a38e-421a-aa06-bf03b692df11/volumes" Oct 08 08:09:45 crc kubenswrapper[4958]: I1008 08:09:45.801493 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65h2d" event={"ID":"58afe4a3-4576-41ab-b798-62f8ebd2cf5b","Type":"ContainerStarted","Data":"0464f323d0479a68b455b707b45cabf2e3295fc1e151bc1c9df74f1078ba49ab"} Oct 08 08:09:45 crc kubenswrapper[4958]: I1008 08:09:45.836307 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-65h2d" podStartSLOduration=2.197365322 podStartE2EDuration="4.836287051s" podCreationTimestamp="2025-10-08 08:09:41 +0000 UTC" firstStartedPulling="2025-10-08 08:09:42.732845304 +0000 UTC m=+5725.862537895" lastFinishedPulling="2025-10-08 08:09:45.371767023 +0000 UTC m=+5728.501459624" observedRunningTime="2025-10-08 08:09:45.821986073 +0000 UTC m=+5728.951678714" watchObservedRunningTime="2025-10-08 08:09:45.836287051 +0000 UTC m=+5728.965979662" Oct 08 08:09:51 crc kubenswrapper[4958]: I1008 08:09:51.553475 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:51 crc kubenswrapper[4958]: I1008 08:09:51.556529 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:51 crc kubenswrapper[4958]: I1008 08:09:51.635274 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:51 crc kubenswrapper[4958]: I1008 08:09:51.949479 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:52 crc kubenswrapper[4958]: I1008 08:09:52.024887 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-65h2d"] Oct 08 08:09:53 crc kubenswrapper[4958]: I1008 
08:09:53.897443 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-65h2d" podUID="58afe4a3-4576-41ab-b798-62f8ebd2cf5b" containerName="registry-server" containerID="cri-o://0464f323d0479a68b455b707b45cabf2e3295fc1e151bc1c9df74f1078ba49ab" gracePeriod=2 Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.674792 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.822559 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-catalog-content\") pod \"58afe4a3-4576-41ab-b798-62f8ebd2cf5b\" (UID: \"58afe4a3-4576-41ab-b798-62f8ebd2cf5b\") " Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.822852 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-utilities\") pod \"58afe4a3-4576-41ab-b798-62f8ebd2cf5b\" (UID: \"58afe4a3-4576-41ab-b798-62f8ebd2cf5b\") " Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.822988 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmd2r\" (UniqueName: \"kubernetes.io/projected/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-kube-api-access-mmd2r\") pod \"58afe4a3-4576-41ab-b798-62f8ebd2cf5b\" (UID: \"58afe4a3-4576-41ab-b798-62f8ebd2cf5b\") " Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.824442 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-utilities" (OuterVolumeSpecName: "utilities") pod "58afe4a3-4576-41ab-b798-62f8ebd2cf5b" (UID: "58afe4a3-4576-41ab-b798-62f8ebd2cf5b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.830574 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-kube-api-access-mmd2r" (OuterVolumeSpecName: "kube-api-access-mmd2r") pod "58afe4a3-4576-41ab-b798-62f8ebd2cf5b" (UID: "58afe4a3-4576-41ab-b798-62f8ebd2cf5b"). InnerVolumeSpecName "kube-api-access-mmd2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.923318 4958 generic.go:334] "Generic (PLEG): container finished" podID="58afe4a3-4576-41ab-b798-62f8ebd2cf5b" containerID="0464f323d0479a68b455b707b45cabf2e3295fc1e151bc1c9df74f1078ba49ab" exitCode=0 Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.923375 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65h2d" event={"ID":"58afe4a3-4576-41ab-b798-62f8ebd2cf5b","Type":"ContainerDied","Data":"0464f323d0479a68b455b707b45cabf2e3295fc1e151bc1c9df74f1078ba49ab"} Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.923415 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65h2d" event={"ID":"58afe4a3-4576-41ab-b798-62f8ebd2cf5b","Type":"ContainerDied","Data":"59e4b918d58bdee602faa938be86ee70f614c43dd4692f5725894957fb394c27"} Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.923444 4958 scope.go:117] "RemoveContainer" containerID="0464f323d0479a68b455b707b45cabf2e3295fc1e151bc1c9df74f1078ba49ab" Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.923634 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-65h2d" Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.925580 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.925638 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmd2r\" (UniqueName: \"kubernetes.io/projected/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-kube-api-access-mmd2r\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.945256 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58afe4a3-4576-41ab-b798-62f8ebd2cf5b" (UID: "58afe4a3-4576-41ab-b798-62f8ebd2cf5b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.951846 4958 scope.go:117] "RemoveContainer" containerID="5d0ab401ed4e04143744261440e20fb1df8af1f4707070acc5b55e6f9a9cd410" Oct 08 08:09:55 crc kubenswrapper[4958]: I1008 08:09:55.984472 4958 scope.go:117] "RemoveContainer" containerID="29e4334c21c80b74b78b9f008f9861d865f18e3648e7478aa1c50040fdbf3a85" Oct 08 08:09:56 crc kubenswrapper[4958]: I1008 08:09:56.027435 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58afe4a3-4576-41ab-b798-62f8ebd2cf5b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:09:56 crc kubenswrapper[4958]: I1008 08:09:56.035434 4958 scope.go:117] "RemoveContainer" containerID="0464f323d0479a68b455b707b45cabf2e3295fc1e151bc1c9df74f1078ba49ab" Oct 08 08:09:56 crc kubenswrapper[4958]: E1008 08:09:56.036123 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0464f323d0479a68b455b707b45cabf2e3295fc1e151bc1c9df74f1078ba49ab\": container with ID starting with 0464f323d0479a68b455b707b45cabf2e3295fc1e151bc1c9df74f1078ba49ab not found: ID does not exist" containerID="0464f323d0479a68b455b707b45cabf2e3295fc1e151bc1c9df74f1078ba49ab" Oct 08 08:09:56 crc kubenswrapper[4958]: I1008 08:09:56.036184 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0464f323d0479a68b455b707b45cabf2e3295fc1e151bc1c9df74f1078ba49ab"} err="failed to get container status \"0464f323d0479a68b455b707b45cabf2e3295fc1e151bc1c9df74f1078ba49ab\": rpc error: code = NotFound desc = could not find container \"0464f323d0479a68b455b707b45cabf2e3295fc1e151bc1c9df74f1078ba49ab\": container with ID starting with 0464f323d0479a68b455b707b45cabf2e3295fc1e151bc1c9df74f1078ba49ab not found: ID does not exist" Oct 08 08:09:56 crc kubenswrapper[4958]: I1008 08:09:56.036227 4958 
scope.go:117] "RemoveContainer" containerID="5d0ab401ed4e04143744261440e20fb1df8af1f4707070acc5b55e6f9a9cd410" Oct 08 08:09:56 crc kubenswrapper[4958]: E1008 08:09:56.036764 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d0ab401ed4e04143744261440e20fb1df8af1f4707070acc5b55e6f9a9cd410\": container with ID starting with 5d0ab401ed4e04143744261440e20fb1df8af1f4707070acc5b55e6f9a9cd410 not found: ID does not exist" containerID="5d0ab401ed4e04143744261440e20fb1df8af1f4707070acc5b55e6f9a9cd410" Oct 08 08:09:56 crc kubenswrapper[4958]: I1008 08:09:56.036800 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0ab401ed4e04143744261440e20fb1df8af1f4707070acc5b55e6f9a9cd410"} err="failed to get container status \"5d0ab401ed4e04143744261440e20fb1df8af1f4707070acc5b55e6f9a9cd410\": rpc error: code = NotFound desc = could not find container \"5d0ab401ed4e04143744261440e20fb1df8af1f4707070acc5b55e6f9a9cd410\": container with ID starting with 5d0ab401ed4e04143744261440e20fb1df8af1f4707070acc5b55e6f9a9cd410 not found: ID does not exist" Oct 08 08:09:56 crc kubenswrapper[4958]: I1008 08:09:56.036855 4958 scope.go:117] "RemoveContainer" containerID="29e4334c21c80b74b78b9f008f9861d865f18e3648e7478aa1c50040fdbf3a85" Oct 08 08:09:56 crc kubenswrapper[4958]: E1008 08:09:56.037270 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29e4334c21c80b74b78b9f008f9861d865f18e3648e7478aa1c50040fdbf3a85\": container with ID starting with 29e4334c21c80b74b78b9f008f9861d865f18e3648e7478aa1c50040fdbf3a85 not found: ID does not exist" containerID="29e4334c21c80b74b78b9f008f9861d865f18e3648e7478aa1c50040fdbf3a85" Oct 08 08:09:56 crc kubenswrapper[4958]: I1008 08:09:56.037309 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"29e4334c21c80b74b78b9f008f9861d865f18e3648e7478aa1c50040fdbf3a85"} err="failed to get container status \"29e4334c21c80b74b78b9f008f9861d865f18e3648e7478aa1c50040fdbf3a85\": rpc error: code = NotFound desc = could not find container \"29e4334c21c80b74b78b9f008f9861d865f18e3648e7478aa1c50040fdbf3a85\": container with ID starting with 29e4334c21c80b74b78b9f008f9861d865f18e3648e7478aa1c50040fdbf3a85 not found: ID does not exist" Oct 08 08:09:56 crc kubenswrapper[4958]: I1008 08:09:56.287137 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-65h2d"] Oct 08 08:09:56 crc kubenswrapper[4958]: I1008 08:09:56.301551 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-65h2d"] Oct 08 08:09:57 crc kubenswrapper[4958]: I1008 08:09:57.595890 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58afe4a3-4576-41ab-b798-62f8ebd2cf5b" path="/var/lib/kubelet/pods/58afe4a3-4576-41ab-b798-62f8ebd2cf5b/volumes" Oct 08 08:10:06 crc kubenswrapper[4958]: I1008 08:10:06.844878 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:10:06 crc kubenswrapper[4958]: I1008 08:10:06.845380 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.067069 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-2tvl7"] Oct 08 08:10:17 crc kubenswrapper[4958]: 
E1008 08:10:17.068044 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58afe4a3-4576-41ab-b798-62f8ebd2cf5b" containerName="registry-server" Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.068059 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="58afe4a3-4576-41ab-b798-62f8ebd2cf5b" containerName="registry-server" Oct 08 08:10:17 crc kubenswrapper[4958]: E1008 08:10:17.068085 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b4ba1e-a38e-421a-aa06-bf03b692df11" containerName="proxy-server" Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.068092 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b4ba1e-a38e-421a-aa06-bf03b692df11" containerName="proxy-server" Oct 08 08:10:17 crc kubenswrapper[4958]: E1008 08:10:17.068105 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58afe4a3-4576-41ab-b798-62f8ebd2cf5b" containerName="extract-utilities" Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.068112 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="58afe4a3-4576-41ab-b798-62f8ebd2cf5b" containerName="extract-utilities" Oct 08 08:10:17 crc kubenswrapper[4958]: E1008 08:10:17.068122 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58afe4a3-4576-41ab-b798-62f8ebd2cf5b" containerName="extract-content" Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.068127 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="58afe4a3-4576-41ab-b798-62f8ebd2cf5b" containerName="extract-content" Oct 08 08:10:17 crc kubenswrapper[4958]: E1008 08:10:17.068147 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b4ba1e-a38e-421a-aa06-bf03b692df11" containerName="proxy-httpd" Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.068152 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b4ba1e-a38e-421a-aa06-bf03b692df11" containerName="proxy-httpd" Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.068311 
4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="70b4ba1e-a38e-421a-aa06-bf03b692df11" containerName="proxy-httpd" Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.068320 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="70b4ba1e-a38e-421a-aa06-bf03b692df11" containerName="proxy-server" Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.068331 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="58afe4a3-4576-41ab-b798-62f8ebd2cf5b" containerName="registry-server" Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.068877 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2tvl7" Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.078208 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2tvl7"] Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.215409 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp9k9\" (UniqueName: \"kubernetes.io/projected/021f0ece-2f50-46bc-8380-de8e5d1e1c03-kube-api-access-fp9k9\") pod \"cinder-db-create-2tvl7\" (UID: \"021f0ece-2f50-46bc-8380-de8e5d1e1c03\") " pod="openstack/cinder-db-create-2tvl7" Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.317318 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp9k9\" (UniqueName: \"kubernetes.io/projected/021f0ece-2f50-46bc-8380-de8e5d1e1c03-kube-api-access-fp9k9\") pod \"cinder-db-create-2tvl7\" (UID: \"021f0ece-2f50-46bc-8380-de8e5d1e1c03\") " pod="openstack/cinder-db-create-2tvl7" Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.344765 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp9k9\" (UniqueName: \"kubernetes.io/projected/021f0ece-2f50-46bc-8380-de8e5d1e1c03-kube-api-access-fp9k9\") pod \"cinder-db-create-2tvl7\" (UID: 
\"021f0ece-2f50-46bc-8380-de8e5d1e1c03\") " pod="openstack/cinder-db-create-2tvl7" Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.389997 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2tvl7" Oct 08 08:10:17 crc kubenswrapper[4958]: I1008 08:10:17.687161 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2tvl7"] Oct 08 08:10:18 crc kubenswrapper[4958]: I1008 08:10:18.189232 4958 generic.go:334] "Generic (PLEG): container finished" podID="021f0ece-2f50-46bc-8380-de8e5d1e1c03" containerID="e7f2418f0d6d62c4fe1ffbd940f669ee2247c3ef374845dde4c44fd4ed60cdc0" exitCode=0 Oct 08 08:10:18 crc kubenswrapper[4958]: I1008 08:10:18.189512 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2tvl7" event={"ID":"021f0ece-2f50-46bc-8380-de8e5d1e1c03","Type":"ContainerDied","Data":"e7f2418f0d6d62c4fe1ffbd940f669ee2247c3ef374845dde4c44fd4ed60cdc0"} Oct 08 08:10:18 crc kubenswrapper[4958]: I1008 08:10:18.191176 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2tvl7" event={"ID":"021f0ece-2f50-46bc-8380-de8e5d1e1c03","Type":"ContainerStarted","Data":"707c00f17f462d35273385e386d1ee78867c89837ec052d0a62798a4215132fd"} Oct 08 08:10:19 crc kubenswrapper[4958]: I1008 08:10:19.649898 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2tvl7" Oct 08 08:10:19 crc kubenswrapper[4958]: I1008 08:10:19.774365 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp9k9\" (UniqueName: \"kubernetes.io/projected/021f0ece-2f50-46bc-8380-de8e5d1e1c03-kube-api-access-fp9k9\") pod \"021f0ece-2f50-46bc-8380-de8e5d1e1c03\" (UID: \"021f0ece-2f50-46bc-8380-de8e5d1e1c03\") " Oct 08 08:10:19 crc kubenswrapper[4958]: I1008 08:10:19.783302 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/021f0ece-2f50-46bc-8380-de8e5d1e1c03-kube-api-access-fp9k9" (OuterVolumeSpecName: "kube-api-access-fp9k9") pod "021f0ece-2f50-46bc-8380-de8e5d1e1c03" (UID: "021f0ece-2f50-46bc-8380-de8e5d1e1c03"). InnerVolumeSpecName "kube-api-access-fp9k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:10:19 crc kubenswrapper[4958]: I1008 08:10:19.876450 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp9k9\" (UniqueName: \"kubernetes.io/projected/021f0ece-2f50-46bc-8380-de8e5d1e1c03-kube-api-access-fp9k9\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:20 crc kubenswrapper[4958]: I1008 08:10:20.216616 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2tvl7" event={"ID":"021f0ece-2f50-46bc-8380-de8e5d1e1c03","Type":"ContainerDied","Data":"707c00f17f462d35273385e386d1ee78867c89837ec052d0a62798a4215132fd"} Oct 08 08:10:20 crc kubenswrapper[4958]: I1008 08:10:20.217028 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="707c00f17f462d35273385e386d1ee78867c89837ec052d0a62798a4215132fd" Oct 08 08:10:20 crc kubenswrapper[4958]: I1008 08:10:20.216727 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2tvl7" Oct 08 08:10:27 crc kubenswrapper[4958]: I1008 08:10:27.210529 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ec34-account-create-dkrmc"] Oct 08 08:10:27 crc kubenswrapper[4958]: E1008 08:10:27.211523 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="021f0ece-2f50-46bc-8380-de8e5d1e1c03" containerName="mariadb-database-create" Oct 08 08:10:27 crc kubenswrapper[4958]: I1008 08:10:27.211550 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="021f0ece-2f50-46bc-8380-de8e5d1e1c03" containerName="mariadb-database-create" Oct 08 08:10:27 crc kubenswrapper[4958]: I1008 08:10:27.211895 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="021f0ece-2f50-46bc-8380-de8e5d1e1c03" containerName="mariadb-database-create" Oct 08 08:10:27 crc kubenswrapper[4958]: I1008 08:10:27.213012 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ec34-account-create-dkrmc" Oct 08 08:10:27 crc kubenswrapper[4958]: I1008 08:10:27.215243 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 08 08:10:27 crc kubenswrapper[4958]: I1008 08:10:27.226531 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ec34-account-create-dkrmc"] Oct 08 08:10:27 crc kubenswrapper[4958]: I1008 08:10:27.348338 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fq7t\" (UniqueName: \"kubernetes.io/projected/ea17bac7-d2bd-4285-a9d2-184c1eeec4ad-kube-api-access-7fq7t\") pod \"cinder-ec34-account-create-dkrmc\" (UID: \"ea17bac7-d2bd-4285-a9d2-184c1eeec4ad\") " pod="openstack/cinder-ec34-account-create-dkrmc" Oct 08 08:10:27 crc kubenswrapper[4958]: I1008 08:10:27.451040 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fq7t\" (UniqueName: 
\"kubernetes.io/projected/ea17bac7-d2bd-4285-a9d2-184c1eeec4ad-kube-api-access-7fq7t\") pod \"cinder-ec34-account-create-dkrmc\" (UID: \"ea17bac7-d2bd-4285-a9d2-184c1eeec4ad\") " pod="openstack/cinder-ec34-account-create-dkrmc" Oct 08 08:10:27 crc kubenswrapper[4958]: I1008 08:10:27.484499 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fq7t\" (UniqueName: \"kubernetes.io/projected/ea17bac7-d2bd-4285-a9d2-184c1eeec4ad-kube-api-access-7fq7t\") pod \"cinder-ec34-account-create-dkrmc\" (UID: \"ea17bac7-d2bd-4285-a9d2-184c1eeec4ad\") " pod="openstack/cinder-ec34-account-create-dkrmc" Oct 08 08:10:27 crc kubenswrapper[4958]: I1008 08:10:27.570496 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ec34-account-create-dkrmc" Oct 08 08:10:27 crc kubenswrapper[4958]: I1008 08:10:27.854052 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ec34-account-create-dkrmc"] Oct 08 08:10:28 crc kubenswrapper[4958]: I1008 08:10:28.305723 4958 generic.go:334] "Generic (PLEG): container finished" podID="ea17bac7-d2bd-4285-a9d2-184c1eeec4ad" containerID="65de011391af5deab285e20798bbe94d8c8f3cdb670d3735f1b866a47fe92b54" exitCode=0 Oct 08 08:10:28 crc kubenswrapper[4958]: I1008 08:10:28.305791 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ec34-account-create-dkrmc" event={"ID":"ea17bac7-d2bd-4285-a9d2-184c1eeec4ad","Type":"ContainerDied","Data":"65de011391af5deab285e20798bbe94d8c8f3cdb670d3735f1b866a47fe92b54"} Oct 08 08:10:28 crc kubenswrapper[4958]: I1008 08:10:28.305832 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ec34-account-create-dkrmc" event={"ID":"ea17bac7-d2bd-4285-a9d2-184c1eeec4ad","Type":"ContainerStarted","Data":"49885944fb228feb4375e00b787ef0569913b10746232a35fb42bd0847c112cc"} Oct 08 08:10:29 crc kubenswrapper[4958]: I1008 08:10:29.689122 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ec34-account-create-dkrmc" Oct 08 08:10:29 crc kubenswrapper[4958]: I1008 08:10:29.810529 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fq7t\" (UniqueName: \"kubernetes.io/projected/ea17bac7-d2bd-4285-a9d2-184c1eeec4ad-kube-api-access-7fq7t\") pod \"ea17bac7-d2bd-4285-a9d2-184c1eeec4ad\" (UID: \"ea17bac7-d2bd-4285-a9d2-184c1eeec4ad\") " Oct 08 08:10:29 crc kubenswrapper[4958]: I1008 08:10:29.822305 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea17bac7-d2bd-4285-a9d2-184c1eeec4ad-kube-api-access-7fq7t" (OuterVolumeSpecName: "kube-api-access-7fq7t") pod "ea17bac7-d2bd-4285-a9d2-184c1eeec4ad" (UID: "ea17bac7-d2bd-4285-a9d2-184c1eeec4ad"). InnerVolumeSpecName "kube-api-access-7fq7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:10:29 crc kubenswrapper[4958]: I1008 08:10:29.912278 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fq7t\" (UniqueName: \"kubernetes.io/projected/ea17bac7-d2bd-4285-a9d2-184c1eeec4ad-kube-api-access-7fq7t\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:30 crc kubenswrapper[4958]: I1008 08:10:30.328297 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ec34-account-create-dkrmc" event={"ID":"ea17bac7-d2bd-4285-a9d2-184c1eeec4ad","Type":"ContainerDied","Data":"49885944fb228feb4375e00b787ef0569913b10746232a35fb42bd0847c112cc"} Oct 08 08:10:30 crc kubenswrapper[4958]: I1008 08:10:30.328368 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49885944fb228feb4375e00b787ef0569913b10746232a35fb42bd0847c112cc" Oct 08 08:10:30 crc kubenswrapper[4958]: I1008 08:10:30.328451 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ec34-account-create-dkrmc" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.374247 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-9x5c7"] Oct 08 08:10:32 crc kubenswrapper[4958]: E1008 08:10:32.374926 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea17bac7-d2bd-4285-a9d2-184c1eeec4ad" containerName="mariadb-account-create" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.374942 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea17bac7-d2bd-4285-a9d2-184c1eeec4ad" containerName="mariadb-account-create" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.375201 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea17bac7-d2bd-4285-a9d2-184c1eeec4ad" containerName="mariadb-account-create" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.375892 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.378332 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2z7mw" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.378415 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.378332 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.385404 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9x5c7"] Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.464927 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-config-data\") pod \"cinder-db-sync-9x5c7\" (UID: 
\"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.465004 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-combined-ca-bundle\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.465061 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-scripts\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.465171 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-db-sync-config-data\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.465231 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61952772-bb5b-4065-b0d6-ad86d8d246d5-etc-machine-id\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.465437 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98fp6\" (UniqueName: \"kubernetes.io/projected/61952772-bb5b-4065-b0d6-ad86d8d246d5-kube-api-access-98fp6\") pod \"cinder-db-sync-9x5c7\" (UID: 
\"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.567048 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61952772-bb5b-4065-b0d6-ad86d8d246d5-etc-machine-id\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.567202 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98fp6\" (UniqueName: \"kubernetes.io/projected/61952772-bb5b-4065-b0d6-ad86d8d246d5-kube-api-access-98fp6\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.567213 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61952772-bb5b-4065-b0d6-ad86d8d246d5-etc-machine-id\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.567250 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-config-data\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.567356 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-combined-ca-bundle\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 
08:10:32.567513 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-scripts\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.567661 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-db-sync-config-data\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.574442 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-config-data\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.574835 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-scripts\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.576067 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-db-sync-config-data\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.581488 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-combined-ca-bundle\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.585084 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98fp6\" (UniqueName: \"kubernetes.io/projected/61952772-bb5b-4065-b0d6-ad86d8d246d5-kube-api-access-98fp6\") pod \"cinder-db-sync-9x5c7\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:32 crc kubenswrapper[4958]: I1008 08:10:32.702397 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:33 crc kubenswrapper[4958]: I1008 08:10:33.176246 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9x5c7"] Oct 08 08:10:33 crc kubenswrapper[4958]: I1008 08:10:33.368523 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9x5c7" event={"ID":"61952772-bb5b-4065-b0d6-ad86d8d246d5","Type":"ContainerStarted","Data":"e0ad03a81e5a5b22a09b3be3e873959d7c4df3cec0bec2d7d09d3d48f94e153e"} Oct 08 08:10:34 crc kubenswrapper[4958]: I1008 08:10:34.383835 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9x5c7" event={"ID":"61952772-bb5b-4065-b0d6-ad86d8d246d5","Type":"ContainerStarted","Data":"f014ab82cf1c4c39feb07937d5167a4ef7aad791e15e6eb25b77ac02dd35a182"} Oct 08 08:10:34 crc kubenswrapper[4958]: I1008 08:10:34.416159 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-9x5c7" podStartSLOduration=2.416135762 podStartE2EDuration="2.416135762s" podCreationTimestamp="2025-10-08 08:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:10:34.409250576 +0000 UTC m=+5777.538943217" 
watchObservedRunningTime="2025-10-08 08:10:34.416135762 +0000 UTC m=+5777.545828403" Oct 08 08:10:36 crc kubenswrapper[4958]: I1008 08:10:36.412612 4958 generic.go:334] "Generic (PLEG): container finished" podID="61952772-bb5b-4065-b0d6-ad86d8d246d5" containerID="f014ab82cf1c4c39feb07937d5167a4ef7aad791e15e6eb25b77ac02dd35a182" exitCode=0 Oct 08 08:10:36 crc kubenswrapper[4958]: I1008 08:10:36.412732 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9x5c7" event={"ID":"61952772-bb5b-4065-b0d6-ad86d8d246d5","Type":"ContainerDied","Data":"f014ab82cf1c4c39feb07937d5167a4ef7aad791e15e6eb25b77ac02dd35a182"} Oct 08 08:10:36 crc kubenswrapper[4958]: I1008 08:10:36.844906 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:10:36 crc kubenswrapper[4958]: I1008 08:10:36.844990 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.848187 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.888408 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-db-sync-config-data\") pod \"61952772-bb5b-4065-b0d6-ad86d8d246d5\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.888450 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-config-data\") pod \"61952772-bb5b-4065-b0d6-ad86d8d246d5\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.888547 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98fp6\" (UniqueName: \"kubernetes.io/projected/61952772-bb5b-4065-b0d6-ad86d8d246d5-kube-api-access-98fp6\") pod \"61952772-bb5b-4065-b0d6-ad86d8d246d5\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.888574 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-scripts\") pod \"61952772-bb5b-4065-b0d6-ad86d8d246d5\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.888634 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61952772-bb5b-4065-b0d6-ad86d8d246d5-etc-machine-id\") pod \"61952772-bb5b-4065-b0d6-ad86d8d246d5\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.888688 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-combined-ca-bundle\") pod \"61952772-bb5b-4065-b0d6-ad86d8d246d5\" (UID: \"61952772-bb5b-4065-b0d6-ad86d8d246d5\") " Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.891128 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61952772-bb5b-4065-b0d6-ad86d8d246d5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "61952772-bb5b-4065-b0d6-ad86d8d246d5" (UID: "61952772-bb5b-4065-b0d6-ad86d8d246d5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.898270 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61952772-bb5b-4065-b0d6-ad86d8d246d5-kube-api-access-98fp6" (OuterVolumeSpecName: "kube-api-access-98fp6") pod "61952772-bb5b-4065-b0d6-ad86d8d246d5" (UID: "61952772-bb5b-4065-b0d6-ad86d8d246d5"). InnerVolumeSpecName "kube-api-access-98fp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.899217 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-scripts" (OuterVolumeSpecName: "scripts") pod "61952772-bb5b-4065-b0d6-ad86d8d246d5" (UID: "61952772-bb5b-4065-b0d6-ad86d8d246d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.911507 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "61952772-bb5b-4065-b0d6-ad86d8d246d5" (UID: "61952772-bb5b-4065-b0d6-ad86d8d246d5"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.953306 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61952772-bb5b-4065-b0d6-ad86d8d246d5" (UID: "61952772-bb5b-4065-b0d6-ad86d8d246d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.969345 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-config-data" (OuterVolumeSpecName: "config-data") pod "61952772-bb5b-4065-b0d6-ad86d8d246d5" (UID: "61952772-bb5b-4065-b0d6-ad86d8d246d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.990884 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98fp6\" (UniqueName: \"kubernetes.io/projected/61952772-bb5b-4065-b0d6-ad86d8d246d5-kube-api-access-98fp6\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.990902 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.990913 4958 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61952772-bb5b-4065-b0d6-ad86d8d246d5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.990922 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.990931 4958 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:37.990939 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61952772-bb5b-4065-b0d6-ad86d8d246d5-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.437398 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9x5c7" event={"ID":"61952772-bb5b-4065-b0d6-ad86d8d246d5","Type":"ContainerDied","Data":"e0ad03a81e5a5b22a09b3be3e873959d7c4df3cec0bec2d7d09d3d48f94e153e"} Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.437811 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0ad03a81e5a5b22a09b3be3e873959d7c4df3cec0bec2d7d09d3d48f94e153e" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.437739 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9x5c7" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.790648 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dfb96c8bc-nlv9l"] Oct 08 08:10:38 crc kubenswrapper[4958]: E1008 08:10:38.791172 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61952772-bb5b-4065-b0d6-ad86d8d246d5" containerName="cinder-db-sync" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.791231 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="61952772-bb5b-4065-b0d6-ad86d8d246d5" containerName="cinder-db-sync" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.791425 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="61952772-bb5b-4065-b0d6-ad86d8d246d5" containerName="cinder-db-sync" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.792327 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.805058 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlkhs\" (UniqueName: \"kubernetes.io/projected/ec426777-7768-4cce-abd8-2116ed2fb2e4-kube-api-access-hlkhs\") pod \"dnsmasq-dns-7dfb96c8bc-nlv9l\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.805145 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-ovsdbserver-sb\") pod \"dnsmasq-dns-7dfb96c8bc-nlv9l\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.805194 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-ovsdbserver-nb\") pod \"dnsmasq-dns-7dfb96c8bc-nlv9l\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.805220 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-dns-svc\") pod \"dnsmasq-dns-7dfb96c8bc-nlv9l\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.805267 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-config\") pod \"dnsmasq-dns-7dfb96c8bc-nlv9l\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.811803 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dfb96c8bc-nlv9l"] Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.906564 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlkhs\" (UniqueName: \"kubernetes.io/projected/ec426777-7768-4cce-abd8-2116ed2fb2e4-kube-api-access-hlkhs\") pod \"dnsmasq-dns-7dfb96c8bc-nlv9l\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.906673 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-ovsdbserver-sb\") pod \"dnsmasq-dns-7dfb96c8bc-nlv9l\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 
08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.906724 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-ovsdbserver-nb\") pod \"dnsmasq-dns-7dfb96c8bc-nlv9l\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.906756 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-dns-svc\") pod \"dnsmasq-dns-7dfb96c8bc-nlv9l\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.906791 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-config\") pod \"dnsmasq-dns-7dfb96c8bc-nlv9l\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.907740 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-config\") pod \"dnsmasq-dns-7dfb96c8bc-nlv9l\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.907909 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-dns-svc\") pod \"dnsmasq-dns-7dfb96c8bc-nlv9l\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.908076 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-ovsdbserver-nb\") pod \"dnsmasq-dns-7dfb96c8bc-nlv9l\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.908546 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-ovsdbserver-sb\") pod \"dnsmasq-dns-7dfb96c8bc-nlv9l\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.926565 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlkhs\" (UniqueName: \"kubernetes.io/projected/ec426777-7768-4cce-abd8-2116ed2fb2e4-kube-api-access-hlkhs\") pod \"dnsmasq-dns-7dfb96c8bc-nlv9l\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.984698 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.987214 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.989705 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2z7mw" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.990151 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.990312 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 08 08:10:38 crc kubenswrapper[4958]: I1008 08:10:38.990449 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.008192 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-config-data\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.008230 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79119616-1f62-4b94-9726-d8ce02f4bcfe-logs\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.008256 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79119616-1f62-4b94-9726-d8ce02f4bcfe-etc-machine-id\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.008274 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.008315 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-scripts\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.008342 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ndq5\" (UniqueName: \"kubernetes.io/projected/79119616-1f62-4b94-9726-d8ce02f4bcfe-kube-api-access-7ndq5\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.008367 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-config-data-custom\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.013608 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.109472 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-config-data\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.109516 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/79119616-1f62-4b94-9726-d8ce02f4bcfe-logs\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.109548 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79119616-1f62-4b94-9726-d8ce02f4bcfe-etc-machine-id\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.109564 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.109632 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-scripts\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.109662 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ndq5\" (UniqueName: \"kubernetes.io/projected/79119616-1f62-4b94-9726-d8ce02f4bcfe-kube-api-access-7ndq5\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.109683 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79119616-1f62-4b94-9726-d8ce02f4bcfe-etc-machine-id\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc 
kubenswrapper[4958]: I1008 08:10:39.109984 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79119616-1f62-4b94-9726-d8ce02f4bcfe-logs\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.110767 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.111328 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-config-data-custom\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.114670 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-scripts\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.115237 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.115832 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-config-data\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.119647 4958 scope.go:117] "RemoveContainer" 
containerID="c3c3db933ef819abbaebb9c0c4558a1523aa3b3997a9744605d3ef838e45067d" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.131494 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-config-data-custom\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.134642 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ndq5\" (UniqueName: \"kubernetes.io/projected/79119616-1f62-4b94-9726-d8ce02f4bcfe-kube-api-access-7ndq5\") pod \"cinder-api-0\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.253202 4958 scope.go:117] "RemoveContainer" containerID="821128a0d2e599be1bdd32c2c8a559c70063b122e09f91eb16a2677226975d35" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.336658 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.607796 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dfb96c8bc-nlv9l"] Oct 08 08:10:39 crc kubenswrapper[4958]: W1008 08:10:39.807686 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79119616_1f62_4b94_9726_d8ce02f4bcfe.slice/crio-49ca17c9ef59e0308d57eeb07f6f8295e19ed49d56b2399a1615664ca6457694 WatchSource:0}: Error finding container 49ca17c9ef59e0308d57eeb07f6f8295e19ed49d56b2399a1615664ca6457694: Status 404 returned error can't find the container with id 49ca17c9ef59e0308d57eeb07f6f8295e19ed49d56b2399a1615664ca6457694 Oct 08 08:10:39 crc kubenswrapper[4958]: I1008 08:10:39.808850 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 08 08:10:40 crc kubenswrapper[4958]: I1008 08:10:40.459684 4958 generic.go:334] "Generic (PLEG): container finished" podID="ec426777-7768-4cce-abd8-2116ed2fb2e4" containerID="1653657e8046bc3cd484fd1d9e373df86d157655b4027134bc565c332fd3fe2b" exitCode=0 Oct 08 08:10:40 crc kubenswrapper[4958]: I1008 08:10:40.459729 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" event={"ID":"ec426777-7768-4cce-abd8-2116ed2fb2e4","Type":"ContainerDied","Data":"1653657e8046bc3cd484fd1d9e373df86d157655b4027134bc565c332fd3fe2b"} Oct 08 08:10:40 crc kubenswrapper[4958]: I1008 08:10:40.460023 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" event={"ID":"ec426777-7768-4cce-abd8-2116ed2fb2e4","Type":"ContainerStarted","Data":"a7fc714aa80f4ca44fb8246d9cae004db0df1aede3268e4937cef0bed664ce25"} Oct 08 08:10:40 crc kubenswrapper[4958]: I1008 08:10:40.462574 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"79119616-1f62-4b94-9726-d8ce02f4bcfe","Type":"ContainerStarted","Data":"db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286"} Oct 08 08:10:40 crc kubenswrapper[4958]: I1008 08:10:40.462612 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"79119616-1f62-4b94-9726-d8ce02f4bcfe","Type":"ContainerStarted","Data":"49ca17c9ef59e0308d57eeb07f6f8295e19ed49d56b2399a1615664ca6457694"} Oct 08 08:10:41 crc kubenswrapper[4958]: I1008 08:10:41.398549 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 08 08:10:41 crc kubenswrapper[4958]: I1008 08:10:41.472313 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" event={"ID":"ec426777-7768-4cce-abd8-2116ed2fb2e4","Type":"ContainerStarted","Data":"f4b1d693a8d3400b519696978093c97e505eae4a114b70c0ac7bde3e3a0375b0"} Oct 08 08:10:41 crc kubenswrapper[4958]: I1008 08:10:41.472520 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:41 crc kubenswrapper[4958]: I1008 08:10:41.474883 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"79119616-1f62-4b94-9726-d8ce02f4bcfe","Type":"ContainerStarted","Data":"f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e"} Oct 08 08:10:41 crc kubenswrapper[4958]: I1008 08:10:41.475059 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 08 08:10:41 crc kubenswrapper[4958]: I1008 08:10:41.500253 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" podStartSLOduration=3.5002297049999997 podStartE2EDuration="3.500229705s" podCreationTimestamp="2025-10-08 08:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 
08:10:41.493574054 +0000 UTC m=+5784.623266685" watchObservedRunningTime="2025-10-08 08:10:41.500229705 +0000 UTC m=+5784.629922326" Oct 08 08:10:41 crc kubenswrapper[4958]: I1008 08:10:41.513582 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.513559436 podStartE2EDuration="3.513559436s" podCreationTimestamp="2025-10-08 08:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:10:41.506832064 +0000 UTC m=+5784.636524665" watchObservedRunningTime="2025-10-08 08:10:41.513559436 +0000 UTC m=+5784.643252047" Oct 08 08:10:42 crc kubenswrapper[4958]: I1008 08:10:42.484184 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="79119616-1f62-4b94-9726-d8ce02f4bcfe" containerName="cinder-api-log" containerID="cri-o://db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286" gracePeriod=30 Oct 08 08:10:42 crc kubenswrapper[4958]: I1008 08:10:42.484696 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="79119616-1f62-4b94-9726-d8ce02f4bcfe" containerName="cinder-api" containerID="cri-o://f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e" gracePeriod=30 Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.093740 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.238227 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-config-data\") pod \"79119616-1f62-4b94-9726-d8ce02f4bcfe\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.238307 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79119616-1f62-4b94-9726-d8ce02f4bcfe-etc-machine-id\") pod \"79119616-1f62-4b94-9726-d8ce02f4bcfe\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.238471 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79119616-1f62-4b94-9726-d8ce02f4bcfe-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "79119616-1f62-4b94-9726-d8ce02f4bcfe" (UID: "79119616-1f62-4b94-9726-d8ce02f4bcfe"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.238481 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-combined-ca-bundle\") pod \"79119616-1f62-4b94-9726-d8ce02f4bcfe\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.238592 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79119616-1f62-4b94-9726-d8ce02f4bcfe-logs\") pod \"79119616-1f62-4b94-9726-d8ce02f4bcfe\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.238619 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ndq5\" (UniqueName: \"kubernetes.io/projected/79119616-1f62-4b94-9726-d8ce02f4bcfe-kube-api-access-7ndq5\") pod \"79119616-1f62-4b94-9726-d8ce02f4bcfe\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.238670 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-config-data-custom\") pod \"79119616-1f62-4b94-9726-d8ce02f4bcfe\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.238842 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-scripts\") pod \"79119616-1f62-4b94-9726-d8ce02f4bcfe\" (UID: \"79119616-1f62-4b94-9726-d8ce02f4bcfe\") " Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.239391 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/79119616-1f62-4b94-9726-d8ce02f4bcfe-logs" (OuterVolumeSpecName: "logs") pod "79119616-1f62-4b94-9726-d8ce02f4bcfe" (UID: "79119616-1f62-4b94-9726-d8ce02f4bcfe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.240172 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79119616-1f62-4b94-9726-d8ce02f4bcfe-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.240192 4958 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79119616-1f62-4b94-9726-d8ce02f4bcfe-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.245419 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "79119616-1f62-4b94-9726-d8ce02f4bcfe" (UID: "79119616-1f62-4b94-9726-d8ce02f4bcfe"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.251162 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79119616-1f62-4b94-9726-d8ce02f4bcfe-kube-api-access-7ndq5" (OuterVolumeSpecName: "kube-api-access-7ndq5") pod "79119616-1f62-4b94-9726-d8ce02f4bcfe" (UID: "79119616-1f62-4b94-9726-d8ce02f4bcfe"). InnerVolumeSpecName "kube-api-access-7ndq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.259118 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-scripts" (OuterVolumeSpecName: "scripts") pod "79119616-1f62-4b94-9726-d8ce02f4bcfe" (UID: "79119616-1f62-4b94-9726-d8ce02f4bcfe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.284996 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79119616-1f62-4b94-9726-d8ce02f4bcfe" (UID: "79119616-1f62-4b94-9726-d8ce02f4bcfe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.312684 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-config-data" (OuterVolumeSpecName: "config-data") pod "79119616-1f62-4b94-9726-d8ce02f4bcfe" (UID: "79119616-1f62-4b94-9726-d8ce02f4bcfe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.341531 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.341574 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ndq5\" (UniqueName: \"kubernetes.io/projected/79119616-1f62-4b94-9726-d8ce02f4bcfe-kube-api-access-7ndq5\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.341589 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.341604 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.341615 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79119616-1f62-4b94-9726-d8ce02f4bcfe-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.497713 4958 generic.go:334] "Generic (PLEG): container finished" podID="79119616-1f62-4b94-9726-d8ce02f4bcfe" containerID="f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e" exitCode=0 Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.497754 4958 generic.go:334] "Generic (PLEG): container finished" podID="79119616-1f62-4b94-9726-d8ce02f4bcfe" containerID="db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286" exitCode=143 Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.497772 4958 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.497781 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"79119616-1f62-4b94-9726-d8ce02f4bcfe","Type":"ContainerDied","Data":"f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e"} Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.497866 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"79119616-1f62-4b94-9726-d8ce02f4bcfe","Type":"ContainerDied","Data":"db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286"} Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.497883 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"79119616-1f62-4b94-9726-d8ce02f4bcfe","Type":"ContainerDied","Data":"49ca17c9ef59e0308d57eeb07f6f8295e19ed49d56b2399a1615664ca6457694"} Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.497903 4958 scope.go:117] "RemoveContainer" containerID="f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.556585 4958 scope.go:117] "RemoveContainer" containerID="db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.564364 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.592766 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.597653 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 08 08:10:43 crc kubenswrapper[4958]: E1008 08:10:43.598205 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79119616-1f62-4b94-9726-d8ce02f4bcfe" containerName="cinder-api-log" Oct 08 08:10:43 crc 
kubenswrapper[4958]: I1008 08:10:43.598272 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="79119616-1f62-4b94-9726-d8ce02f4bcfe" containerName="cinder-api-log" Oct 08 08:10:43 crc kubenswrapper[4958]: E1008 08:10:43.598371 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79119616-1f62-4b94-9726-d8ce02f4bcfe" containerName="cinder-api" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.598420 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="79119616-1f62-4b94-9726-d8ce02f4bcfe" containerName="cinder-api" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.598632 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="79119616-1f62-4b94-9726-d8ce02f4bcfe" containerName="cinder-api-log" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.598692 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="79119616-1f62-4b94-9726-d8ce02f4bcfe" containerName="cinder-api" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.599641 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.602579 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2z7mw" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.604574 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.604698 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.605022 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.605505 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.605724 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.607720 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.613175 4958 scope.go:117] "RemoveContainer" containerID="f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e" Oct 08 08:10:43 crc kubenswrapper[4958]: E1008 08:10:43.616517 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e\": container with ID starting with f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e not found: ID does not exist" containerID="f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.616575 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e"} err="failed to get container status \"f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e\": rpc error: code = NotFound desc = could not find container \"f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e\": container with ID starting with f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e not found: ID does not exist" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.616599 4958 scope.go:117] "RemoveContainer" containerID="db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286" Oct 08 08:10:43 crc kubenswrapper[4958]: E1008 08:10:43.621360 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286\": container with ID starting with db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286 not found: ID does not exist" containerID="db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.621384 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286"} err="failed to get container status \"db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286\": rpc error: code = NotFound desc = could not find container \"db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286\": container with ID starting with db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286 not found: ID does not exist" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.621401 4958 scope.go:117] "RemoveContainer" containerID="f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.622467 4958 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e"} err="failed to get container status \"f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e\": rpc error: code = NotFound desc = could not find container \"f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e\": container with ID starting with f14bfe47210c183b42fb2ff3205930876d77039fc545bbd2818b90402d33958e not found: ID does not exist" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.622487 4958 scope.go:117] "RemoveContainer" containerID="db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.626186 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286"} err="failed to get container status \"db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286\": rpc error: code = NotFound desc = could not find container \"db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286\": container with ID starting with db0a56e7e4d791fb6ecad3fd2a64c4a8ef8e23d96d990080c0e7b8279307f286 not found: ID does not exist" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.750575 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-config-data\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.750631 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f8e13c-33c9-4b8a-8b46-06610dd0c091-logs\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc 
kubenswrapper[4958]: I1008 08:10:43.750831 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-scripts\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.751040 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-public-tls-certs\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.751166 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-config-data-custom\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.751200 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.751226 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28f8e13c-33c9-4b8a-8b46-06610dd0c091-etc-machine-id\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.751300 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-5sq6z\" (UniqueName: \"kubernetes.io/projected/28f8e13c-33c9-4b8a-8b46-06610dd0c091-kube-api-access-5sq6z\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.751501 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.853868 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-scripts\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.853943 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-public-tls-certs\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.854025 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-config-data-custom\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.854045 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.854062 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28f8e13c-33c9-4b8a-8b46-06610dd0c091-etc-machine-id\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.854079 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sq6z\" (UniqueName: \"kubernetes.io/projected/28f8e13c-33c9-4b8a-8b46-06610dd0c091-kube-api-access-5sq6z\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.854108 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.854153 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-config-data\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.854172 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f8e13c-33c9-4b8a-8b46-06610dd0c091-logs\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.854569 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28f8e13c-33c9-4b8a-8b46-06610dd0c091-etc-machine-id\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.854887 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f8e13c-33c9-4b8a-8b46-06610dd0c091-logs\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.862460 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.862661 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-config-data\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.864300 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.864676 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-config-data-custom\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 
08:10:43.867462 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-public-tls-certs\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.872441 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-scripts\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.875749 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sq6z\" (UniqueName: \"kubernetes.io/projected/28f8e13c-33c9-4b8a-8b46-06610dd0c091-kube-api-access-5sq6z\") pod \"cinder-api-0\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " pod="openstack/cinder-api-0" Oct 08 08:10:43 crc kubenswrapper[4958]: I1008 08:10:43.950171 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 08 08:10:44 crc kubenswrapper[4958]: I1008 08:10:44.487860 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 08 08:10:44 crc kubenswrapper[4958]: W1008 08:10:44.497375 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28f8e13c_33c9_4b8a_8b46_06610dd0c091.slice/crio-fa8e3d6b51667b336bad019678680736715d5ab898c9b4ee8c21c8c844d8b4f1 WatchSource:0}: Error finding container fa8e3d6b51667b336bad019678680736715d5ab898c9b4ee8c21c8c844d8b4f1: Status 404 returned error can't find the container with id fa8e3d6b51667b336bad019678680736715d5ab898c9b4ee8c21c8c844d8b4f1 Oct 08 08:10:44 crc kubenswrapper[4958]: I1008 08:10:44.508264 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"28f8e13c-33c9-4b8a-8b46-06610dd0c091","Type":"ContainerStarted","Data":"fa8e3d6b51667b336bad019678680736715d5ab898c9b4ee8c21c8c844d8b4f1"} Oct 08 08:10:45 crc kubenswrapper[4958]: I1008 08:10:45.527114 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"28f8e13c-33c9-4b8a-8b46-06610dd0c091","Type":"ContainerStarted","Data":"970bc214f96d521d18cc185bf6429e01f3cf18964870c906be15c337fe779fe0"} Oct 08 08:10:45 crc kubenswrapper[4958]: I1008 08:10:45.594615 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79119616-1f62-4b94-9726-d8ce02f4bcfe" path="/var/lib/kubelet/pods/79119616-1f62-4b94-9726-d8ce02f4bcfe/volumes" Oct 08 08:10:46 crc kubenswrapper[4958]: I1008 08:10:46.545532 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"28f8e13c-33c9-4b8a-8b46-06610dd0c091","Type":"ContainerStarted","Data":"fa007ac4178468692e490ce99df1707f4595b46f5403e13bfd7e4f8ce1332aa3"} Oct 08 08:10:46 crc kubenswrapper[4958]: I1008 08:10:46.546065 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/cinder-api-0" Oct 08 08:10:46 crc kubenswrapper[4958]: I1008 08:10:46.599230 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.599199906 podStartE2EDuration="3.599199906s" podCreationTimestamp="2025-10-08 08:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:10:46.578250628 +0000 UTC m=+5789.707943269" watchObservedRunningTime="2025-10-08 08:10:46.599199906 +0000 UTC m=+5789.728892547" Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.114048 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.214503 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6bcf6c4c-jb9md"] Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.214877 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md" podUID="55533c86-b6a5-4edb-a85f-c370730e8c4c" containerName="dnsmasq-dns" containerID="cri-o://2ddded6f892dd3aa1824daedf3993f373a052152864a0830ad2deadbab944233" gracePeriod=10 Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.594919 4958 generic.go:334] "Generic (PLEG): container finished" podID="55533c86-b6a5-4edb-a85f-c370730e8c4c" containerID="2ddded6f892dd3aa1824daedf3993f373a052152864a0830ad2deadbab944233" exitCode=0 Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.595178 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md" event={"ID":"55533c86-b6a5-4edb-a85f-c370730e8c4c","Type":"ContainerDied","Data":"2ddded6f892dd3aa1824daedf3993f373a052152864a0830ad2deadbab944233"} Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.790388 4958 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md" Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.888913 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-dns-svc\") pod \"55533c86-b6a5-4edb-a85f-c370730e8c4c\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.889307 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnvqf\" (UniqueName: \"kubernetes.io/projected/55533c86-b6a5-4edb-a85f-c370730e8c4c-kube-api-access-bnvqf\") pod \"55533c86-b6a5-4edb-a85f-c370730e8c4c\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.890118 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-ovsdbserver-nb\") pod \"55533c86-b6a5-4edb-a85f-c370730e8c4c\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.890257 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-ovsdbserver-sb\") pod \"55533c86-b6a5-4edb-a85f-c370730e8c4c\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.890286 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-config\") pod \"55533c86-b6a5-4edb-a85f-c370730e8c4c\" (UID: \"55533c86-b6a5-4edb-a85f-c370730e8c4c\") " Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.894847 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/55533c86-b6a5-4edb-a85f-c370730e8c4c-kube-api-access-bnvqf" (OuterVolumeSpecName: "kube-api-access-bnvqf") pod "55533c86-b6a5-4edb-a85f-c370730e8c4c" (UID: "55533c86-b6a5-4edb-a85f-c370730e8c4c"). InnerVolumeSpecName "kube-api-access-bnvqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.936720 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "55533c86-b6a5-4edb-a85f-c370730e8c4c" (UID: "55533c86-b6a5-4edb-a85f-c370730e8c4c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.937008 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-config" (OuterVolumeSpecName: "config") pod "55533c86-b6a5-4edb-a85f-c370730e8c4c" (UID: "55533c86-b6a5-4edb-a85f-c370730e8c4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.943252 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "55533c86-b6a5-4edb-a85f-c370730e8c4c" (UID: "55533c86-b6a5-4edb-a85f-c370730e8c4c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.946468 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55533c86-b6a5-4edb-a85f-c370730e8c4c" (UID: "55533c86-b6a5-4edb-a85f-c370730e8c4c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.992194 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.992253 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnvqf\" (UniqueName: \"kubernetes.io/projected/55533c86-b6a5-4edb-a85f-c370730e8c4c-kube-api-access-bnvqf\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.992303 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.992312 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:49 crc kubenswrapper[4958]: I1008 08:10:49.992344 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55533c86-b6a5-4edb-a85f-c370730e8c4c-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:10:50 crc kubenswrapper[4958]: I1008 08:10:50.613907 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md" event={"ID":"55533c86-b6a5-4edb-a85f-c370730e8c4c","Type":"ContainerDied","Data":"42aa5bcc7d7e669b7a22c2b11e774e16394dfbffcc55708c78e0d33d88c57d91"} Oct 08 08:10:50 crc kubenswrapper[4958]: I1008 08:10:50.614001 4958 scope.go:117] "RemoveContainer" containerID="2ddded6f892dd3aa1824daedf3993f373a052152864a0830ad2deadbab944233" Oct 08 08:10:50 crc kubenswrapper[4958]: I1008 08:10:50.614016 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f6bcf6c4c-jb9md" Oct 08 08:10:50 crc kubenswrapper[4958]: I1008 08:10:50.661904 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f6bcf6c4c-jb9md"] Oct 08 08:10:50 crc kubenswrapper[4958]: I1008 08:10:50.662243 4958 scope.go:117] "RemoveContainer" containerID="880f10414aad22e70e942a128a84957d78afa2889a6719d73010a4edbca2f17b" Oct 08 08:10:50 crc kubenswrapper[4958]: I1008 08:10:50.668674 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f6bcf6c4c-jb9md"] Oct 08 08:10:51 crc kubenswrapper[4958]: I1008 08:10:51.601465 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55533c86-b6a5-4edb-a85f-c370730e8c4c" path="/var/lib/kubelet/pods/55533c86-b6a5-4edb-a85f-c370730e8c4c/volumes" Oct 08 08:10:55 crc kubenswrapper[4958]: I1008 08:10:55.661236 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 08 08:11:06 crc kubenswrapper[4958]: I1008 08:11:06.845075 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:11:06 crc kubenswrapper[4958]: I1008 08:11:06.845675 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:11:06 crc kubenswrapper[4958]: I1008 08:11:06.845739 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 08:11:06 crc kubenswrapper[4958]: I1008 
08:11:06.846739 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99e2aedb896ded61b3f3ce708c02241cc82327087174ed41010302ce866d005a"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 08:11:06 crc kubenswrapper[4958]: I1008 08:11:06.846827 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://99e2aedb896ded61b3f3ce708c02241cc82327087174ed41010302ce866d005a" gracePeriod=600 Oct 08 08:11:07 crc kubenswrapper[4958]: I1008 08:11:07.808263 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="99e2aedb896ded61b3f3ce708c02241cc82327087174ed41010302ce866d005a" exitCode=0 Oct 08 08:11:07 crc kubenswrapper[4958]: I1008 08:11:07.808384 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"99e2aedb896ded61b3f3ce708c02241cc82327087174ed41010302ce866d005a"} Oct 08 08:11:07 crc kubenswrapper[4958]: I1008 08:11:07.808769 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f"} Oct 08 08:11:07 crc kubenswrapper[4958]: I1008 08:11:07.808813 4958 scope.go:117] "RemoveContainer" containerID="b921b105fff038d423b1f62e10f816c98b091a292f01842c9754a9f758b8a217" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.424696 4958 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-scheduler-0"] Oct 08 08:11:13 crc kubenswrapper[4958]: E1008 08:11:13.425718 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55533c86-b6a5-4edb-a85f-c370730e8c4c" containerName="dnsmasq-dns" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.425735 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="55533c86-b6a5-4edb-a85f-c370730e8c4c" containerName="dnsmasq-dns" Oct 08 08:11:13 crc kubenswrapper[4958]: E1008 08:11:13.425777 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55533c86-b6a5-4edb-a85f-c370730e8c4c" containerName="init" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.425785 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="55533c86-b6a5-4edb-a85f-c370730e8c4c" containerName="init" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.426008 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="55533c86-b6a5-4edb-a85f-c370730e8c4c" containerName="dnsmasq-dns" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.427098 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.436200 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.452030 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.588422 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-config-data\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.588632 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.588816 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.588885 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-scripts\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.589189 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79638794-d019-4140-abfd-f688211cd51d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.589399 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvt22\" (UniqueName: \"kubernetes.io/projected/79638794-d019-4140-abfd-f688211cd51d-kube-api-access-kvt22\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.691350 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvt22\" (UniqueName: \"kubernetes.io/projected/79638794-d019-4140-abfd-f688211cd51d-kube-api-access-kvt22\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.691522 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-config-data\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.691596 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.691676 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.691707 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-scripts\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.691778 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79638794-d019-4140-abfd-f688211cd51d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.691903 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79638794-d019-4140-abfd-f688211cd51d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.699839 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-config-data\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.700701 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " 
pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.700776 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-scripts\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.701605 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.723794 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvt22\" (UniqueName: \"kubernetes.io/projected/79638794-d019-4140-abfd-f688211cd51d-kube-api-access-kvt22\") pod \"cinder-scheduler-0\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:13 crc kubenswrapper[4958]: I1008 08:11:13.755204 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 08:11:14 crc kubenswrapper[4958]: I1008 08:11:14.249275 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 08:11:14 crc kubenswrapper[4958]: I1008 08:11:14.891784 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79638794-d019-4140-abfd-f688211cd51d","Type":"ContainerStarted","Data":"aef99a92545ca20cfbb7e027d0fd0bbbdbca8e4f9b9d91efb0f8fcdbf28b2197"} Oct 08 08:11:15 crc kubenswrapper[4958]: I1008 08:11:15.262732 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 08 08:11:15 crc kubenswrapper[4958]: I1008 08:11:15.263358 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="28f8e13c-33c9-4b8a-8b46-06610dd0c091" containerName="cinder-api-log" containerID="cri-o://970bc214f96d521d18cc185bf6429e01f3cf18964870c906be15c337fe779fe0" gracePeriod=30 Oct 08 08:11:15 crc kubenswrapper[4958]: I1008 08:11:15.263439 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="28f8e13c-33c9-4b8a-8b46-06610dd0c091" containerName="cinder-api" containerID="cri-o://fa007ac4178468692e490ce99df1707f4595b46f5403e13bfd7e4f8ce1332aa3" gracePeriod=30 Oct 08 08:11:15 crc kubenswrapper[4958]: I1008 08:11:15.922014 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"28f8e13c-33c9-4b8a-8b46-06610dd0c091","Type":"ContainerDied","Data":"970bc214f96d521d18cc185bf6429e01f3cf18964870c906be15c337fe779fe0"} Oct 08 08:11:15 crc kubenswrapper[4958]: I1008 08:11:15.921940 4958 generic.go:334] "Generic (PLEG): container finished" podID="28f8e13c-33c9-4b8a-8b46-06610dd0c091" containerID="970bc214f96d521d18cc185bf6429e01f3cf18964870c906be15c337fe779fe0" exitCode=143 Oct 08 08:11:15 crc kubenswrapper[4958]: I1008 08:11:15.930615 4958 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79638794-d019-4140-abfd-f688211cd51d","Type":"ContainerStarted","Data":"fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28"} Oct 08 08:11:15 crc kubenswrapper[4958]: I1008 08:11:15.930664 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79638794-d019-4140-abfd-f688211cd51d","Type":"ContainerStarted","Data":"708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835"} Oct 08 08:11:15 crc kubenswrapper[4958]: I1008 08:11:15.953252 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.953225619 podStartE2EDuration="2.953225619s" podCreationTimestamp="2025-10-08 08:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:11:15.952811538 +0000 UTC m=+5819.082504149" watchObservedRunningTime="2025-10-08 08:11:15.953225619 +0000 UTC m=+5819.082918260" Oct 08 08:11:18 crc kubenswrapper[4958]: I1008 08:11:18.755474 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 08 08:11:18 crc kubenswrapper[4958]: I1008 08:11:18.950243 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 08 08:11:18 crc kubenswrapper[4958]: I1008 08:11:18.965125 4958 generic.go:334] "Generic (PLEG): container finished" podID="28f8e13c-33c9-4b8a-8b46-06610dd0c091" containerID="fa007ac4178468692e490ce99df1707f4595b46f5403e13bfd7e4f8ce1332aa3" exitCode=0 Oct 08 08:11:18 crc kubenswrapper[4958]: I1008 08:11:18.965169 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"28f8e13c-33c9-4b8a-8b46-06610dd0c091","Type":"ContainerDied","Data":"fa007ac4178468692e490ce99df1707f4595b46f5403e13bfd7e4f8ce1332aa3"} Oct 08 08:11:18 crc kubenswrapper[4958]: I1008 08:11:18.965197 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"28f8e13c-33c9-4b8a-8b46-06610dd0c091","Type":"ContainerDied","Data":"fa8e3d6b51667b336bad019678680736715d5ab898c9b4ee8c21c8c844d8b4f1"} Oct 08 08:11:18 crc kubenswrapper[4958]: I1008 08:11:18.965216 4958 scope.go:117] "RemoveContainer" containerID="fa007ac4178468692e490ce99df1707f4595b46f5403e13bfd7e4f8ce1332aa3" Oct 08 08:11:18 crc kubenswrapper[4958]: I1008 08:11:18.965279 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.019305 4958 scope.go:117] "RemoveContainer" containerID="970bc214f96d521d18cc185bf6429e01f3cf18964870c906be15c337fe779fe0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.044555 4958 scope.go:117] "RemoveContainer" containerID="fa007ac4178468692e490ce99df1707f4595b46f5403e13bfd7e4f8ce1332aa3" Oct 08 08:11:19 crc kubenswrapper[4958]: E1008 08:11:19.045176 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa007ac4178468692e490ce99df1707f4595b46f5403e13bfd7e4f8ce1332aa3\": container with ID starting with fa007ac4178468692e490ce99df1707f4595b46f5403e13bfd7e4f8ce1332aa3 not found: ID does not exist" containerID="fa007ac4178468692e490ce99df1707f4595b46f5403e13bfd7e4f8ce1332aa3" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.045244 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa007ac4178468692e490ce99df1707f4595b46f5403e13bfd7e4f8ce1332aa3"} err="failed to get container status \"fa007ac4178468692e490ce99df1707f4595b46f5403e13bfd7e4f8ce1332aa3\": rpc error: code = NotFound desc = could not find container \"fa007ac4178468692e490ce99df1707f4595b46f5403e13bfd7e4f8ce1332aa3\": container with ID starting with fa007ac4178468692e490ce99df1707f4595b46f5403e13bfd7e4f8ce1332aa3 not found: ID does not exist" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.045273 4958 scope.go:117] "RemoveContainer" containerID="970bc214f96d521d18cc185bf6429e01f3cf18964870c906be15c337fe779fe0" Oct 08 08:11:19 crc kubenswrapper[4958]: E1008 08:11:19.045970 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"970bc214f96d521d18cc185bf6429e01f3cf18964870c906be15c337fe779fe0\": container with ID starting with 
970bc214f96d521d18cc185bf6429e01f3cf18964870c906be15c337fe779fe0 not found: ID does not exist" containerID="970bc214f96d521d18cc185bf6429e01f3cf18964870c906be15c337fe779fe0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.046013 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970bc214f96d521d18cc185bf6429e01f3cf18964870c906be15c337fe779fe0"} err="failed to get container status \"970bc214f96d521d18cc185bf6429e01f3cf18964870c906be15c337fe779fe0\": rpc error: code = NotFound desc = could not find container \"970bc214f96d521d18cc185bf6429e01f3cf18964870c906be15c337fe779fe0\": container with ID starting with 970bc214f96d521d18cc185bf6429e01f3cf18964870c906be15c337fe779fe0 not found: ID does not exist" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.102940 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-combined-ca-bundle\") pod \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.103071 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-scripts\") pod \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.103112 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-public-tls-certs\") pod \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.103166 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-config-data-custom\") pod \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.103243 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-config-data\") pod \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.104630 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f8e13c-33c9-4b8a-8b46-06610dd0c091-logs\") pod \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.104686 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28f8e13c-33c9-4b8a-8b46-06610dd0c091-etc-machine-id\") pod \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.104802 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sq6z\" (UniqueName: \"kubernetes.io/projected/28f8e13c-33c9-4b8a-8b46-06610dd0c091-kube-api-access-5sq6z\") pod \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.104910 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28f8e13c-33c9-4b8a-8b46-06610dd0c091-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "28f8e13c-33c9-4b8a-8b46-06610dd0c091" (UID: "28f8e13c-33c9-4b8a-8b46-06610dd0c091"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.105066 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-internal-tls-certs\") pod \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\" (UID: \"28f8e13c-33c9-4b8a-8b46-06610dd0c091\") " Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.105141 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28f8e13c-33c9-4b8a-8b46-06610dd0c091-logs" (OuterVolumeSpecName: "logs") pod "28f8e13c-33c9-4b8a-8b46-06610dd0c091" (UID: "28f8e13c-33c9-4b8a-8b46-06610dd0c091"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.105559 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28f8e13c-33c9-4b8a-8b46-06610dd0c091-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.105584 4958 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28f8e13c-33c9-4b8a-8b46-06610dd0c091-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.111189 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "28f8e13c-33c9-4b8a-8b46-06610dd0c091" (UID: "28f8e13c-33c9-4b8a-8b46-06610dd0c091"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.123129 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f8e13c-33c9-4b8a-8b46-06610dd0c091-kube-api-access-5sq6z" (OuterVolumeSpecName: "kube-api-access-5sq6z") pod "28f8e13c-33c9-4b8a-8b46-06610dd0c091" (UID: "28f8e13c-33c9-4b8a-8b46-06610dd0c091"). InnerVolumeSpecName "kube-api-access-5sq6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.123321 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-scripts" (OuterVolumeSpecName: "scripts") pod "28f8e13c-33c9-4b8a-8b46-06610dd0c091" (UID: "28f8e13c-33c9-4b8a-8b46-06610dd0c091"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.147664 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28f8e13c-33c9-4b8a-8b46-06610dd0c091" (UID: "28f8e13c-33c9-4b8a-8b46-06610dd0c091"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.175188 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "28f8e13c-33c9-4b8a-8b46-06610dd0c091" (UID: "28f8e13c-33c9-4b8a-8b46-06610dd0c091"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.178838 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "28f8e13c-33c9-4b8a-8b46-06610dd0c091" (UID: "28f8e13c-33c9-4b8a-8b46-06610dd0c091"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.181751 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-config-data" (OuterVolumeSpecName: "config-data") pod "28f8e13c-33c9-4b8a-8b46-06610dd0c091" (UID: "28f8e13c-33c9-4b8a-8b46-06610dd0c091"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.206971 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.207012 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.207022 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.207031 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 
08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.207040 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.207048 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sq6z\" (UniqueName: \"kubernetes.io/projected/28f8e13c-33c9-4b8a-8b46-06610dd0c091-kube-api-access-5sq6z\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.207059 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28f8e13c-33c9-4b8a-8b46-06610dd0c091-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.301940 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.314910 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.332993 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 08 08:11:19 crc kubenswrapper[4958]: E1008 08:11:19.333380 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f8e13c-33c9-4b8a-8b46-06610dd0c091" containerName="cinder-api" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.333400 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f8e13c-33c9-4b8a-8b46-06610dd0c091" containerName="cinder-api" Oct 08 08:11:19 crc kubenswrapper[4958]: E1008 08:11:19.333429 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f8e13c-33c9-4b8a-8b46-06610dd0c091" containerName="cinder-api-log" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.333437 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f8e13c-33c9-4b8a-8b46-06610dd0c091" 
containerName="cinder-api-log" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.333605 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f8e13c-33c9-4b8a-8b46-06610dd0c091" containerName="cinder-api" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.333619 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f8e13c-33c9-4b8a-8b46-06610dd0c091" containerName="cinder-api-log" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.334552 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.337063 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.337153 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.337495 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.344520 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.514789 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6148a060-757f-4ab5-8cf2-e3f378a6ce59-logs\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.514833 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 
08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.514869 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8v58\" (UniqueName: \"kubernetes.io/projected/6148a060-757f-4ab5-8cf2-e3f378a6ce59-kube-api-access-x8v58\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.514925 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-config-data-custom\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.514977 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.515110 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6148a060-757f-4ab5-8cf2-e3f378a6ce59-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.515508 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-scripts\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.515709 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.515824 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-config-data\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.594836 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28f8e13c-33c9-4b8a-8b46-06610dd0c091" path="/var/lib/kubelet/pods/28f8e13c-33c9-4b8a-8b46-06610dd0c091/volumes" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.617848 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.618136 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-config-data\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.618195 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6148a060-757f-4ab5-8cf2-e3f378a6ce59-logs\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 
08:11:19.618217 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.618240 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8v58\" (UniqueName: \"kubernetes.io/projected/6148a060-757f-4ab5-8cf2-e3f378a6ce59-kube-api-access-x8v58\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.618272 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-config-data-custom\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.618297 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.618323 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6148a060-757f-4ab5-8cf2-e3f378a6ce59-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.618368 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-scripts\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.620229 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6148a060-757f-4ab5-8cf2-e3f378a6ce59-logs\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.620343 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6148a060-757f-4ab5-8cf2-e3f378a6ce59-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.623729 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.625607 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-scripts\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.626769 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.629971 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-config-data\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.633275 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-config-data-custom\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.637441 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6148a060-757f-4ab5-8cf2-e3f378a6ce59-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.646741 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8v58\" (UniqueName: \"kubernetes.io/projected/6148a060-757f-4ab5-8cf2-e3f378a6ce59-kube-api-access-x8v58\") pod \"cinder-api-0\" (UID: \"6148a060-757f-4ab5-8cf2-e3f378a6ce59\") " pod="openstack/cinder-api-0" Oct 08 08:11:19 crc kubenswrapper[4958]: I1008 08:11:19.688086 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 08 08:11:20 crc kubenswrapper[4958]: I1008 08:11:20.174793 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 08 08:11:20 crc kubenswrapper[4958]: W1008 08:11:20.178563 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6148a060_757f_4ab5_8cf2_e3f378a6ce59.slice/crio-6eeb0d537d513ce629808d2f701e3a9533c4a012107ab356f06b3a6e43d65d04 WatchSource:0}: Error finding container 6eeb0d537d513ce629808d2f701e3a9533c4a012107ab356f06b3a6e43d65d04: Status 404 returned error can't find the container with id 6eeb0d537d513ce629808d2f701e3a9533c4a012107ab356f06b3a6e43d65d04 Oct 08 08:11:20 crc kubenswrapper[4958]: I1008 08:11:20.990657 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6148a060-757f-4ab5-8cf2-e3f378a6ce59","Type":"ContainerStarted","Data":"c271b36be47b06ddbaffbc03a6e4bf066ae0231146d2548cda5abb7dede6448a"} Oct 08 08:11:20 crc kubenswrapper[4958]: I1008 08:11:20.991265 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6148a060-757f-4ab5-8cf2-e3f378a6ce59","Type":"ContainerStarted","Data":"6eeb0d537d513ce629808d2f701e3a9533c4a012107ab356f06b3a6e43d65d04"} Oct 08 08:11:22 crc kubenswrapper[4958]: I1008 08:11:22.003851 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6148a060-757f-4ab5-8cf2-e3f378a6ce59","Type":"ContainerStarted","Data":"28de3481357fd6118a7ec19d78418180fa9f94211c0c24edda89eb216bc538b8"} Oct 08 08:11:22 crc kubenswrapper[4958]: I1008 08:11:22.004236 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 08 08:11:22 crc kubenswrapper[4958]: I1008 08:11:22.041994 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.041926272 
podStartE2EDuration="3.041926272s" podCreationTimestamp="2025-10-08 08:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:11:22.029937127 +0000 UTC m=+5825.159629768" watchObservedRunningTime="2025-10-08 08:11:22.041926272 +0000 UTC m=+5825.171618903" Oct 08 08:11:24 crc kubenswrapper[4958]: I1008 08:11:24.052613 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 08 08:11:24 crc kubenswrapper[4958]: I1008 08:11:24.145567 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 08:11:25 crc kubenswrapper[4958]: I1008 08:11:25.037365 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="79638794-d019-4140-abfd-f688211cd51d" containerName="cinder-scheduler" containerID="cri-o://708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835" gracePeriod=30 Oct 08 08:11:25 crc kubenswrapper[4958]: I1008 08:11:25.037457 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="79638794-d019-4140-abfd-f688211cd51d" containerName="probe" containerID="cri-o://fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28" gracePeriod=30 Oct 08 08:11:25 crc kubenswrapper[4958]: I1008 08:11:25.999166 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.072930 4958 generic.go:334] "Generic (PLEG): container finished" podID="79638794-d019-4140-abfd-f688211cd51d" containerID="fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28" exitCode=0 Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.072991 4958 generic.go:334] "Generic (PLEG): container finished" podID="79638794-d019-4140-abfd-f688211cd51d" containerID="708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835" exitCode=0 Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.073018 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79638794-d019-4140-abfd-f688211cd51d","Type":"ContainerDied","Data":"fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28"} Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.073049 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79638794-d019-4140-abfd-f688211cd51d","Type":"ContainerDied","Data":"708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835"} Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.073063 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79638794-d019-4140-abfd-f688211cd51d","Type":"ContainerDied","Data":"aef99a92545ca20cfbb7e027d0fd0bbbdbca8e4f9b9d91efb0f8fcdbf28b2197"} Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.073083 4958 scope.go:117] "RemoveContainer" containerID="fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.073123 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.106149 4958 scope.go:117] "RemoveContainer" containerID="708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.123694 4958 scope.go:117] "RemoveContainer" containerID="fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28" Oct 08 08:11:26 crc kubenswrapper[4958]: E1008 08:11:26.124286 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28\": container with ID starting with fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28 not found: ID does not exist" containerID="fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.124354 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28"} err="failed to get container status \"fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28\": rpc error: code = NotFound desc = could not find container \"fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28\": container with ID starting with fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28 not found: ID does not exist" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.124408 4958 scope.go:117] "RemoveContainer" containerID="708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835" Oct 08 08:11:26 crc kubenswrapper[4958]: E1008 08:11:26.124874 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835\": container with ID starting with 
708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835 not found: ID does not exist" containerID="708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.124916 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835"} err="failed to get container status \"708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835\": rpc error: code = NotFound desc = could not find container \"708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835\": container with ID starting with 708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835 not found: ID does not exist" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.124955 4958 scope.go:117] "RemoveContainer" containerID="fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.125300 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28"} err="failed to get container status \"fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28\": rpc error: code = NotFound desc = could not find container \"fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28\": container with ID starting with fdc2fbcd9a44f97dd3a068e150a9ef594d83bd65e0eeb6547fc6a83d69c77d28 not found: ID does not exist" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.125336 4958 scope.go:117] "RemoveContainer" containerID="708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.125592 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835"} err="failed to get container status 
\"708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835\": rpc error: code = NotFound desc = could not find container \"708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835\": container with ID starting with 708d7e2fa0f86c22a730cf9ec87dd3b7e71c4886a2e55fc06f4248d2f70ea835 not found: ID does not exist" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.153349 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-config-data-custom\") pod \"79638794-d019-4140-abfd-f688211cd51d\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.153410 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-scripts\") pod \"79638794-d019-4140-abfd-f688211cd51d\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.153441 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-combined-ca-bundle\") pod \"79638794-d019-4140-abfd-f688211cd51d\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.153541 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79638794-d019-4140-abfd-f688211cd51d-etc-machine-id\") pod \"79638794-d019-4140-abfd-f688211cd51d\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.153628 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-config-data\") pod 
\"79638794-d019-4140-abfd-f688211cd51d\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.153656 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvt22\" (UniqueName: \"kubernetes.io/projected/79638794-d019-4140-abfd-f688211cd51d-kube-api-access-kvt22\") pod \"79638794-d019-4140-abfd-f688211cd51d\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.153806 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79638794-d019-4140-abfd-f688211cd51d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "79638794-d019-4140-abfd-f688211cd51d" (UID: "79638794-d019-4140-abfd-f688211cd51d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.154084 4958 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79638794-d019-4140-abfd-f688211cd51d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.159094 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-scripts" (OuterVolumeSpecName: "scripts") pod "79638794-d019-4140-abfd-f688211cd51d" (UID: "79638794-d019-4140-abfd-f688211cd51d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.159204 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79638794-d019-4140-abfd-f688211cd51d-kube-api-access-kvt22" (OuterVolumeSpecName: "kube-api-access-kvt22") pod "79638794-d019-4140-abfd-f688211cd51d" (UID: "79638794-d019-4140-abfd-f688211cd51d"). InnerVolumeSpecName "kube-api-access-kvt22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.169880 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "79638794-d019-4140-abfd-f688211cd51d" (UID: "79638794-d019-4140-abfd-f688211cd51d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.206910 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79638794-d019-4140-abfd-f688211cd51d" (UID: "79638794-d019-4140-abfd-f688211cd51d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.256544 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-config-data" (OuterVolumeSpecName: "config-data") pod "79638794-d019-4140-abfd-f688211cd51d" (UID: "79638794-d019-4140-abfd-f688211cd51d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.260246 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-config-data\") pod \"79638794-d019-4140-abfd-f688211cd51d\" (UID: \"79638794-d019-4140-abfd-f688211cd51d\") " Oct 08 08:11:26 crc kubenswrapper[4958]: W1008 08:11:26.260335 4958 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/79638794-d019-4140-abfd-f688211cd51d/volumes/kubernetes.io~secret/config-data Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.260349 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-config-data" (OuterVolumeSpecName: "config-data") pod "79638794-d019-4140-abfd-f688211cd51d" (UID: "79638794-d019-4140-abfd-f688211cd51d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.263094 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.263119 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvt22\" (UniqueName: \"kubernetes.io/projected/79638794-d019-4140-abfd-f688211cd51d-kube-api-access-kvt22\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.263135 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.263144 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.263151 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79638794-d019-4140-abfd-f688211cd51d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.433113 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.440745 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.449247 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 08:11:26 crc kubenswrapper[4958]: E1008 08:11:26.449918 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="79638794-d019-4140-abfd-f688211cd51d" containerName="cinder-scheduler" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.450008 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="79638794-d019-4140-abfd-f688211cd51d" containerName="cinder-scheduler" Oct 08 08:11:26 crc kubenswrapper[4958]: E1008 08:11:26.450059 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79638794-d019-4140-abfd-f688211cd51d" containerName="probe" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.450081 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="79638794-d019-4140-abfd-f688211cd51d" containerName="probe" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.450372 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="79638794-d019-4140-abfd-f688211cd51d" containerName="cinder-scheduler" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.450425 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="79638794-d019-4140-abfd-f688211cd51d" containerName="probe" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.452023 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.455478 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.457299 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.569016 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0aacf99-afe9-4f82-aabe-91eca23794e2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.569252 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvfp5\" (UniqueName: \"kubernetes.io/projected/e0aacf99-afe9-4f82-aabe-91eca23794e2-kube-api-access-nvfp5\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.569382 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0aacf99-afe9-4f82-aabe-91eca23794e2-config-data\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.569425 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0aacf99-afe9-4f82-aabe-91eca23794e2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 
08:11:26.569483 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0aacf99-afe9-4f82-aabe-91eca23794e2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.569858 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0aacf99-afe9-4f82-aabe-91eca23794e2-scripts\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.671514 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvfp5\" (UniqueName: \"kubernetes.io/projected/e0aacf99-afe9-4f82-aabe-91eca23794e2-kube-api-access-nvfp5\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.671622 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0aacf99-afe9-4f82-aabe-91eca23794e2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.671657 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0aacf99-afe9-4f82-aabe-91eca23794e2-config-data\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.671707 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/e0aacf99-afe9-4f82-aabe-91eca23794e2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.671857 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0aacf99-afe9-4f82-aabe-91eca23794e2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.671914 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0aacf99-afe9-4f82-aabe-91eca23794e2-scripts\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.672122 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0aacf99-afe9-4f82-aabe-91eca23794e2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.676025 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0aacf99-afe9-4f82-aabe-91eca23794e2-scripts\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.676219 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0aacf99-afe9-4f82-aabe-91eca23794e2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " 
pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.676891 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0aacf99-afe9-4f82-aabe-91eca23794e2-config-data\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.677016 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0aacf99-afe9-4f82-aabe-91eca23794e2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.692272 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvfp5\" (UniqueName: \"kubernetes.io/projected/e0aacf99-afe9-4f82-aabe-91eca23794e2-kube-api-access-nvfp5\") pod \"cinder-scheduler-0\" (UID: \"e0aacf99-afe9-4f82-aabe-91eca23794e2\") " pod="openstack/cinder-scheduler-0" Oct 08 08:11:26 crc kubenswrapper[4958]: I1008 08:11:26.827673 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 08 08:11:27 crc kubenswrapper[4958]: W1008 08:11:27.119555 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0aacf99_afe9_4f82_aabe_91eca23794e2.slice/crio-31c9ca387398f8c2a40a08f580e5403a1a5d4563a804b86bd72af695d32b62d7 WatchSource:0}: Error finding container 31c9ca387398f8c2a40a08f580e5403a1a5d4563a804b86bd72af695d32b62d7: Status 404 returned error can't find the container with id 31c9ca387398f8c2a40a08f580e5403a1a5d4563a804b86bd72af695d32b62d7 Oct 08 08:11:27 crc kubenswrapper[4958]: I1008 08:11:27.123041 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 08 08:11:27 crc kubenswrapper[4958]: I1008 08:11:27.590453 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79638794-d019-4140-abfd-f688211cd51d" path="/var/lib/kubelet/pods/79638794-d019-4140-abfd-f688211cd51d/volumes" Oct 08 08:11:28 crc kubenswrapper[4958]: I1008 08:11:28.100789 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e0aacf99-afe9-4f82-aabe-91eca23794e2","Type":"ContainerStarted","Data":"2c6a7aacf18e7ba7054c3a03d5972b3c2948475d5479751524393553a38dcd61"} Oct 08 08:11:28 crc kubenswrapper[4958]: I1008 08:11:28.101220 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e0aacf99-afe9-4f82-aabe-91eca23794e2","Type":"ContainerStarted","Data":"31c9ca387398f8c2a40a08f580e5403a1a5d4563a804b86bd72af695d32b62d7"} Oct 08 08:11:29 crc kubenswrapper[4958]: I1008 08:11:29.119593 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e0aacf99-afe9-4f82-aabe-91eca23794e2","Type":"ContainerStarted","Data":"152a39a7380ab138cd450f1cf25a7bc1a03baaad6eeea5d4b23ad3a94c254ab8"} Oct 08 08:11:29 crc kubenswrapper[4958]: I1008 08:11:29.152116 4958 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.152091582 podStartE2EDuration="3.152091582s" podCreationTimestamp="2025-10-08 08:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:11:29.145604436 +0000 UTC m=+5832.275297117" watchObservedRunningTime="2025-10-08 08:11:29.152091582 +0000 UTC m=+5832.281784223" Oct 08 08:11:31 crc kubenswrapper[4958]: I1008 08:11:31.450162 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 08 08:11:31 crc kubenswrapper[4958]: I1008 08:11:31.828191 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 08 08:11:37 crc kubenswrapper[4958]: I1008 08:11:37.072856 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 08 08:11:39 crc kubenswrapper[4958]: I1008 08:11:39.392917 4958 scope.go:117] "RemoveContainer" containerID="05ec61d6b5c4627d4045dc58c3aad92012b4d497011c607bfb3612a4d2d20fb9" Oct 08 08:11:40 crc kubenswrapper[4958]: I1008 08:11:40.657364 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2bw6q"] Oct 08 08:11:40 crc kubenswrapper[4958]: I1008 08:11:40.659454 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2bw6q" Oct 08 08:11:40 crc kubenswrapper[4958]: I1008 08:11:40.671037 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2bw6q"] Oct 08 08:11:40 crc kubenswrapper[4958]: I1008 08:11:40.762192 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd5wf\" (UniqueName: \"kubernetes.io/projected/67f8f08a-b162-4b26-8cc2-3102a81310bf-kube-api-access-gd5wf\") pod \"glance-db-create-2bw6q\" (UID: \"67f8f08a-b162-4b26-8cc2-3102a81310bf\") " pod="openstack/glance-db-create-2bw6q" Oct 08 08:11:40 crc kubenswrapper[4958]: I1008 08:11:40.863243 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd5wf\" (UniqueName: \"kubernetes.io/projected/67f8f08a-b162-4b26-8cc2-3102a81310bf-kube-api-access-gd5wf\") pod \"glance-db-create-2bw6q\" (UID: \"67f8f08a-b162-4b26-8cc2-3102a81310bf\") " pod="openstack/glance-db-create-2bw6q" Oct 08 08:11:40 crc kubenswrapper[4958]: I1008 08:11:40.890696 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd5wf\" (UniqueName: \"kubernetes.io/projected/67f8f08a-b162-4b26-8cc2-3102a81310bf-kube-api-access-gd5wf\") pod \"glance-db-create-2bw6q\" (UID: \"67f8f08a-b162-4b26-8cc2-3102a81310bf\") " pod="openstack/glance-db-create-2bw6q" Oct 08 08:11:40 crc kubenswrapper[4958]: I1008 08:11:40.981978 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2bw6q" Oct 08 08:11:41 crc kubenswrapper[4958]: I1008 08:11:41.516176 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2bw6q"] Oct 08 08:11:42 crc kubenswrapper[4958]: I1008 08:11:42.265796 4958 generic.go:334] "Generic (PLEG): container finished" podID="67f8f08a-b162-4b26-8cc2-3102a81310bf" containerID="3d1730d7f164127e1f402fc206425c582aae62bf93d5f72c1a9bbfba6a3b8d00" exitCode=0 Oct 08 08:11:42 crc kubenswrapper[4958]: I1008 08:11:42.265925 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2bw6q" event={"ID":"67f8f08a-b162-4b26-8cc2-3102a81310bf","Type":"ContainerDied","Data":"3d1730d7f164127e1f402fc206425c582aae62bf93d5f72c1a9bbfba6a3b8d00"} Oct 08 08:11:42 crc kubenswrapper[4958]: I1008 08:11:42.266245 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2bw6q" event={"ID":"67f8f08a-b162-4b26-8cc2-3102a81310bf","Type":"ContainerStarted","Data":"c8b2a267b2a741b1cd31f0171eaa7b25fc7791db700302f4ba4a72940e3058cb"} Oct 08 08:11:43 crc kubenswrapper[4958]: I1008 08:11:43.724294 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2bw6q" Oct 08 08:11:43 crc kubenswrapper[4958]: I1008 08:11:43.845735 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd5wf\" (UniqueName: \"kubernetes.io/projected/67f8f08a-b162-4b26-8cc2-3102a81310bf-kube-api-access-gd5wf\") pod \"67f8f08a-b162-4b26-8cc2-3102a81310bf\" (UID: \"67f8f08a-b162-4b26-8cc2-3102a81310bf\") " Oct 08 08:11:43 crc kubenswrapper[4958]: I1008 08:11:43.853125 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f8f08a-b162-4b26-8cc2-3102a81310bf-kube-api-access-gd5wf" (OuterVolumeSpecName: "kube-api-access-gd5wf") pod "67f8f08a-b162-4b26-8cc2-3102a81310bf" (UID: "67f8f08a-b162-4b26-8cc2-3102a81310bf"). InnerVolumeSpecName "kube-api-access-gd5wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:11:43 crc kubenswrapper[4958]: I1008 08:11:43.947973 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd5wf\" (UniqueName: \"kubernetes.io/projected/67f8f08a-b162-4b26-8cc2-3102a81310bf-kube-api-access-gd5wf\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:44 crc kubenswrapper[4958]: I1008 08:11:44.307458 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2bw6q" event={"ID":"67f8f08a-b162-4b26-8cc2-3102a81310bf","Type":"ContainerDied","Data":"c8b2a267b2a741b1cd31f0171eaa7b25fc7791db700302f4ba4a72940e3058cb"} Oct 08 08:11:44 crc kubenswrapper[4958]: I1008 08:11:44.307505 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b2a267b2a741b1cd31f0171eaa7b25fc7791db700302f4ba4a72940e3058cb" Oct 08 08:11:44 crc kubenswrapper[4958]: I1008 08:11:44.307529 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2bw6q" Oct 08 08:11:50 crc kubenswrapper[4958]: I1008 08:11:50.770239 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-acfa-account-create-7m5qn"] Oct 08 08:11:50 crc kubenswrapper[4958]: E1008 08:11:50.771533 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f8f08a-b162-4b26-8cc2-3102a81310bf" containerName="mariadb-database-create" Oct 08 08:11:50 crc kubenswrapper[4958]: I1008 08:11:50.771560 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f8f08a-b162-4b26-8cc2-3102a81310bf" containerName="mariadb-database-create" Oct 08 08:11:50 crc kubenswrapper[4958]: I1008 08:11:50.771885 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f8f08a-b162-4b26-8cc2-3102a81310bf" containerName="mariadb-database-create" Oct 08 08:11:50 crc kubenswrapper[4958]: I1008 08:11:50.773298 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-acfa-account-create-7m5qn" Oct 08 08:11:50 crc kubenswrapper[4958]: I1008 08:11:50.777539 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 08 08:11:50 crc kubenswrapper[4958]: I1008 08:11:50.798343 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-acfa-account-create-7m5qn"] Oct 08 08:11:50 crc kubenswrapper[4958]: I1008 08:11:50.887283 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w99bk\" (UniqueName: \"kubernetes.io/projected/17c14b79-c6c8-4d47-a7ac-cb45766955c0-kube-api-access-w99bk\") pod \"glance-acfa-account-create-7m5qn\" (UID: \"17c14b79-c6c8-4d47-a7ac-cb45766955c0\") " pod="openstack/glance-acfa-account-create-7m5qn" Oct 08 08:11:50 crc kubenswrapper[4958]: I1008 08:11:50.989027 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w99bk\" (UniqueName: 
\"kubernetes.io/projected/17c14b79-c6c8-4d47-a7ac-cb45766955c0-kube-api-access-w99bk\") pod \"glance-acfa-account-create-7m5qn\" (UID: \"17c14b79-c6c8-4d47-a7ac-cb45766955c0\") " pod="openstack/glance-acfa-account-create-7m5qn" Oct 08 08:11:51 crc kubenswrapper[4958]: I1008 08:11:51.026708 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w99bk\" (UniqueName: \"kubernetes.io/projected/17c14b79-c6c8-4d47-a7ac-cb45766955c0-kube-api-access-w99bk\") pod \"glance-acfa-account-create-7m5qn\" (UID: \"17c14b79-c6c8-4d47-a7ac-cb45766955c0\") " pod="openstack/glance-acfa-account-create-7m5qn" Oct 08 08:11:51 crc kubenswrapper[4958]: I1008 08:11:51.116510 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-acfa-account-create-7m5qn" Oct 08 08:11:51 crc kubenswrapper[4958]: I1008 08:11:51.642572 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-acfa-account-create-7m5qn"] Oct 08 08:11:52 crc kubenswrapper[4958]: I1008 08:11:52.405040 4958 generic.go:334] "Generic (PLEG): container finished" podID="17c14b79-c6c8-4d47-a7ac-cb45766955c0" containerID="4d9eb497cd07b6efaef4ab492e1d246cb89a1edda44392542bb5dc019bdea167" exitCode=0 Oct 08 08:11:52 crc kubenswrapper[4958]: I1008 08:11:52.405138 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-acfa-account-create-7m5qn" event={"ID":"17c14b79-c6c8-4d47-a7ac-cb45766955c0","Type":"ContainerDied","Data":"4d9eb497cd07b6efaef4ab492e1d246cb89a1edda44392542bb5dc019bdea167"} Oct 08 08:11:52 crc kubenswrapper[4958]: I1008 08:11:52.405409 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-acfa-account-create-7m5qn" event={"ID":"17c14b79-c6c8-4d47-a7ac-cb45766955c0","Type":"ContainerStarted","Data":"68358c93a1c3037f030d57372063c9534a42a25371256dc903ed40892d52f7f2"} Oct 08 08:11:53 crc kubenswrapper[4958]: I1008 08:11:53.889407 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-acfa-account-create-7m5qn" Oct 08 08:11:53 crc kubenswrapper[4958]: I1008 08:11:53.943743 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w99bk\" (UniqueName: \"kubernetes.io/projected/17c14b79-c6c8-4d47-a7ac-cb45766955c0-kube-api-access-w99bk\") pod \"17c14b79-c6c8-4d47-a7ac-cb45766955c0\" (UID: \"17c14b79-c6c8-4d47-a7ac-cb45766955c0\") " Oct 08 08:11:53 crc kubenswrapper[4958]: I1008 08:11:53.978766 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c14b79-c6c8-4d47-a7ac-cb45766955c0-kube-api-access-w99bk" (OuterVolumeSpecName: "kube-api-access-w99bk") pod "17c14b79-c6c8-4d47-a7ac-cb45766955c0" (UID: "17c14b79-c6c8-4d47-a7ac-cb45766955c0"). InnerVolumeSpecName "kube-api-access-w99bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:11:54 crc kubenswrapper[4958]: I1008 08:11:54.046191 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w99bk\" (UniqueName: \"kubernetes.io/projected/17c14b79-c6c8-4d47-a7ac-cb45766955c0-kube-api-access-w99bk\") on node \"crc\" DevicePath \"\"" Oct 08 08:11:54 crc kubenswrapper[4958]: I1008 08:11:54.426609 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-acfa-account-create-7m5qn" event={"ID":"17c14b79-c6c8-4d47-a7ac-cb45766955c0","Type":"ContainerDied","Data":"68358c93a1c3037f030d57372063c9534a42a25371256dc903ed40892d52f7f2"} Oct 08 08:11:54 crc kubenswrapper[4958]: I1008 08:11:54.426667 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68358c93a1c3037f030d57372063c9534a42a25371256dc903ed40892d52f7f2" Oct 08 08:11:54 crc kubenswrapper[4958]: I1008 08:11:54.426645 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-acfa-account-create-7m5qn" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.848040 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-rbgl9"] Oct 08 08:11:55 crc kubenswrapper[4958]: E1008 08:11:55.848971 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c14b79-c6c8-4d47-a7ac-cb45766955c0" containerName="mariadb-account-create" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.848988 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c14b79-c6c8-4d47-a7ac-cb45766955c0" containerName="mariadb-account-create" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.849231 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c14b79-c6c8-4d47-a7ac-cb45766955c0" containerName="mariadb-account-create" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.849993 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rbgl9" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.852875 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-drzbv" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.853035 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.856972 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rbgl9"] Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.887225 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-db-sync-config-data\") pod \"glance-db-sync-rbgl9\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") " pod="openstack/glance-db-sync-rbgl9" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.887279 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-combined-ca-bundle\") pod \"glance-db-sync-rbgl9\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") " pod="openstack/glance-db-sync-rbgl9" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.887473 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t46qf\" (UniqueName: \"kubernetes.io/projected/03647698-5b8c-4794-a7e4-62812753afd2-kube-api-access-t46qf\") pod \"glance-db-sync-rbgl9\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") " pod="openstack/glance-db-sync-rbgl9" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.887657 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-config-data\") pod \"glance-db-sync-rbgl9\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") " pod="openstack/glance-db-sync-rbgl9" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.988641 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-db-sync-config-data\") pod \"glance-db-sync-rbgl9\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") " pod="openstack/glance-db-sync-rbgl9" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.988691 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-combined-ca-bundle\") pod \"glance-db-sync-rbgl9\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") " pod="openstack/glance-db-sync-rbgl9" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.988755 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t46qf\" (UniqueName: \"kubernetes.io/projected/03647698-5b8c-4794-a7e4-62812753afd2-kube-api-access-t46qf\") pod \"glance-db-sync-rbgl9\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") " pod="openstack/glance-db-sync-rbgl9" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.988806 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-config-data\") pod \"glance-db-sync-rbgl9\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") " pod="openstack/glance-db-sync-rbgl9" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.995368 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-db-sync-config-data\") pod \"glance-db-sync-rbgl9\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") " pod="openstack/glance-db-sync-rbgl9" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.995469 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-combined-ca-bundle\") pod \"glance-db-sync-rbgl9\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") " pod="openstack/glance-db-sync-rbgl9" Oct 08 08:11:55 crc kubenswrapper[4958]: I1008 08:11:55.995674 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-config-data\") pod \"glance-db-sync-rbgl9\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") " pod="openstack/glance-db-sync-rbgl9" Oct 08 08:11:56 crc kubenswrapper[4958]: I1008 08:11:56.027443 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t46qf\" (UniqueName: 
\"kubernetes.io/projected/03647698-5b8c-4794-a7e4-62812753afd2-kube-api-access-t46qf\") pod \"glance-db-sync-rbgl9\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") " pod="openstack/glance-db-sync-rbgl9" Oct 08 08:11:56 crc kubenswrapper[4958]: I1008 08:11:56.180516 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rbgl9" Oct 08 08:11:56 crc kubenswrapper[4958]: I1008 08:11:56.780759 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rbgl9"] Oct 08 08:11:56 crc kubenswrapper[4958]: W1008 08:11:56.788069 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03647698_5b8c_4794_a7e4_62812753afd2.slice/crio-d03bcdb375bac983464c79e80ba683b9a42fcf10bc2c6289e9bea236b7f88195 WatchSource:0}: Error finding container d03bcdb375bac983464c79e80ba683b9a42fcf10bc2c6289e9bea236b7f88195: Status 404 returned error can't find the container with id d03bcdb375bac983464c79e80ba683b9a42fcf10bc2c6289e9bea236b7f88195 Oct 08 08:11:57 crc kubenswrapper[4958]: I1008 08:11:57.461967 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rbgl9" event={"ID":"03647698-5b8c-4794-a7e4-62812753afd2","Type":"ContainerStarted","Data":"d03bcdb375bac983464c79e80ba683b9a42fcf10bc2c6289e9bea236b7f88195"} Oct 08 08:11:58 crc kubenswrapper[4958]: I1008 08:11:58.471979 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rbgl9" event={"ID":"03647698-5b8c-4794-a7e4-62812753afd2","Type":"ContainerStarted","Data":"9159fca489b7bdce707a6935f4e0a36140a52121b3b73df5ce9001efdf6c04c1"} Oct 08 08:12:01 crc kubenswrapper[4958]: I1008 08:12:01.510214 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rbgl9" event={"ID":"03647698-5b8c-4794-a7e4-62812753afd2","Type":"ContainerDied","Data":"9159fca489b7bdce707a6935f4e0a36140a52121b3b73df5ce9001efdf6c04c1"} Oct 
08 08:12:01 crc kubenswrapper[4958]: I1008 08:12:01.510312 4958 generic.go:334] "Generic (PLEG): container finished" podID="03647698-5b8c-4794-a7e4-62812753afd2" containerID="9159fca489b7bdce707a6935f4e0a36140a52121b3b73df5ce9001efdf6c04c1" exitCode=0
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.057916 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rbgl9"
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.133208 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-db-sync-config-data\") pod \"03647698-5b8c-4794-a7e4-62812753afd2\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") "
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.133265 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-config-data\") pod \"03647698-5b8c-4794-a7e4-62812753afd2\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") "
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.133305 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-combined-ca-bundle\") pod \"03647698-5b8c-4794-a7e4-62812753afd2\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") "
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.133336 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t46qf\" (UniqueName: \"kubernetes.io/projected/03647698-5b8c-4794-a7e4-62812753afd2-kube-api-access-t46qf\") pod \"03647698-5b8c-4794-a7e4-62812753afd2\" (UID: \"03647698-5b8c-4794-a7e4-62812753afd2\") "
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.140587 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03647698-5b8c-4794-a7e4-62812753afd2-kube-api-access-t46qf" (OuterVolumeSpecName: "kube-api-access-t46qf") pod "03647698-5b8c-4794-a7e4-62812753afd2" (UID: "03647698-5b8c-4794-a7e4-62812753afd2"). InnerVolumeSpecName "kube-api-access-t46qf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.142681 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "03647698-5b8c-4794-a7e4-62812753afd2" (UID: "03647698-5b8c-4794-a7e4-62812753afd2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.167527 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03647698-5b8c-4794-a7e4-62812753afd2" (UID: "03647698-5b8c-4794-a7e4-62812753afd2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.213303 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-config-data" (OuterVolumeSpecName: "config-data") pod "03647698-5b8c-4794-a7e4-62812753afd2" (UID: "03647698-5b8c-4794-a7e4-62812753afd2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.235405 4958 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.235524 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.235596 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03647698-5b8c-4794-a7e4-62812753afd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.235648 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t46qf\" (UniqueName: \"kubernetes.io/projected/03647698-5b8c-4794-a7e4-62812753afd2-kube-api-access-t46qf\") on node \"crc\" DevicePath \"\""
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.542896 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rbgl9" event={"ID":"03647698-5b8c-4794-a7e4-62812753afd2","Type":"ContainerDied","Data":"d03bcdb375bac983464c79e80ba683b9a42fcf10bc2c6289e9bea236b7f88195"}
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.542998 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d03bcdb375bac983464c79e80ba683b9a42fcf10bc2c6289e9bea236b7f88195"
Oct 08 08:12:03 crc kubenswrapper[4958]: I1008 08:12:03.543026 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rbgl9"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.004223 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7855b8cc9-txhs2"]
Oct 08 08:12:04 crc kubenswrapper[4958]: E1008 08:12:04.005402 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03647698-5b8c-4794-a7e4-62812753afd2" containerName="glance-db-sync"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.005433 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="03647698-5b8c-4794-a7e4-62812753afd2" containerName="glance-db-sync"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.005758 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="03647698-5b8c-4794-a7e4-62812753afd2" containerName="glance-db-sync"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.007217 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.027368 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7855b8cc9-txhs2"]
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.050032 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.051594 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.052546 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l68h8\" (UniqueName: \"kubernetes.io/projected/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-kube-api-access-l68h8\") pod \"dnsmasq-dns-7855b8cc9-txhs2\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.052585 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-config\") pod \"dnsmasq-dns-7855b8cc9-txhs2\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.052642 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-dns-svc\") pod \"dnsmasq-dns-7855b8cc9-txhs2\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.052704 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-ovsdbserver-nb\") pod \"dnsmasq-dns-7855b8cc9-txhs2\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.052743 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-ovsdbserver-sb\") pod \"dnsmasq-dns-7855b8cc9-txhs2\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.056344 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-drzbv"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.056506 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.056620 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.079149 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.143429 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.144732 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.148373 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.151217 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.153911 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-ovsdbserver-nb\") pod \"dnsmasq-dns-7855b8cc9-txhs2\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.153980 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-ovsdbserver-sb\") pod \"dnsmasq-dns-7855b8cc9-txhs2\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.154024 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l68h8\" (UniqueName: \"kubernetes.io/projected/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-kube-api-access-l68h8\") pod \"dnsmasq-dns-7855b8cc9-txhs2\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.154051 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-config\") pod \"dnsmasq-dns-7855b8cc9-txhs2\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.154109 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-dns-svc\") pod \"dnsmasq-dns-7855b8cc9-txhs2\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.154931 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-dns-svc\") pod \"dnsmasq-dns-7855b8cc9-txhs2\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.155089 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-ovsdbserver-sb\") pod \"dnsmasq-dns-7855b8cc9-txhs2\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.155635 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-ovsdbserver-nb\") pod \"dnsmasq-dns-7855b8cc9-txhs2\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.155705 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-config\") pod \"dnsmasq-dns-7855b8cc9-txhs2\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.203060 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l68h8\" (UniqueName: \"kubernetes.io/projected/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-kube-api-access-l68h8\") pod \"dnsmasq-dns-7855b8cc9-txhs2\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.256048 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb9rf\" (UniqueName: \"kubernetes.io/projected/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-kube-api-access-jb9rf\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.256091 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.256120 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mqts\" (UniqueName: \"kubernetes.io/projected/ecf1ce59-75d0-426e-a80f-b5a7ba706472-kube-api-access-4mqts\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.256147 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.256180 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.256195 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-scripts\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.256210 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.256318 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.256494 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-config-data\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.256532 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-logs\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.256558 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecf1ce59-75d0-426e-a80f-b5a7ba706472-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.256637 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecf1ce59-75d0-426e-a80f-b5a7ba706472-logs\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.332112 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.357680 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecf1ce59-75d0-426e-a80f-b5a7ba706472-logs\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.357909 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb9rf\" (UniqueName: \"kubernetes.io/projected/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-kube-api-access-jb9rf\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.358004 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.358118 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mqts\" (UniqueName: \"kubernetes.io/projected/ecf1ce59-75d0-426e-a80f-b5a7ba706472-kube-api-access-4mqts\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.358195 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.358278 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.358341 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-scripts\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.358403 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.358479 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.358574 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-config-data\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.358641 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-logs\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.358704 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecf1ce59-75d0-426e-a80f-b5a7ba706472-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.358310 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecf1ce59-75d0-426e-a80f-b5a7ba706472-logs\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.358607 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.359825 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-logs\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.360074 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecf1ce59-75d0-426e-a80f-b5a7ba706472-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.361604 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-scripts\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.361865 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.364300 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.369914 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-config-data\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.372317 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.373533 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.377818 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb9rf\" (UniqueName: \"kubernetes.io/projected/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-kube-api-access-jb9rf\") pod \"glance-default-internal-api-0\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.378917 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mqts\" (UniqueName: \"kubernetes.io/projected/ecf1ce59-75d0-426e-a80f-b5a7ba706472-kube-api-access-4mqts\") pod \"glance-default-external-api-0\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " pod="openstack/glance-default-external-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.484290 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.641209 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7855b8cc9-txhs2"]
Oct 08 08:12:04 crc kubenswrapper[4958]: I1008 08:12:04.675059 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 08 08:12:05 crc kubenswrapper[4958]: I1008 08:12:05.049936 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 08 08:12:05 crc kubenswrapper[4958]: I1008 08:12:05.222881 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 08 08:12:05 crc kubenswrapper[4958]: W1008 08:12:05.261433 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecf1ce59_75d0_426e_a80f_b5a7ba706472.slice/crio-03c71f3d100c6f4f12e400ab17b8bb3241fdcc0c66b9c5217046989e3487fc1a WatchSource:0}: Error finding container 03c71f3d100c6f4f12e400ab17b8bb3241fdcc0c66b9c5217046989e3487fc1a: Status 404 returned error can't find the container with id 03c71f3d100c6f4f12e400ab17b8bb3241fdcc0c66b9c5217046989e3487fc1a
Oct 08 08:12:05 crc kubenswrapper[4958]: I1008 08:12:05.265075 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 08 08:12:05 crc kubenswrapper[4958]: I1008 08:12:05.569575 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0e91c7a4-1b02-48b6-b59f-7c9c779a472f","Type":"ContainerStarted","Data":"de18ca0393dddf01be62f02291138c24e2dee18135558cca90f35beb03896f7b"}
Oct 08 08:12:05 crc kubenswrapper[4958]: I1008 08:12:05.570663 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecf1ce59-75d0-426e-a80f-b5a7ba706472","Type":"ContainerStarted","Data":"03c71f3d100c6f4f12e400ab17b8bb3241fdcc0c66b9c5217046989e3487fc1a"}
Oct 08 08:12:05 crc kubenswrapper[4958]: I1008 08:12:05.571744 4958 generic.go:334] "Generic (PLEG): container finished" podID="087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5" containerID="84d24d4af8729189dd900b4cf4a0c610e85462e02d6717a9f9165b42c34b47fe" exitCode=0
Oct 08 08:12:05 crc kubenswrapper[4958]: I1008 08:12:05.571777 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7855b8cc9-txhs2" event={"ID":"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5","Type":"ContainerDied","Data":"84d24d4af8729189dd900b4cf4a0c610e85462e02d6717a9f9165b42c34b47fe"}
Oct 08 08:12:05 crc kubenswrapper[4958]: I1008 08:12:05.571798 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7855b8cc9-txhs2" event={"ID":"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5","Type":"ContainerStarted","Data":"87445be327c42f89b067a2295dad53e63326db65d10cbf0b6bf6cd9a721dea85"}
Oct 08 08:12:06 crc kubenswrapper[4958]: I1008 08:12:06.220163 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 08 08:12:06 crc kubenswrapper[4958]: I1008 08:12:06.582039 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0e91c7a4-1b02-48b6-b59f-7c9c779a472f","Type":"ContainerStarted","Data":"6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f"}
Oct 08 08:12:06 crc kubenswrapper[4958]: I1008 08:12:06.582312 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0e91c7a4-1b02-48b6-b59f-7c9c779a472f","Type":"ContainerStarted","Data":"311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d"}
Oct 08 08:12:06 crc kubenswrapper[4958]: I1008 08:12:06.584228 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecf1ce59-75d0-426e-a80f-b5a7ba706472","Type":"ContainerStarted","Data":"22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1"}
Oct 08 08:12:06 crc kubenswrapper[4958]: I1008 08:12:06.584262 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecf1ce59-75d0-426e-a80f-b5a7ba706472","Type":"ContainerStarted","Data":"e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a"}
Oct 08 08:12:06 crc kubenswrapper[4958]: I1008 08:12:06.584317 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ecf1ce59-75d0-426e-a80f-b5a7ba706472" containerName="glance-log" containerID="cri-o://e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a" gracePeriod=30
Oct 08 08:12:06 crc kubenswrapper[4958]: I1008 08:12:06.584583 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ecf1ce59-75d0-426e-a80f-b5a7ba706472" containerName="glance-httpd" containerID="cri-o://22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1" gracePeriod=30
Oct 08 08:12:06 crc kubenswrapper[4958]: I1008 08:12:06.586476 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7855b8cc9-txhs2" event={"ID":"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5","Type":"ContainerStarted","Data":"d3a7c4d9115739b8134dabbd560daef61515f1e0ddbeabfbe74425da581e7ed8"}
Oct 08 08:12:06 crc kubenswrapper[4958]: I1008 08:12:06.586970 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7855b8cc9-txhs2"
Oct 08 08:12:06 crc kubenswrapper[4958]: I1008 08:12:06.604834 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.604815474 podStartE2EDuration="2.604815474s" podCreationTimestamp="2025-10-08 08:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:12:06.600583659 +0000 UTC m=+5869.730276260" watchObservedRunningTime="2025-10-08 08:12:06.604815474 +0000 UTC m=+5869.734508075"
Oct 08 08:12:06 crc kubenswrapper[4958]: I1008 08:12:06.642612 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7855b8cc9-txhs2" podStartSLOduration=3.642596618 podStartE2EDuration="3.642596618s" podCreationTimestamp="2025-10-08 08:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:12:06.641610991 +0000 UTC m=+5869.771303612" watchObservedRunningTime="2025-10-08 08:12:06.642596618 +0000 UTC m=+5869.772289219"
Oct 08 08:12:06 crc kubenswrapper[4958]: I1008 08:12:06.646656 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.646648487 podStartE2EDuration="3.646648487s" podCreationTimestamp="2025-10-08 08:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:12:06.626417069 +0000 UTC m=+5869.756109670" watchObservedRunningTime="2025-10-08 08:12:06.646648487 +0000 UTC m=+5869.776341088"
Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.223384 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.314734 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecf1ce59-75d0-426e-a80f-b5a7ba706472-logs\") pod \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") "
Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.314887 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-combined-ca-bundle\") pod \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") "
Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.314923 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-config-data\") pod \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") "
Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.314987 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecf1ce59-75d0-426e-a80f-b5a7ba706472-httpd-run\") pod \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") "
Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.315026 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mqts\" (UniqueName: \"kubernetes.io/projected/ecf1ce59-75d0-426e-a80f-b5a7ba706472-kube-api-access-4mqts\") pod \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") "
Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.315056 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-scripts\") pod \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\" (UID: \"ecf1ce59-75d0-426e-a80f-b5a7ba706472\") " Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.315591 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf1ce59-75d0-426e-a80f-b5a7ba706472-logs" (OuterVolumeSpecName: "logs") pod "ecf1ce59-75d0-426e-a80f-b5a7ba706472" (UID: "ecf1ce59-75d0-426e-a80f-b5a7ba706472"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.315612 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf1ce59-75d0-426e-a80f-b5a7ba706472-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ecf1ce59-75d0-426e-a80f-b5a7ba706472" (UID: "ecf1ce59-75d0-426e-a80f-b5a7ba706472"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.344008 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf1ce59-75d0-426e-a80f-b5a7ba706472-kube-api-access-4mqts" (OuterVolumeSpecName: "kube-api-access-4mqts") pod "ecf1ce59-75d0-426e-a80f-b5a7ba706472" (UID: "ecf1ce59-75d0-426e-a80f-b5a7ba706472"). InnerVolumeSpecName "kube-api-access-4mqts". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.346167 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-scripts" (OuterVolumeSpecName: "scripts") pod "ecf1ce59-75d0-426e-a80f-b5a7ba706472" (UID: "ecf1ce59-75d0-426e-a80f-b5a7ba706472"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.371221 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecf1ce59-75d0-426e-a80f-b5a7ba706472" (UID: "ecf1ce59-75d0-426e-a80f-b5a7ba706472"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.392622 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-config-data" (OuterVolumeSpecName: "config-data") pod "ecf1ce59-75d0-426e-a80f-b5a7ba706472" (UID: "ecf1ce59-75d0-426e-a80f-b5a7ba706472"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.417305 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecf1ce59-75d0-426e-a80f-b5a7ba706472-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.417555 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.417712 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.417828 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ecf1ce59-75d0-426e-a80f-b5a7ba706472-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:07 crc 
kubenswrapper[4958]: I1008 08:12:07.417943 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mqts\" (UniqueName: \"kubernetes.io/projected/ecf1ce59-75d0-426e-a80f-b5a7ba706472-kube-api-access-4mqts\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.418093 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf1ce59-75d0-426e-a80f-b5a7ba706472-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.619888 4958 generic.go:334] "Generic (PLEG): container finished" podID="ecf1ce59-75d0-426e-a80f-b5a7ba706472" containerID="22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1" exitCode=0 Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.619927 4958 generic.go:334] "Generic (PLEG): container finished" podID="ecf1ce59-75d0-426e-a80f-b5a7ba706472" containerID="e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a" exitCode=143 Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.620132 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0e91c7a4-1b02-48b6-b59f-7c9c779a472f" containerName="glance-log" containerID="cri-o://311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d" gracePeriod=30 Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.620221 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.620871 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecf1ce59-75d0-426e-a80f-b5a7ba706472","Type":"ContainerDied","Data":"22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1"} Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.620894 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecf1ce59-75d0-426e-a80f-b5a7ba706472","Type":"ContainerDied","Data":"e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a"} Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.620903 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ecf1ce59-75d0-426e-a80f-b5a7ba706472","Type":"ContainerDied","Data":"03c71f3d100c6f4f12e400ab17b8bb3241fdcc0c66b9c5217046989e3487fc1a"} Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.620917 4958 scope.go:117] "RemoveContainer" containerID="22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.621692 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0e91c7a4-1b02-48b6-b59f-7c9c779a472f" containerName="glance-httpd" containerID="cri-o://6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f" gracePeriod=30 Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.653465 4958 scope.go:117] "RemoveContainer" containerID="e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.655878 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.667370 4958 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.676237 4958 scope.go:117] "RemoveContainer" containerID="22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1" Oct 08 08:12:07 crc kubenswrapper[4958]: E1008 08:12:07.676612 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1\": container with ID starting with 22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1 not found: ID does not exist" containerID="22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.676644 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1"} err="failed to get container status \"22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1\": rpc error: code = NotFound desc = could not find container \"22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1\": container with ID starting with 22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1 not found: ID does not exist" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.676680 4958 scope.go:117] "RemoveContainer" containerID="e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a" Oct 08 08:12:07 crc kubenswrapper[4958]: E1008 08:12:07.677078 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a\": container with ID starting with e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a not found: ID does not exist" containerID="e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 
08:12:07.677103 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a"} err="failed to get container status \"e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a\": rpc error: code = NotFound desc = could not find container \"e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a\": container with ID starting with e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a not found: ID does not exist" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.677116 4958 scope.go:117] "RemoveContainer" containerID="22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.677849 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1"} err="failed to get container status \"22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1\": rpc error: code = NotFound desc = could not find container \"22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1\": container with ID starting with 22225e98a9165b06b193d2d90f2bfc2bec6fed498d128612a76b7c49279416b1 not found: ID does not exist" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.677875 4958 scope.go:117] "RemoveContainer" containerID="e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.678241 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a"} err="failed to get container status \"e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a\": rpc error: code = NotFound desc = could not find container \"e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a\": container with ID starting with 
e810a0fc0b71836de19742a1cb7cdca64a49263fa645dc0ea31999f05bfbc14a not found: ID does not exist" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.681566 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 08:12:07 crc kubenswrapper[4958]: E1008 08:12:07.682037 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf1ce59-75d0-426e-a80f-b5a7ba706472" containerName="glance-log" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.682053 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf1ce59-75d0-426e-a80f-b5a7ba706472" containerName="glance-log" Oct 08 08:12:07 crc kubenswrapper[4958]: E1008 08:12:07.682065 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf1ce59-75d0-426e-a80f-b5a7ba706472" containerName="glance-httpd" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.682072 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf1ce59-75d0-426e-a80f-b5a7ba706472" containerName="glance-httpd" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.682224 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf1ce59-75d0-426e-a80f-b5a7ba706472" containerName="glance-log" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.682251 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf1ce59-75d0-426e-a80f-b5a7ba706472" containerName="glance-httpd" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.683299 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.686352 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.686547 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.695145 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.827499 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-logs\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.827866 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.827935 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.828106 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-scripts\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.828249 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86fgd\" (UniqueName: \"kubernetes.io/projected/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-kube-api-access-86fgd\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.828457 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.828622 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-config-data\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.931116 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.931246 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-config-data\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.931323 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-logs\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.931376 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.931405 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.931737 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.932004 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-logs\") pod 
\"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.932070 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-scripts\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.932128 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86fgd\" (UniqueName: \"kubernetes.io/projected/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-kube-api-access-86fgd\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.937226 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.937468 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-scripts\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.938301 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") 
" pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.939909 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-config-data\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:07 crc kubenswrapper[4958]: I1008 08:12:07.949461 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86fgd\" (UniqueName: \"kubernetes.io/projected/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-kube-api-access-86fgd\") pod \"glance-default-external-api-0\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " pod="openstack/glance-default-external-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.001741 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.304334 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.442057 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-combined-ca-bundle\") pod \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.442140 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-config-data\") pod \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.442224 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-logs\") pod \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.442345 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-httpd-run\") pod \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.442365 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-scripts\") pod \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.442393 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb9rf\" (UniqueName: 
\"kubernetes.io/projected/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-kube-api-access-jb9rf\") pod \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\" (UID: \"0e91c7a4-1b02-48b6-b59f-7c9c779a472f\") " Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.443303 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-logs" (OuterVolumeSpecName: "logs") pod "0e91c7a4-1b02-48b6-b59f-7c9c779a472f" (UID: "0e91c7a4-1b02-48b6-b59f-7c9c779a472f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.444199 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0e91c7a4-1b02-48b6-b59f-7c9c779a472f" (UID: "0e91c7a4-1b02-48b6-b59f-7c9c779a472f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.453777 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-scripts" (OuterVolumeSpecName: "scripts") pod "0e91c7a4-1b02-48b6-b59f-7c9c779a472f" (UID: "0e91c7a4-1b02-48b6-b59f-7c9c779a472f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.456171 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-kube-api-access-jb9rf" (OuterVolumeSpecName: "kube-api-access-jb9rf") pod "0e91c7a4-1b02-48b6-b59f-7c9c779a472f" (UID: "0e91c7a4-1b02-48b6-b59f-7c9c779a472f"). InnerVolumeSpecName "kube-api-access-jb9rf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.523212 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e91c7a4-1b02-48b6-b59f-7c9c779a472f" (UID: "0e91c7a4-1b02-48b6-b59f-7c9c779a472f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.545018 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.545074 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.545084 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb9rf\" (UniqueName: \"kubernetes.io/projected/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-kube-api-access-jb9rf\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.545095 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.545104 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.554691 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-config-data" (OuterVolumeSpecName: "config-data") pod "0e91c7a4-1b02-48b6-b59f-7c9c779a472f" (UID: "0e91c7a4-1b02-48b6-b59f-7c9c779a472f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.620248 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 08:12:08 crc kubenswrapper[4958]: W1008 08:12:08.622648 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb43f2f2a_3d0f_47c5_ae82_f41c92f92940.slice/crio-49c967394929e2255450b0e84bfcde51c361b79915cf0f9cefbc2b1b75b1be9e WatchSource:0}: Error finding container 49c967394929e2255450b0e84bfcde51c361b79915cf0f9cefbc2b1b75b1be9e: Status 404 returned error can't find the container with id 49c967394929e2255450b0e84bfcde51c361b79915cf0f9cefbc2b1b75b1be9e Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.632151 4958 generic.go:334] "Generic (PLEG): container finished" podID="0e91c7a4-1b02-48b6-b59f-7c9c779a472f" containerID="6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f" exitCode=0 Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.632366 4958 generic.go:334] "Generic (PLEG): container finished" podID="0e91c7a4-1b02-48b6-b59f-7c9c779a472f" containerID="311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d" exitCode=143 Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.632452 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0e91c7a4-1b02-48b6-b59f-7c9c779a472f","Type":"ContainerDied","Data":"6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f"} Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.632490 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"0e91c7a4-1b02-48b6-b59f-7c9c779a472f","Type":"ContainerDied","Data":"311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d"} Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.632509 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0e91c7a4-1b02-48b6-b59f-7c9c779a472f","Type":"ContainerDied","Data":"de18ca0393dddf01be62f02291138c24e2dee18135558cca90f35beb03896f7b"} Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.632540 4958 scope.go:117] "RemoveContainer" containerID="6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.632734 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.648664 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e91c7a4-1b02-48b6-b59f-7c9c779a472f-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.654183 4958 scope.go:117] "RemoveContainer" containerID="311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.673557 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.686318 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.688315 4958 scope.go:117] "RemoveContainer" containerID="6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f" Oct 08 08:12:08 crc kubenswrapper[4958]: E1008 08:12:08.689587 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f\": container with ID starting with 6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f not found: ID does not exist" containerID="6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.689620 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f"} err="failed to get container status \"6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f\": rpc error: code = NotFound desc = could not find container \"6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f\": container with ID starting with 6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f not found: ID does not exist" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.689645 4958 scope.go:117] "RemoveContainer" containerID="311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d" Oct 08 08:12:08 crc kubenswrapper[4958]: E1008 08:12:08.690011 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d\": container with ID starting with 311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d not found: ID does not exist" containerID="311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.690034 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d"} err="failed to get container status \"311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d\": rpc error: code = NotFound desc = could not find container \"311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d\": container with ID 
starting with 311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d not found: ID does not exist" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.690046 4958 scope.go:117] "RemoveContainer" containerID="6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.690261 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f"} err="failed to get container status \"6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f\": rpc error: code = NotFound desc = could not find container \"6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f\": container with ID starting with 6749775151bc9b56fe764223b851f2cc63067d73afba03184d1e3a95acaead4f not found: ID does not exist" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.690278 4958 scope.go:117] "RemoveContainer" containerID="311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.690440 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d"} err="failed to get container status \"311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d\": rpc error: code = NotFound desc = could not find container \"311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d\": container with ID starting with 311b2450ea7000c479effed661311b5110b68edc74a902186b374d3efba7526d not found: ID does not exist" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.702215 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 08:12:08 crc kubenswrapper[4958]: E1008 08:12:08.702684 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e91c7a4-1b02-48b6-b59f-7c9c779a472f" 
containerName="glance-log" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.702702 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e91c7a4-1b02-48b6-b59f-7c9c779a472f" containerName="glance-log" Oct 08 08:12:08 crc kubenswrapper[4958]: E1008 08:12:08.702721 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e91c7a4-1b02-48b6-b59f-7c9c779a472f" containerName="glance-httpd" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.702728 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e91c7a4-1b02-48b6-b59f-7c9c779a472f" containerName="glance-httpd" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.702898 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e91c7a4-1b02-48b6-b59f-7c9c779a472f" containerName="glance-log" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.712868 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e91c7a4-1b02-48b6-b59f-7c9c779a472f" containerName="glance-httpd" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.716299 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.716492 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.718805 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.718931 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.852357 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.852416 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m22p6\" (UniqueName: \"kubernetes.io/projected/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-kube-api-access-m22p6\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.852487 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.852518 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.852552 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-logs\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.852579 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.852613 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.954102 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-logs\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.954161 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.954206 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.954251 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.954285 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m22p6\" (UniqueName: \"kubernetes.io/projected/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-kube-api-access-m22p6\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.954332 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.954355 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " 
pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.954585 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-logs\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.955562 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.958321 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.958679 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.958760 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.959861 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:08 crc kubenswrapper[4958]: I1008 08:12:08.971602 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m22p6\" (UniqueName: \"kubernetes.io/projected/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-kube-api-access-m22p6\") pod \"glance-default-internal-api-0\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:12:09 crc kubenswrapper[4958]: I1008 08:12:09.046245 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 08:12:09 crc kubenswrapper[4958]: W1008 08:12:09.600729 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4388f0e1_bbd7_4dbf_bf83_9fa4d209702a.slice/crio-361a63ddc5281589d7cda2ad8a00cc936b1059be4d3c09bb730eb4fbbf0e2c9b WatchSource:0}: Error finding container 361a63ddc5281589d7cda2ad8a00cc936b1059be4d3c09bb730eb4fbbf0e2c9b: Status 404 returned error can't find the container with id 361a63ddc5281589d7cda2ad8a00cc936b1059be4d3c09bb730eb4fbbf0e2c9b Oct 08 08:12:09 crc kubenswrapper[4958]: I1008 08:12:09.616176 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e91c7a4-1b02-48b6-b59f-7c9c779a472f" path="/var/lib/kubelet/pods/0e91c7a4-1b02-48b6-b59f-7c9c779a472f/volumes" Oct 08 08:12:09 crc kubenswrapper[4958]: I1008 08:12:09.617114 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecf1ce59-75d0-426e-a80f-b5a7ba706472" path="/var/lib/kubelet/pods/ecf1ce59-75d0-426e-a80f-b5a7ba706472/volumes" Oct 08 08:12:09 crc kubenswrapper[4958]: I1008 08:12:09.618000 4958 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 08:12:09 crc kubenswrapper[4958]: I1008 08:12:09.654918 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b43f2f2a-3d0f-47c5-ae82-f41c92f92940","Type":"ContainerStarted","Data":"37e535dfc52bf6b506a278b1310e2acabb4d573da202348a5b666a21c4b1e3fb"} Oct 08 08:12:09 crc kubenswrapper[4958]: I1008 08:12:09.654974 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b43f2f2a-3d0f-47c5-ae82-f41c92f92940","Type":"ContainerStarted","Data":"49c967394929e2255450b0e84bfcde51c361b79915cf0f9cefbc2b1b75b1be9e"} Oct 08 08:12:09 crc kubenswrapper[4958]: I1008 08:12:09.656384 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a","Type":"ContainerStarted","Data":"361a63ddc5281589d7cda2ad8a00cc936b1059be4d3c09bb730eb4fbbf0e2c9b"} Oct 08 08:12:10 crc kubenswrapper[4958]: I1008 08:12:10.668253 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b43f2f2a-3d0f-47c5-ae82-f41c92f92940","Type":"ContainerStarted","Data":"2a2fa833c7b91b59cdec46ffb28f02c165e7e45d3df20547f34a00275ab8885c"} Oct 08 08:12:10 crc kubenswrapper[4958]: I1008 08:12:10.670983 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a","Type":"ContainerStarted","Data":"0e425be0e64a7767b1ca0ca3d1900f9d505c2ddb5672349d3a197e86b77ebc0c"} Oct 08 08:12:10 crc kubenswrapper[4958]: I1008 08:12:10.694514 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.694491734 podStartE2EDuration="3.694491734s" podCreationTimestamp="2025-10-08 08:12:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:12:10.686622571 +0000 UTC m=+5873.816315172" watchObservedRunningTime="2025-10-08 08:12:10.694491734 +0000 UTC m=+5873.824184345" Oct 08 08:12:11 crc kubenswrapper[4958]: I1008 08:12:11.694680 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a","Type":"ContainerStarted","Data":"42ef3a8f9f3dc9d5cef6d0004bde410102251ede81062a659db088a7b73ea401"} Oct 08 08:12:14 crc kubenswrapper[4958]: I1008 08:12:14.334242 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7855b8cc9-txhs2" Oct 08 08:12:14 crc kubenswrapper[4958]: I1008 08:12:14.371463 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.371432121 podStartE2EDuration="6.371432121s" podCreationTimestamp="2025-10-08 08:12:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:12:11.737242381 +0000 UTC m=+5874.866935052" watchObservedRunningTime="2025-10-08 08:12:14.371432121 +0000 UTC m=+5877.501124752" Oct 08 08:12:14 crc kubenswrapper[4958]: I1008 08:12:14.435078 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dfb96c8bc-nlv9l"] Oct 08 08:12:14 crc kubenswrapper[4958]: I1008 08:12:14.435440 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" podUID="ec426777-7768-4cce-abd8-2116ed2fb2e4" containerName="dnsmasq-dns" containerID="cri-o://f4b1d693a8d3400b519696978093c97e505eae4a114b70c0ac7bde3e3a0375b0" gracePeriod=10 Oct 08 08:12:14 crc kubenswrapper[4958]: I1008 08:12:14.727597 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="ec426777-7768-4cce-abd8-2116ed2fb2e4" containerID="f4b1d693a8d3400b519696978093c97e505eae4a114b70c0ac7bde3e3a0375b0" exitCode=0 Oct 08 08:12:14 crc kubenswrapper[4958]: I1008 08:12:14.727817 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" event={"ID":"ec426777-7768-4cce-abd8-2116ed2fb2e4","Type":"ContainerDied","Data":"f4b1d693a8d3400b519696978093c97e505eae4a114b70c0ac7bde3e3a0375b0"} Oct 08 08:12:14 crc kubenswrapper[4958]: I1008 08:12:14.916769 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:12:14 crc kubenswrapper[4958]: I1008 08:12:14.986097 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-config\") pod \"ec426777-7768-4cce-abd8-2116ed2fb2e4\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " Oct 08 08:12:14 crc kubenswrapper[4958]: I1008 08:12:14.986360 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-ovsdbserver-sb\") pod \"ec426777-7768-4cce-abd8-2116ed2fb2e4\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " Oct 08 08:12:14 crc kubenswrapper[4958]: I1008 08:12:14.986470 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-ovsdbserver-nb\") pod \"ec426777-7768-4cce-abd8-2116ed2fb2e4\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " Oct 08 08:12:14 crc kubenswrapper[4958]: I1008 08:12:14.986511 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlkhs\" (UniqueName: \"kubernetes.io/projected/ec426777-7768-4cce-abd8-2116ed2fb2e4-kube-api-access-hlkhs\") pod 
\"ec426777-7768-4cce-abd8-2116ed2fb2e4\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " Oct 08 08:12:14 crc kubenswrapper[4958]: I1008 08:12:14.986591 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-dns-svc\") pod \"ec426777-7768-4cce-abd8-2116ed2fb2e4\" (UID: \"ec426777-7768-4cce-abd8-2116ed2fb2e4\") " Oct 08 08:12:14 crc kubenswrapper[4958]: I1008 08:12:14.991477 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec426777-7768-4cce-abd8-2116ed2fb2e4-kube-api-access-hlkhs" (OuterVolumeSpecName: "kube-api-access-hlkhs") pod "ec426777-7768-4cce-abd8-2116ed2fb2e4" (UID: "ec426777-7768-4cce-abd8-2116ed2fb2e4"). InnerVolumeSpecName "kube-api-access-hlkhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:12:15 crc kubenswrapper[4958]: I1008 08:12:15.029147 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec426777-7768-4cce-abd8-2116ed2fb2e4" (UID: "ec426777-7768-4cce-abd8-2116ed2fb2e4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:12:15 crc kubenswrapper[4958]: I1008 08:12:15.036621 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-config" (OuterVolumeSpecName: "config") pod "ec426777-7768-4cce-abd8-2116ed2fb2e4" (UID: "ec426777-7768-4cce-abd8-2116ed2fb2e4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:12:15 crc kubenswrapper[4958]: I1008 08:12:15.043401 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec426777-7768-4cce-abd8-2116ed2fb2e4" (UID: "ec426777-7768-4cce-abd8-2116ed2fb2e4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:12:15 crc kubenswrapper[4958]: I1008 08:12:15.060804 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec426777-7768-4cce-abd8-2116ed2fb2e4" (UID: "ec426777-7768-4cce-abd8-2116ed2fb2e4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:12:15 crc kubenswrapper[4958]: I1008 08:12:15.088631 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:15 crc kubenswrapper[4958]: I1008 08:12:15.088660 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:15 crc kubenswrapper[4958]: I1008 08:12:15.088669 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:15 crc kubenswrapper[4958]: I1008 08:12:15.088679 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlkhs\" (UniqueName: \"kubernetes.io/projected/ec426777-7768-4cce-abd8-2116ed2fb2e4-kube-api-access-hlkhs\") on node \"crc\" 
DevicePath \"\"" Oct 08 08:12:15 crc kubenswrapper[4958]: I1008 08:12:15.088690 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec426777-7768-4cce-abd8-2116ed2fb2e4-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:15 crc kubenswrapper[4958]: I1008 08:12:15.737731 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" event={"ID":"ec426777-7768-4cce-abd8-2116ed2fb2e4","Type":"ContainerDied","Data":"a7fc714aa80f4ca44fb8246d9cae004db0df1aede3268e4937cef0bed664ce25"} Oct 08 08:12:15 crc kubenswrapper[4958]: I1008 08:12:15.737814 4958 scope.go:117] "RemoveContainer" containerID="f4b1d693a8d3400b519696978093c97e505eae4a114b70c0ac7bde3e3a0375b0" Oct 08 08:12:15 crc kubenswrapper[4958]: I1008 08:12:15.737809 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dfb96c8bc-nlv9l" Oct 08 08:12:15 crc kubenswrapper[4958]: I1008 08:12:15.760639 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dfb96c8bc-nlv9l"] Oct 08 08:12:15 crc kubenswrapper[4958]: I1008 08:12:15.766792 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7dfb96c8bc-nlv9l"] Oct 08 08:12:15 crc kubenswrapper[4958]: I1008 08:12:15.769014 4958 scope.go:117] "RemoveContainer" containerID="1653657e8046bc3cd484fd1d9e373df86d157655b4027134bc565c332fd3fe2b" Oct 08 08:12:17 crc kubenswrapper[4958]: I1008 08:12:17.600830 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec426777-7768-4cce-abd8-2116ed2fb2e4" path="/var/lib/kubelet/pods/ec426777-7768-4cce-abd8-2116ed2fb2e4/volumes" Oct 08 08:12:18 crc kubenswrapper[4958]: I1008 08:12:18.002369 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 08 08:12:18 crc kubenswrapper[4958]: I1008 08:12:18.002444 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 08 08:12:18 crc kubenswrapper[4958]: I1008 08:12:18.067178 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 08 08:12:18 crc kubenswrapper[4958]: I1008 08:12:18.092879 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 08 08:12:18 crc kubenswrapper[4958]: I1008 08:12:18.775349 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 08 08:12:18 crc kubenswrapper[4958]: I1008 08:12:18.775393 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 08 08:12:19 crc kubenswrapper[4958]: I1008 08:12:19.047520 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 08 08:12:19 crc kubenswrapper[4958]: I1008 08:12:19.049565 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 08 08:12:19 crc kubenswrapper[4958]: I1008 08:12:19.097187 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 08 08:12:19 crc kubenswrapper[4958]: I1008 08:12:19.122654 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 08 08:12:19 crc kubenswrapper[4958]: I1008 08:12:19.784345 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 08 08:12:19 crc kubenswrapper[4958]: I1008 08:12:19.784592 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 08 08:12:20 crc kubenswrapper[4958]: I1008 08:12:20.793739 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 08 08:12:20 crc kubenswrapper[4958]: I1008 08:12:20.794553 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 08:12:20 crc kubenswrapper[4958]: I1008 08:12:20.814137 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 08 08:12:21 crc kubenswrapper[4958]: I1008 08:12:21.628565 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 08 08:12:21 crc kubenswrapper[4958]: I1008 08:12:21.666177 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 08 08:12:29 crc kubenswrapper[4958]: I1008 08:12:29.846714 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-qnfbz"] Oct 08 08:12:29 crc kubenswrapper[4958]: E1008 08:12:29.847740 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec426777-7768-4cce-abd8-2116ed2fb2e4" containerName="dnsmasq-dns" Oct 08 08:12:29 crc kubenswrapper[4958]: I1008 08:12:29.847761 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec426777-7768-4cce-abd8-2116ed2fb2e4" containerName="dnsmasq-dns" Oct 08 08:12:29 crc kubenswrapper[4958]: E1008 08:12:29.847802 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec426777-7768-4cce-abd8-2116ed2fb2e4" containerName="init" Oct 08 08:12:29 crc kubenswrapper[4958]: I1008 08:12:29.847810 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec426777-7768-4cce-abd8-2116ed2fb2e4" containerName="init" Oct 08 08:12:29 crc kubenswrapper[4958]: I1008 08:12:29.848069 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec426777-7768-4cce-abd8-2116ed2fb2e4" containerName="dnsmasq-dns" Oct 08 08:12:29 crc kubenswrapper[4958]: I1008 08:12:29.848660 4958 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/placement-db-create-qnfbz"] Oct 08 08:12:29 crc kubenswrapper[4958]: I1008 08:12:29.848749 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qnfbz" Oct 08 08:12:29 crc kubenswrapper[4958]: I1008 08:12:29.877058 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j454b\" (UniqueName: \"kubernetes.io/projected/339c698b-4c11-45e7-99b7-6868ac04f0da-kube-api-access-j454b\") pod \"placement-db-create-qnfbz\" (UID: \"339c698b-4c11-45e7-99b7-6868ac04f0da\") " pod="openstack/placement-db-create-qnfbz" Oct 08 08:12:29 crc kubenswrapper[4958]: I1008 08:12:29.978549 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j454b\" (UniqueName: \"kubernetes.io/projected/339c698b-4c11-45e7-99b7-6868ac04f0da-kube-api-access-j454b\") pod \"placement-db-create-qnfbz\" (UID: \"339c698b-4c11-45e7-99b7-6868ac04f0da\") " pod="openstack/placement-db-create-qnfbz" Oct 08 08:12:30 crc kubenswrapper[4958]: I1008 08:12:30.008097 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j454b\" (UniqueName: \"kubernetes.io/projected/339c698b-4c11-45e7-99b7-6868ac04f0da-kube-api-access-j454b\") pod \"placement-db-create-qnfbz\" (UID: \"339c698b-4c11-45e7-99b7-6868ac04f0da\") " pod="openstack/placement-db-create-qnfbz" Oct 08 08:12:30 crc kubenswrapper[4958]: I1008 08:12:30.202979 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-qnfbz" Oct 08 08:12:30 crc kubenswrapper[4958]: I1008 08:12:30.696618 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qnfbz"] Oct 08 08:12:30 crc kubenswrapper[4958]: I1008 08:12:30.902745 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qnfbz" event={"ID":"339c698b-4c11-45e7-99b7-6868ac04f0da","Type":"ContainerStarted","Data":"57f46683d7eeeef46377caf64109dd7981ce2551666e85dd33c9d9993cd0e245"} Oct 08 08:12:31 crc kubenswrapper[4958]: I1008 08:12:31.919044 4958 generic.go:334] "Generic (PLEG): container finished" podID="339c698b-4c11-45e7-99b7-6868ac04f0da" containerID="9d969ec44549014e9979dea3b938e61a52572574dde68a6727eb5a0c087714cc" exitCode=0 Oct 08 08:12:31 crc kubenswrapper[4958]: I1008 08:12:31.919082 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qnfbz" event={"ID":"339c698b-4c11-45e7-99b7-6868ac04f0da","Type":"ContainerDied","Data":"9d969ec44549014e9979dea3b938e61a52572574dde68a6727eb5a0c087714cc"} Oct 08 08:12:33 crc kubenswrapper[4958]: I1008 08:12:33.352233 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-qnfbz" Oct 08 08:12:33 crc kubenswrapper[4958]: I1008 08:12:33.451281 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j454b\" (UniqueName: \"kubernetes.io/projected/339c698b-4c11-45e7-99b7-6868ac04f0da-kube-api-access-j454b\") pod \"339c698b-4c11-45e7-99b7-6868ac04f0da\" (UID: \"339c698b-4c11-45e7-99b7-6868ac04f0da\") " Oct 08 08:12:33 crc kubenswrapper[4958]: I1008 08:12:33.463805 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339c698b-4c11-45e7-99b7-6868ac04f0da-kube-api-access-j454b" (OuterVolumeSpecName: "kube-api-access-j454b") pod "339c698b-4c11-45e7-99b7-6868ac04f0da" (UID: "339c698b-4c11-45e7-99b7-6868ac04f0da"). InnerVolumeSpecName "kube-api-access-j454b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:12:33 crc kubenswrapper[4958]: I1008 08:12:33.553632 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j454b\" (UniqueName: \"kubernetes.io/projected/339c698b-4c11-45e7-99b7-6868ac04f0da-kube-api-access-j454b\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:33 crc kubenswrapper[4958]: I1008 08:12:33.946769 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qnfbz" event={"ID":"339c698b-4c11-45e7-99b7-6868ac04f0da","Type":"ContainerDied","Data":"57f46683d7eeeef46377caf64109dd7981ce2551666e85dd33c9d9993cd0e245"} Oct 08 08:12:33 crc kubenswrapper[4958]: I1008 08:12:33.947166 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57f46683d7eeeef46377caf64109dd7981ce2551666e85dd33c9d9993cd0e245" Oct 08 08:12:33 crc kubenswrapper[4958]: I1008 08:12:33.946856 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-qnfbz" Oct 08 08:12:39 crc kubenswrapper[4958]: I1008 08:12:39.927646 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e266-account-create-lh8tw"] Oct 08 08:12:39 crc kubenswrapper[4958]: E1008 08:12:39.928920 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339c698b-4c11-45e7-99b7-6868ac04f0da" containerName="mariadb-database-create" Oct 08 08:12:39 crc kubenswrapper[4958]: I1008 08:12:39.928968 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="339c698b-4c11-45e7-99b7-6868ac04f0da" containerName="mariadb-database-create" Oct 08 08:12:39 crc kubenswrapper[4958]: I1008 08:12:39.929298 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="339c698b-4c11-45e7-99b7-6868ac04f0da" containerName="mariadb-database-create" Oct 08 08:12:39 crc kubenswrapper[4958]: I1008 08:12:39.930218 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e266-account-create-lh8tw" Oct 08 08:12:39 crc kubenswrapper[4958]: I1008 08:12:39.933217 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 08 08:12:39 crc kubenswrapper[4958]: I1008 08:12:39.951081 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e266-account-create-lh8tw"] Oct 08 08:12:40 crc kubenswrapper[4958]: I1008 08:12:40.003664 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhfbr\" (UniqueName: \"kubernetes.io/projected/5ad898a7-a520-4df3-a85c-9fa7182286cf-kube-api-access-hhfbr\") pod \"placement-e266-account-create-lh8tw\" (UID: \"5ad898a7-a520-4df3-a85c-9fa7182286cf\") " pod="openstack/placement-e266-account-create-lh8tw" Oct 08 08:12:40 crc kubenswrapper[4958]: I1008 08:12:40.106479 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhfbr\" 
(UniqueName: \"kubernetes.io/projected/5ad898a7-a520-4df3-a85c-9fa7182286cf-kube-api-access-hhfbr\") pod \"placement-e266-account-create-lh8tw\" (UID: \"5ad898a7-a520-4df3-a85c-9fa7182286cf\") " pod="openstack/placement-e266-account-create-lh8tw" Oct 08 08:12:40 crc kubenswrapper[4958]: I1008 08:12:40.135068 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhfbr\" (UniqueName: \"kubernetes.io/projected/5ad898a7-a520-4df3-a85c-9fa7182286cf-kube-api-access-hhfbr\") pod \"placement-e266-account-create-lh8tw\" (UID: \"5ad898a7-a520-4df3-a85c-9fa7182286cf\") " pod="openstack/placement-e266-account-create-lh8tw" Oct 08 08:12:40 crc kubenswrapper[4958]: I1008 08:12:40.265287 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e266-account-create-lh8tw" Oct 08 08:12:40 crc kubenswrapper[4958]: I1008 08:12:40.742851 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e266-account-create-lh8tw"] Oct 08 08:12:41 crc kubenswrapper[4958]: I1008 08:12:41.029351 4958 generic.go:334] "Generic (PLEG): container finished" podID="5ad898a7-a520-4df3-a85c-9fa7182286cf" containerID="70d68eea314efdc7641a8d1f8aa8b4dd755495e6ced34fa6780723e2cd47a1f3" exitCode=0 Oct 08 08:12:41 crc kubenswrapper[4958]: I1008 08:12:41.029472 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e266-account-create-lh8tw" event={"ID":"5ad898a7-a520-4df3-a85c-9fa7182286cf","Type":"ContainerDied","Data":"70d68eea314efdc7641a8d1f8aa8b4dd755495e6ced34fa6780723e2cd47a1f3"} Oct 08 08:12:41 crc kubenswrapper[4958]: I1008 08:12:41.029613 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e266-account-create-lh8tw" event={"ID":"5ad898a7-a520-4df3-a85c-9fa7182286cf","Type":"ContainerStarted","Data":"3cfdb489d2577d3a9b79ec7522fb6fbe4e0ebf7d89316d3018a6bc9555687ef4"} Oct 08 08:12:42 crc kubenswrapper[4958]: I1008 08:12:42.485004 4958 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e266-account-create-lh8tw" Oct 08 08:12:42 crc kubenswrapper[4958]: I1008 08:12:42.594485 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhfbr\" (UniqueName: \"kubernetes.io/projected/5ad898a7-a520-4df3-a85c-9fa7182286cf-kube-api-access-hhfbr\") pod \"5ad898a7-a520-4df3-a85c-9fa7182286cf\" (UID: \"5ad898a7-a520-4df3-a85c-9fa7182286cf\") " Oct 08 08:12:42 crc kubenswrapper[4958]: I1008 08:12:42.603930 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad898a7-a520-4df3-a85c-9fa7182286cf-kube-api-access-hhfbr" (OuterVolumeSpecName: "kube-api-access-hhfbr") pod "5ad898a7-a520-4df3-a85c-9fa7182286cf" (UID: "5ad898a7-a520-4df3-a85c-9fa7182286cf"). InnerVolumeSpecName "kube-api-access-hhfbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:12:42 crc kubenswrapper[4958]: I1008 08:12:42.698223 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhfbr\" (UniqueName: \"kubernetes.io/projected/5ad898a7-a520-4df3-a85c-9fa7182286cf-kube-api-access-hhfbr\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:43 crc kubenswrapper[4958]: I1008 08:12:43.095482 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e266-account-create-lh8tw" event={"ID":"5ad898a7-a520-4df3-a85c-9fa7182286cf","Type":"ContainerDied","Data":"3cfdb489d2577d3a9b79ec7522fb6fbe4e0ebf7d89316d3018a6bc9555687ef4"} Oct 08 08:12:43 crc kubenswrapper[4958]: I1008 08:12:43.095552 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cfdb489d2577d3a9b79ec7522fb6fbe4e0ebf7d89316d3018a6bc9555687ef4" Oct 08 08:12:43 crc kubenswrapper[4958]: I1008 08:12:43.095639 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e266-account-create-lh8tw" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.357502 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56ccf77c9-l66bq"] Oct 08 08:12:45 crc kubenswrapper[4958]: E1008 08:12:45.358111 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad898a7-a520-4df3-a85c-9fa7182286cf" containerName="mariadb-account-create" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.358126 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad898a7-a520-4df3-a85c-9fa7182286cf" containerName="mariadb-account-create" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.358293 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad898a7-a520-4df3-a85c-9fa7182286cf" containerName="mariadb-account-create" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.360517 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.377820 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-r5fgj"] Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.379048 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.381588 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4fqxf" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.384359 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.384361 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.390238 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56ccf77c9-l66bq"] Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.419240 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-r5fgj"] Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.459522 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-ovsdbserver-sb\") pod \"dnsmasq-dns-56ccf77c9-l66bq\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.459782 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb8bk\" (UniqueName: \"kubernetes.io/projected/e57b307b-5ae0-4e2d-8037-e935f756b3ff-kube-api-access-gb8bk\") pod \"placement-db-sync-r5fgj\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.459901 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-combined-ca-bundle\") pod 
\"placement-db-sync-r5fgj\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.460094 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-dns-svc\") pod \"dnsmasq-dns-56ccf77c9-l66bq\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.460219 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v4hc\" (UniqueName: \"kubernetes.io/projected/9be2f105-5dd5-432a-82b6-56410f76db3f-kube-api-access-4v4hc\") pod \"dnsmasq-dns-56ccf77c9-l66bq\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.460310 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e57b307b-5ae0-4e2d-8037-e935f756b3ff-logs\") pod \"placement-db-sync-r5fgj\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.460386 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-ovsdbserver-nb\") pod \"dnsmasq-dns-56ccf77c9-l66bq\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.460512 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-config\") pod 
\"dnsmasq-dns-56ccf77c9-l66bq\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.460658 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-config-data\") pod \"placement-db-sync-r5fgj\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.460792 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-scripts\") pod \"placement-db-sync-r5fgj\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.561379 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-ovsdbserver-sb\") pod \"dnsmasq-dns-56ccf77c9-l66bq\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.561549 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb8bk\" (UniqueName: \"kubernetes.io/projected/e57b307b-5ae0-4e2d-8037-e935f756b3ff-kube-api-access-gb8bk\") pod \"placement-db-sync-r5fgj\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.561642 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-combined-ca-bundle\") pod \"placement-db-sync-r5fgj\" (UID: 
\"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.561720 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-dns-svc\") pod \"dnsmasq-dns-56ccf77c9-l66bq\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.561802 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v4hc\" (UniqueName: \"kubernetes.io/projected/9be2f105-5dd5-432a-82b6-56410f76db3f-kube-api-access-4v4hc\") pod \"dnsmasq-dns-56ccf77c9-l66bq\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.561910 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e57b307b-5ae0-4e2d-8037-e935f756b3ff-logs\") pod \"placement-db-sync-r5fgj\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.562027 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-ovsdbserver-nb\") pod \"dnsmasq-dns-56ccf77c9-l66bq\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.562389 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-ovsdbserver-sb\") pod \"dnsmasq-dns-56ccf77c9-l66bq\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 
crc kubenswrapper[4958]: I1008 08:12:45.562386 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e57b307b-5ae0-4e2d-8037-e935f756b3ff-logs\") pod \"placement-db-sync-r5fgj\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.562592 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-dns-svc\") pod \"dnsmasq-dns-56ccf77c9-l66bq\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.563198 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-ovsdbserver-nb\") pod \"dnsmasq-dns-56ccf77c9-l66bq\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.563215 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-config\") pod \"dnsmasq-dns-56ccf77c9-l66bq\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.563426 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-config\") pod \"dnsmasq-dns-56ccf77c9-l66bq\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.563557 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-config-data\") pod \"placement-db-sync-r5fgj\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.563675 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-scripts\") pod \"placement-db-sync-r5fgj\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.566924 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-combined-ca-bundle\") pod \"placement-db-sync-r5fgj\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.568501 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-config-data\") pod \"placement-db-sync-r5fgj\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.576622 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-scripts\") pod \"placement-db-sync-r5fgj\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.579427 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb8bk\" (UniqueName: \"kubernetes.io/projected/e57b307b-5ae0-4e2d-8037-e935f756b3ff-kube-api-access-gb8bk\") pod \"placement-db-sync-r5fgj\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " 
pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.581869 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v4hc\" (UniqueName: \"kubernetes.io/projected/9be2f105-5dd5-432a-82b6-56410f76db3f-kube-api-access-4v4hc\") pod \"dnsmasq-dns-56ccf77c9-l66bq\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.678341 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.694149 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:45 crc kubenswrapper[4958]: I1008 08:12:45.961920 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-r5fgj"] Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.126745 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2c8j9"] Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.128646 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.135057 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2c8j9"] Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.135316 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-r5fgj" event={"ID":"e57b307b-5ae0-4e2d-8037-e935f756b3ff","Type":"ContainerStarted","Data":"c716bc981ebf4c72c28cf25770c093f564cb570fecfa634e2562773618c11c8d"} Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.187805 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-catalog-content\") pod \"redhat-marketplace-2c8j9\" (UID: \"7fcd0569-86c6-4a8d-a832-13be0a5f1c84\") " pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.188142 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfqvd\" (UniqueName: \"kubernetes.io/projected/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-kube-api-access-rfqvd\") pod \"redhat-marketplace-2c8j9\" (UID: \"7fcd0569-86c6-4a8d-a832-13be0a5f1c84\") " pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.188169 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-utilities\") pod \"redhat-marketplace-2c8j9\" (UID: \"7fcd0569-86c6-4a8d-a832-13be0a5f1c84\") " pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.289989 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-catalog-content\") pod \"redhat-marketplace-2c8j9\" (UID: \"7fcd0569-86c6-4a8d-a832-13be0a5f1c84\") " pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.290072 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfqvd\" (UniqueName: \"kubernetes.io/projected/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-kube-api-access-rfqvd\") pod \"redhat-marketplace-2c8j9\" (UID: \"7fcd0569-86c6-4a8d-a832-13be0a5f1c84\") " pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.290100 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-utilities\") pod \"redhat-marketplace-2c8j9\" (UID: \"7fcd0569-86c6-4a8d-a832-13be0a5f1c84\") " pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.290566 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-catalog-content\") pod \"redhat-marketplace-2c8j9\" (UID: \"7fcd0569-86c6-4a8d-a832-13be0a5f1c84\") " pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.290639 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-utilities\") pod \"redhat-marketplace-2c8j9\" (UID: \"7fcd0569-86c6-4a8d-a832-13be0a5f1c84\") " pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:12:46 crc kubenswrapper[4958]: W1008 08:12:46.302057 4958 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9be2f105_5dd5_432a_82b6_56410f76db3f.slice/crio-f9fc889d98fd8c62405c3fee8c8cc25956e08b29136b58218245e8af926f60c9 WatchSource:0}: Error finding container f9fc889d98fd8c62405c3fee8c8cc25956e08b29136b58218245e8af926f60c9: Status 404 returned error can't find the container with id f9fc889d98fd8c62405c3fee8c8cc25956e08b29136b58218245e8af926f60c9 Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.305708 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56ccf77c9-l66bq"] Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.329681 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfqvd\" (UniqueName: \"kubernetes.io/projected/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-kube-api-access-rfqvd\") pod \"redhat-marketplace-2c8j9\" (UID: \"7fcd0569-86c6-4a8d-a832-13be0a5f1c84\") " pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.445434 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:12:46 crc kubenswrapper[4958]: I1008 08:12:46.702178 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2c8j9"] Oct 08 08:12:46 crc kubenswrapper[4958]: W1008 08:12:46.708744 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fcd0569_86c6_4a8d_a832_13be0a5f1c84.slice/crio-dbb9da6d2a968781968fc12cbfca7574785d26734a0cc0378b4cb03496fd0397 WatchSource:0}: Error finding container dbb9da6d2a968781968fc12cbfca7574785d26734a0cc0378b4cb03496fd0397: Status 404 returned error can't find the container with id dbb9da6d2a968781968fc12cbfca7574785d26734a0cc0378b4cb03496fd0397 Oct 08 08:12:47 crc kubenswrapper[4958]: I1008 08:12:47.149594 4958 generic.go:334] "Generic (PLEG): container finished" podID="7fcd0569-86c6-4a8d-a832-13be0a5f1c84" containerID="8d4cf15c70d70a2344dd6eac77eb11f8ff6feb09b50d7720f46c347be9e7de86" exitCode=0 Oct 08 08:12:47 crc kubenswrapper[4958]: I1008 08:12:47.149702 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2c8j9" event={"ID":"7fcd0569-86c6-4a8d-a832-13be0a5f1c84","Type":"ContainerDied","Data":"8d4cf15c70d70a2344dd6eac77eb11f8ff6feb09b50d7720f46c347be9e7de86"} Oct 08 08:12:47 crc kubenswrapper[4958]: I1008 08:12:47.150139 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2c8j9" event={"ID":"7fcd0569-86c6-4a8d-a832-13be0a5f1c84","Type":"ContainerStarted","Data":"dbb9da6d2a968781968fc12cbfca7574785d26734a0cc0378b4cb03496fd0397"} Oct 08 08:12:47 crc kubenswrapper[4958]: I1008 08:12:47.154106 4958 generic.go:334] "Generic (PLEG): container finished" podID="9be2f105-5dd5-432a-82b6-56410f76db3f" containerID="43f7c04be995dc89a8b69e9c595c5ecdf64365b97500004a04154c63fb92fcf2" exitCode=0 Oct 08 08:12:47 crc kubenswrapper[4958]: I1008 
08:12:47.154195 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" event={"ID":"9be2f105-5dd5-432a-82b6-56410f76db3f","Type":"ContainerDied","Data":"43f7c04be995dc89a8b69e9c595c5ecdf64365b97500004a04154c63fb92fcf2"} Oct 08 08:12:47 crc kubenswrapper[4958]: I1008 08:12:47.154251 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" event={"ID":"9be2f105-5dd5-432a-82b6-56410f76db3f","Type":"ContainerStarted","Data":"f9fc889d98fd8c62405c3fee8c8cc25956e08b29136b58218245e8af926f60c9"} Oct 08 08:12:47 crc kubenswrapper[4958]: I1008 08:12:47.161771 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-r5fgj" event={"ID":"e57b307b-5ae0-4e2d-8037-e935f756b3ff","Type":"ContainerStarted","Data":"38ef46d28e2c7bbf5126ea22d5ba102ec01cf0ad470fc3ccc2cbc0cd305de992"} Oct 08 08:12:47 crc kubenswrapper[4958]: I1008 08:12:47.206371 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-r5fgj" podStartSLOduration=2.206349022 podStartE2EDuration="2.206349022s" podCreationTimestamp="2025-10-08 08:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:12:47.1970405 +0000 UTC m=+5910.326733121" watchObservedRunningTime="2025-10-08 08:12:47.206349022 +0000 UTC m=+5910.336041633" Oct 08 08:12:48 crc kubenswrapper[4958]: I1008 08:12:48.173274 4958 generic.go:334] "Generic (PLEG): container finished" podID="7fcd0569-86c6-4a8d-a832-13be0a5f1c84" containerID="60bfd1d0af46d9e091fa75e6bc96a640ee15b7c62cd34bf49a4bcf378e119530" exitCode=0 Oct 08 08:12:48 crc kubenswrapper[4958]: I1008 08:12:48.173344 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2c8j9" 
event={"ID":"7fcd0569-86c6-4a8d-a832-13be0a5f1c84","Type":"ContainerDied","Data":"60bfd1d0af46d9e091fa75e6bc96a640ee15b7c62cd34bf49a4bcf378e119530"} Oct 08 08:12:48 crc kubenswrapper[4958]: I1008 08:12:48.181528 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" event={"ID":"9be2f105-5dd5-432a-82b6-56410f76db3f","Type":"ContainerStarted","Data":"0409524fce15364fa930df5507fa24368fd4f0ed589ceade6df7b51ae9d9c1c9"} Oct 08 08:12:48 crc kubenswrapper[4958]: I1008 08:12:48.181703 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:48 crc kubenswrapper[4958]: I1008 08:12:48.187119 4958 generic.go:334] "Generic (PLEG): container finished" podID="e57b307b-5ae0-4e2d-8037-e935f756b3ff" containerID="38ef46d28e2c7bbf5126ea22d5ba102ec01cf0ad470fc3ccc2cbc0cd305de992" exitCode=0 Oct 08 08:12:48 crc kubenswrapper[4958]: I1008 08:12:48.187181 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-r5fgj" event={"ID":"e57b307b-5ae0-4e2d-8037-e935f756b3ff","Type":"ContainerDied","Data":"38ef46d28e2c7bbf5126ea22d5ba102ec01cf0ad470fc3ccc2cbc0cd305de992"} Oct 08 08:12:48 crc kubenswrapper[4958]: I1008 08:12:48.247303 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" podStartSLOduration=3.247277589 podStartE2EDuration="3.247277589s" podCreationTimestamp="2025-10-08 08:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:12:48.240878505 +0000 UTC m=+5911.370571126" watchObservedRunningTime="2025-10-08 08:12:48.247277589 +0000 UTC m=+5911.376970220" Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.212234 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2c8j9" 
event={"ID":"7fcd0569-86c6-4a8d-a832-13be0a5f1c84","Type":"ContainerStarted","Data":"93aeb2577b7567651fdbe84be732bade605fa6b0ec028e0ed311faae37423710"} Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.244842 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2c8j9" podStartSLOduration=1.579480093 podStartE2EDuration="3.244809449s" podCreationTimestamp="2025-10-08 08:12:46 +0000 UTC" firstStartedPulling="2025-10-08 08:12:47.153708305 +0000 UTC m=+5910.283400906" lastFinishedPulling="2025-10-08 08:12:48.819037621 +0000 UTC m=+5911.948730262" observedRunningTime="2025-10-08 08:12:49.233681447 +0000 UTC m=+5912.363374118" watchObservedRunningTime="2025-10-08 08:12:49.244809449 +0000 UTC m=+5912.374502090" Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.556114 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.709644 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-scripts\") pod \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.709719 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e57b307b-5ae0-4e2d-8037-e935f756b3ff-logs\") pod \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.709755 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-config-data\") pod \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " Oct 08 
08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.709829 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-combined-ca-bundle\") pod \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.709867 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb8bk\" (UniqueName: \"kubernetes.io/projected/e57b307b-5ae0-4e2d-8037-e935f756b3ff-kube-api-access-gb8bk\") pod \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\" (UID: \"e57b307b-5ae0-4e2d-8037-e935f756b3ff\") " Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.710087 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e57b307b-5ae0-4e2d-8037-e935f756b3ff-logs" (OuterVolumeSpecName: "logs") pod "e57b307b-5ae0-4e2d-8037-e935f756b3ff" (UID: "e57b307b-5ae0-4e2d-8037-e935f756b3ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.710323 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e57b307b-5ae0-4e2d-8037-e935f756b3ff-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.715870 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-scripts" (OuterVolumeSpecName: "scripts") pod "e57b307b-5ae0-4e2d-8037-e935f756b3ff" (UID: "e57b307b-5ae0-4e2d-8037-e935f756b3ff"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.717237 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57b307b-5ae0-4e2d-8037-e935f756b3ff-kube-api-access-gb8bk" (OuterVolumeSpecName: "kube-api-access-gb8bk") pod "e57b307b-5ae0-4e2d-8037-e935f756b3ff" (UID: "e57b307b-5ae0-4e2d-8037-e935f756b3ff"). InnerVolumeSpecName "kube-api-access-gb8bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.737537 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-config-data" (OuterVolumeSpecName: "config-data") pod "e57b307b-5ae0-4e2d-8037-e935f756b3ff" (UID: "e57b307b-5ae0-4e2d-8037-e935f756b3ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.752851 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e57b307b-5ae0-4e2d-8037-e935f756b3ff" (UID: "e57b307b-5ae0-4e2d-8037-e935f756b3ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.811758 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.812131 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb8bk\" (UniqueName: \"kubernetes.io/projected/e57b307b-5ae0-4e2d-8037-e935f756b3ff-kube-api-access-gb8bk\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.812145 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:49 crc kubenswrapper[4958]: I1008 08:12:49.812154 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e57b307b-5ae0-4e2d-8037-e935f756b3ff-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.225676 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-r5fgj" event={"ID":"e57b307b-5ae0-4e2d-8037-e935f756b3ff","Type":"ContainerDied","Data":"c716bc981ebf4c72c28cf25770c093f564cb570fecfa634e2562773618c11c8d"} Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.225732 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c716bc981ebf4c72c28cf25770c093f564cb570fecfa634e2562773618c11c8d" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.227133 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-r5fgj" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.334234 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9c4cb9494-k7nws"] Oct 08 08:12:50 crc kubenswrapper[4958]: E1008 08:12:50.336119 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57b307b-5ae0-4e2d-8037-e935f756b3ff" containerName="placement-db-sync" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.336161 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57b307b-5ae0-4e2d-8037-e935f756b3ff" containerName="placement-db-sync" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.336479 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e57b307b-5ae0-4e2d-8037-e935f756b3ff" containerName="placement-db-sync" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.337753 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.341602 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.342000 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.342234 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.342409 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4fqxf" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.342712 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.367704 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-9c4cb9494-k7nws"] Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.524548 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d29e82-7666-4eec-bb3f-e3b15e01740d-logs\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.524735 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d29e82-7666-4eec-bb3f-e3b15e01740d-public-tls-certs\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.525166 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk8np\" (UniqueName: \"kubernetes.io/projected/d7d29e82-7666-4eec-bb3f-e3b15e01740d-kube-api-access-lk8np\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.525226 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7d29e82-7666-4eec-bb3f-e3b15e01740d-scripts\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.525274 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d29e82-7666-4eec-bb3f-e3b15e01740d-internal-tls-certs\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " 
pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.525460 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d29e82-7666-4eec-bb3f-e3b15e01740d-combined-ca-bundle\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.525519 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7d29e82-7666-4eec-bb3f-e3b15e01740d-config-data\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.627806 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk8np\" (UniqueName: \"kubernetes.io/projected/d7d29e82-7666-4eec-bb3f-e3b15e01740d-kube-api-access-lk8np\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.627858 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7d29e82-7666-4eec-bb3f-e3b15e01740d-scripts\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.627891 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d29e82-7666-4eec-bb3f-e3b15e01740d-internal-tls-certs\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 
08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.627989 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d29e82-7666-4eec-bb3f-e3b15e01740d-combined-ca-bundle\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.628014 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7d29e82-7666-4eec-bb3f-e3b15e01740d-config-data\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.628065 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d29e82-7666-4eec-bb3f-e3b15e01740d-logs\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.628114 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d29e82-7666-4eec-bb3f-e3b15e01740d-public-tls-certs\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.629211 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d29e82-7666-4eec-bb3f-e3b15e01740d-logs\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.634811 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d29e82-7666-4eec-bb3f-e3b15e01740d-internal-tls-certs\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.637161 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7d29e82-7666-4eec-bb3f-e3b15e01740d-config-data\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.637425 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7d29e82-7666-4eec-bb3f-e3b15e01740d-scripts\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.639817 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d29e82-7666-4eec-bb3f-e3b15e01740d-public-tls-certs\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.642589 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d29e82-7666-4eec-bb3f-e3b15e01740d-combined-ca-bundle\") pod \"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.657527 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk8np\" (UniqueName: \"kubernetes.io/projected/d7d29e82-7666-4eec-bb3f-e3b15e01740d-kube-api-access-lk8np\") pod 
\"placement-9c4cb9494-k7nws\" (UID: \"d7d29e82-7666-4eec-bb3f-e3b15e01740d\") " pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.663641 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:50 crc kubenswrapper[4958]: I1008 08:12:50.991785 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9c4cb9494-k7nws"] Oct 08 08:12:51 crc kubenswrapper[4958]: I1008 08:12:51.236855 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9c4cb9494-k7nws" event={"ID":"d7d29e82-7666-4eec-bb3f-e3b15e01740d","Type":"ContainerStarted","Data":"760d6bce9db70ce4d5c38c358e39cb481e26b4ff9cb028432ff411b286fc750b"} Oct 08 08:12:52 crc kubenswrapper[4958]: I1008 08:12:52.251287 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9c4cb9494-k7nws" event={"ID":"d7d29e82-7666-4eec-bb3f-e3b15e01740d","Type":"ContainerStarted","Data":"02f3638b424dc7709a9f7b466f0c320367b4bf0a09151a62a062536971d6599a"} Oct 08 08:12:52 crc kubenswrapper[4958]: I1008 08:12:52.251373 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9c4cb9494-k7nws" event={"ID":"d7d29e82-7666-4eec-bb3f-e3b15e01740d","Type":"ContainerStarted","Data":"416f5c72d2e911c04787379b8c61543c4ccd8497c9f4217af379f11618699ec1"} Oct 08 08:12:52 crc kubenswrapper[4958]: I1008 08:12:52.251416 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:52 crc kubenswrapper[4958]: I1008 08:12:52.251445 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:12:52 crc kubenswrapper[4958]: I1008 08:12:52.288126 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9c4cb9494-k7nws" podStartSLOduration=2.288104126 podStartE2EDuration="2.288104126s" 
podCreationTimestamp="2025-10-08 08:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:12:52.27385791 +0000 UTC m=+5915.403550551" watchObservedRunningTime="2025-10-08 08:12:52.288104126 +0000 UTC m=+5915.417796737" Oct 08 08:12:55 crc kubenswrapper[4958]: I1008 08:12:55.680265 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:12:55 crc kubenswrapper[4958]: I1008 08:12:55.766491 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7855b8cc9-txhs2"] Oct 08 08:12:55 crc kubenswrapper[4958]: I1008 08:12:55.766846 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7855b8cc9-txhs2" podUID="087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5" containerName="dnsmasq-dns" containerID="cri-o://d3a7c4d9115739b8134dabbd560daef61515f1e0ddbeabfbe74425da581e7ed8" gracePeriod=10 Oct 08 08:12:56 crc kubenswrapper[4958]: E1008 08:12:56.035099 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod087b2919_a9bb_46fa_8bb9_5f4b8cc81fe5.slice/crio-d3a7c4d9115739b8134dabbd560daef61515f1e0ddbeabfbe74425da581e7ed8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod087b2919_a9bb_46fa_8bb9_5f4b8cc81fe5.slice/crio-conmon-d3a7c4d9115739b8134dabbd560daef61515f1e0ddbeabfbe74425da581e7ed8.scope\": RecentStats: unable to find data in memory cache]" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.279799 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7855b8cc9-txhs2" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.296377 4958 generic.go:334] "Generic (PLEG): container finished" podID="087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5" containerID="d3a7c4d9115739b8134dabbd560daef61515f1e0ddbeabfbe74425da581e7ed8" exitCode=0 Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.296414 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7855b8cc9-txhs2" event={"ID":"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5","Type":"ContainerDied","Data":"d3a7c4d9115739b8134dabbd560daef61515f1e0ddbeabfbe74425da581e7ed8"} Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.296448 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7855b8cc9-txhs2" event={"ID":"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5","Type":"ContainerDied","Data":"87445be327c42f89b067a2295dad53e63326db65d10cbf0b6bf6cd9a721dea85"} Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.296468 4958 scope.go:117] "RemoveContainer" containerID="d3a7c4d9115739b8134dabbd560daef61515f1e0ddbeabfbe74425da581e7ed8" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.296503 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7855b8cc9-txhs2" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.320495 4958 scope.go:117] "RemoveContainer" containerID="84d24d4af8729189dd900b4cf4a0c610e85462e02d6717a9f9165b42c34b47fe" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.340568 4958 scope.go:117] "RemoveContainer" containerID="d3a7c4d9115739b8134dabbd560daef61515f1e0ddbeabfbe74425da581e7ed8" Oct 08 08:12:56 crc kubenswrapper[4958]: E1008 08:12:56.341023 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3a7c4d9115739b8134dabbd560daef61515f1e0ddbeabfbe74425da581e7ed8\": container with ID starting with d3a7c4d9115739b8134dabbd560daef61515f1e0ddbeabfbe74425da581e7ed8 not found: ID does not exist" containerID="d3a7c4d9115739b8134dabbd560daef61515f1e0ddbeabfbe74425da581e7ed8" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.341059 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a7c4d9115739b8134dabbd560daef61515f1e0ddbeabfbe74425da581e7ed8"} err="failed to get container status \"d3a7c4d9115739b8134dabbd560daef61515f1e0ddbeabfbe74425da581e7ed8\": rpc error: code = NotFound desc = could not find container \"d3a7c4d9115739b8134dabbd560daef61515f1e0ddbeabfbe74425da581e7ed8\": container with ID starting with d3a7c4d9115739b8134dabbd560daef61515f1e0ddbeabfbe74425da581e7ed8 not found: ID does not exist" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.341105 4958 scope.go:117] "RemoveContainer" containerID="84d24d4af8729189dd900b4cf4a0c610e85462e02d6717a9f9165b42c34b47fe" Oct 08 08:12:56 crc kubenswrapper[4958]: E1008 08:12:56.341390 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84d24d4af8729189dd900b4cf4a0c610e85462e02d6717a9f9165b42c34b47fe\": container with ID starting with 
84d24d4af8729189dd900b4cf4a0c610e85462e02d6717a9f9165b42c34b47fe not found: ID does not exist" containerID="84d24d4af8729189dd900b4cf4a0c610e85462e02d6717a9f9165b42c34b47fe" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.341416 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d24d4af8729189dd900b4cf4a0c610e85462e02d6717a9f9165b42c34b47fe"} err="failed to get container status \"84d24d4af8729189dd900b4cf4a0c610e85462e02d6717a9f9165b42c34b47fe\": rpc error: code = NotFound desc = could not find container \"84d24d4af8729189dd900b4cf4a0c610e85462e02d6717a9f9165b42c34b47fe\": container with ID starting with 84d24d4af8729189dd900b4cf4a0c610e85462e02d6717a9f9165b42c34b47fe not found: ID does not exist" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.357746 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-config\") pod \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.357813 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l68h8\" (UniqueName: \"kubernetes.io/projected/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-kube-api-access-l68h8\") pod \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.357901 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-ovsdbserver-nb\") pod \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.358053 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-ovsdbserver-sb\") pod \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.358093 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-dns-svc\") pod \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\" (UID: \"087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5\") " Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.363237 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-kube-api-access-l68h8" (OuterVolumeSpecName: "kube-api-access-l68h8") pod "087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5" (UID: "087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5"). InnerVolumeSpecName "kube-api-access-l68h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.404621 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5" (UID: "087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.405785 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-config" (OuterVolumeSpecName: "config") pod "087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5" (UID: "087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.424456 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5" (UID: "087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.434658 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5" (UID: "087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.445727 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.447206 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.460909 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.460965 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.460980 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.460992 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l68h8\" (UniqueName: \"kubernetes.io/projected/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-kube-api-access-l68h8\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.461005 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.521863 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.641110 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7855b8cc9-txhs2"] Oct 08 08:12:56 crc kubenswrapper[4958]: I1008 08:12:56.650061 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7855b8cc9-txhs2"] Oct 08 08:12:57 crc kubenswrapper[4958]: I1008 08:12:57.391798 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:12:57 crc kubenswrapper[4958]: I1008 08:12:57.459559 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2c8j9"] Oct 08 08:12:57 crc kubenswrapper[4958]: I1008 08:12:57.598651 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5" path="/var/lib/kubelet/pods/087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5/volumes" Oct 08 08:12:59 crc kubenswrapper[4958]: I1008 08:12:59.339789 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2c8j9" 
podUID="7fcd0569-86c6-4a8d-a832-13be0a5f1c84" containerName="registry-server" containerID="cri-o://93aeb2577b7567651fdbe84be732bade605fa6b0ec028e0ed311faae37423710" gracePeriod=2 Oct 08 08:12:59 crc kubenswrapper[4958]: I1008 08:12:59.913710 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.044040 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfqvd\" (UniqueName: \"kubernetes.io/projected/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-kube-api-access-rfqvd\") pod \"7fcd0569-86c6-4a8d-a832-13be0a5f1c84\" (UID: \"7fcd0569-86c6-4a8d-a832-13be0a5f1c84\") " Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.044211 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-catalog-content\") pod \"7fcd0569-86c6-4a8d-a832-13be0a5f1c84\" (UID: \"7fcd0569-86c6-4a8d-a832-13be0a5f1c84\") " Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.044311 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-utilities\") pod \"7fcd0569-86c6-4a8d-a832-13be0a5f1c84\" (UID: \"7fcd0569-86c6-4a8d-a832-13be0a5f1c84\") " Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.045789 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-utilities" (OuterVolumeSpecName: "utilities") pod "7fcd0569-86c6-4a8d-a832-13be0a5f1c84" (UID: "7fcd0569-86c6-4a8d-a832-13be0a5f1c84"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.059812 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-kube-api-access-rfqvd" (OuterVolumeSpecName: "kube-api-access-rfqvd") pod "7fcd0569-86c6-4a8d-a832-13be0a5f1c84" (UID: "7fcd0569-86c6-4a8d-a832-13be0a5f1c84"). InnerVolumeSpecName "kube-api-access-rfqvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.070666 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fcd0569-86c6-4a8d-a832-13be0a5f1c84" (UID: "7fcd0569-86c6-4a8d-a832-13be0a5f1c84"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.146512 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.146581 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfqvd\" (UniqueName: \"kubernetes.io/projected/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-kube-api-access-rfqvd\") on node \"crc\" DevicePath \"\"" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.146609 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fcd0569-86c6-4a8d-a832-13be0a5f1c84-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.357597 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2c8j9" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.357612 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2c8j9" event={"ID":"7fcd0569-86c6-4a8d-a832-13be0a5f1c84","Type":"ContainerDied","Data":"93aeb2577b7567651fdbe84be732bade605fa6b0ec028e0ed311faae37423710"} Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.357531 4958 generic.go:334] "Generic (PLEG): container finished" podID="7fcd0569-86c6-4a8d-a832-13be0a5f1c84" containerID="93aeb2577b7567651fdbe84be732bade605fa6b0ec028e0ed311faae37423710" exitCode=0 Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.358281 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2c8j9" event={"ID":"7fcd0569-86c6-4a8d-a832-13be0a5f1c84","Type":"ContainerDied","Data":"dbb9da6d2a968781968fc12cbfca7574785d26734a0cc0378b4cb03496fd0397"} Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.358329 4958 scope.go:117] "RemoveContainer" containerID="93aeb2577b7567651fdbe84be732bade605fa6b0ec028e0ed311faae37423710" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.393721 4958 scope.go:117] "RemoveContainer" containerID="60bfd1d0af46d9e091fa75e6bc96a640ee15b7c62cd34bf49a4bcf378e119530" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.423117 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2c8j9"] Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.435371 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2c8j9"] Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.453682 4958 scope.go:117] "RemoveContainer" containerID="8d4cf15c70d70a2344dd6eac77eb11f8ff6feb09b50d7720f46c347be9e7de86" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.498086 4958 scope.go:117] "RemoveContainer" 
containerID="93aeb2577b7567651fdbe84be732bade605fa6b0ec028e0ed311faae37423710" Oct 08 08:13:00 crc kubenswrapper[4958]: E1008 08:13:00.498736 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93aeb2577b7567651fdbe84be732bade605fa6b0ec028e0ed311faae37423710\": container with ID starting with 93aeb2577b7567651fdbe84be732bade605fa6b0ec028e0ed311faae37423710 not found: ID does not exist" containerID="93aeb2577b7567651fdbe84be732bade605fa6b0ec028e0ed311faae37423710" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.498797 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93aeb2577b7567651fdbe84be732bade605fa6b0ec028e0ed311faae37423710"} err="failed to get container status \"93aeb2577b7567651fdbe84be732bade605fa6b0ec028e0ed311faae37423710\": rpc error: code = NotFound desc = could not find container \"93aeb2577b7567651fdbe84be732bade605fa6b0ec028e0ed311faae37423710\": container with ID starting with 93aeb2577b7567651fdbe84be732bade605fa6b0ec028e0ed311faae37423710 not found: ID does not exist" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.498831 4958 scope.go:117] "RemoveContainer" containerID="60bfd1d0af46d9e091fa75e6bc96a640ee15b7c62cd34bf49a4bcf378e119530" Oct 08 08:13:00 crc kubenswrapper[4958]: E1008 08:13:00.499443 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60bfd1d0af46d9e091fa75e6bc96a640ee15b7c62cd34bf49a4bcf378e119530\": container with ID starting with 60bfd1d0af46d9e091fa75e6bc96a640ee15b7c62cd34bf49a4bcf378e119530 not found: ID does not exist" containerID="60bfd1d0af46d9e091fa75e6bc96a640ee15b7c62cd34bf49a4bcf378e119530" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.499532 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"60bfd1d0af46d9e091fa75e6bc96a640ee15b7c62cd34bf49a4bcf378e119530"} err="failed to get container status \"60bfd1d0af46d9e091fa75e6bc96a640ee15b7c62cd34bf49a4bcf378e119530\": rpc error: code = NotFound desc = could not find container \"60bfd1d0af46d9e091fa75e6bc96a640ee15b7c62cd34bf49a4bcf378e119530\": container with ID starting with 60bfd1d0af46d9e091fa75e6bc96a640ee15b7c62cd34bf49a4bcf378e119530 not found: ID does not exist" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.499590 4958 scope.go:117] "RemoveContainer" containerID="8d4cf15c70d70a2344dd6eac77eb11f8ff6feb09b50d7720f46c347be9e7de86" Oct 08 08:13:00 crc kubenswrapper[4958]: E1008 08:13:00.514291 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d4cf15c70d70a2344dd6eac77eb11f8ff6feb09b50d7720f46c347be9e7de86\": container with ID starting with 8d4cf15c70d70a2344dd6eac77eb11f8ff6feb09b50d7720f46c347be9e7de86 not found: ID does not exist" containerID="8d4cf15c70d70a2344dd6eac77eb11f8ff6feb09b50d7720f46c347be9e7de86" Oct 08 08:13:00 crc kubenswrapper[4958]: I1008 08:13:00.514336 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4cf15c70d70a2344dd6eac77eb11f8ff6feb09b50d7720f46c347be9e7de86"} err="failed to get container status \"8d4cf15c70d70a2344dd6eac77eb11f8ff6feb09b50d7720f46c347be9e7de86\": rpc error: code = NotFound desc = could not find container \"8d4cf15c70d70a2344dd6eac77eb11f8ff6feb09b50d7720f46c347be9e7de86\": container with ID starting with 8d4cf15c70d70a2344dd6eac77eb11f8ff6feb09b50d7720f46c347be9e7de86 not found: ID does not exist" Oct 08 08:13:01 crc kubenswrapper[4958]: I1008 08:13:01.594091 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcd0569-86c6-4a8d-a832-13be0a5f1c84" path="/var/lib/kubelet/pods/7fcd0569-86c6-4a8d-a832-13be0a5f1c84/volumes" Oct 08 08:13:21 crc kubenswrapper[4958]: I1008 
08:13:21.634982 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:13:21 crc kubenswrapper[4958]: I1008 08:13:21.635565 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9c4cb9494-k7nws" Oct 08 08:13:36 crc kubenswrapper[4958]: I1008 08:13:36.844834 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:13:36 crc kubenswrapper[4958]: I1008 08:13:36.845521 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.537127 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hvmsz"] Oct 08 08:13:44 crc kubenswrapper[4958]: E1008 08:13:44.537930 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fcd0569-86c6-4a8d-a832-13be0a5f1c84" containerName="extract-utilities" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.537957 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fcd0569-86c6-4a8d-a832-13be0a5f1c84" containerName="extract-utilities" Oct 08 08:13:44 crc kubenswrapper[4958]: E1008 08:13:44.537972 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5" containerName="dnsmasq-dns" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.537978 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5" 
containerName="dnsmasq-dns" Oct 08 08:13:44 crc kubenswrapper[4958]: E1008 08:13:44.537997 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fcd0569-86c6-4a8d-a832-13be0a5f1c84" containerName="extract-content" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.538002 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fcd0569-86c6-4a8d-a832-13be0a5f1c84" containerName="extract-content" Oct 08 08:13:44 crc kubenswrapper[4958]: E1008 08:13:44.538024 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fcd0569-86c6-4a8d-a832-13be0a5f1c84" containerName="registry-server" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.538030 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fcd0569-86c6-4a8d-a832-13be0a5f1c84" containerName="registry-server" Oct 08 08:13:44 crc kubenswrapper[4958]: E1008 08:13:44.538042 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5" containerName="init" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.538048 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5" containerName="init" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.538197 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fcd0569-86c6-4a8d-a832-13be0a5f1c84" containerName="registry-server" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.538219 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="087b2919-a9bb-46fa-8bb9-5f4b8cc81fe5" containerName="dnsmasq-dns" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.538844 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hvmsz" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.549634 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hvmsz"] Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.630670 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-sp82f"] Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.632316 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-sp82f" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.638892 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-sp82f"] Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.652581 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzlkc\" (UniqueName: \"kubernetes.io/projected/e46d65ce-b975-425a-bea6-802c40beed1b-kube-api-access-qzlkc\") pod \"nova-api-db-create-hvmsz\" (UID: \"e46d65ce-b975-425a-bea6-802c40beed1b\") " pod="openstack/nova-api-db-create-hvmsz" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.753879 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzr9v\" (UniqueName: \"kubernetes.io/projected/e73ec4fb-e99a-4974-92fb-86a6339d686c-kube-api-access-lzr9v\") pod \"nova-cell0-db-create-sp82f\" (UID: \"e73ec4fb-e99a-4974-92fb-86a6339d686c\") " pod="openstack/nova-cell0-db-create-sp82f" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.754003 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzlkc\" (UniqueName: \"kubernetes.io/projected/e46d65ce-b975-425a-bea6-802c40beed1b-kube-api-access-qzlkc\") pod \"nova-api-db-create-hvmsz\" (UID: \"e46d65ce-b975-425a-bea6-802c40beed1b\") " pod="openstack/nova-api-db-create-hvmsz" Oct 08 08:13:44 crc kubenswrapper[4958]: 
I1008 08:13:44.771556 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzlkc\" (UniqueName: \"kubernetes.io/projected/e46d65ce-b975-425a-bea6-802c40beed1b-kube-api-access-qzlkc\") pod \"nova-api-db-create-hvmsz\" (UID: \"e46d65ce-b975-425a-bea6-802c40beed1b\") " pod="openstack/nova-api-db-create-hvmsz" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.841524 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fnsjn"] Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.843679 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fnsjn" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.857337 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzr9v\" (UniqueName: \"kubernetes.io/projected/e73ec4fb-e99a-4974-92fb-86a6339d686c-kube-api-access-lzr9v\") pod \"nova-cell0-db-create-sp82f\" (UID: \"e73ec4fb-e99a-4974-92fb-86a6339d686c\") " pod="openstack/nova-cell0-db-create-sp82f" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.872299 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fnsjn"] Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.874100 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hvmsz" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.884774 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzr9v\" (UniqueName: \"kubernetes.io/projected/e73ec4fb-e99a-4974-92fb-86a6339d686c-kube-api-access-lzr9v\") pod \"nova-cell0-db-create-sp82f\" (UID: \"e73ec4fb-e99a-4974-92fb-86a6339d686c\") " pod="openstack/nova-cell0-db-create-sp82f" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.946313 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-sp82f" Oct 08 08:13:44 crc kubenswrapper[4958]: I1008 08:13:44.973988 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75mjv\" (UniqueName: \"kubernetes.io/projected/4b2a2e1a-ed98-4e3d-be11-cd7b668abc72-kube-api-access-75mjv\") pod \"nova-cell1-db-create-fnsjn\" (UID: \"4b2a2e1a-ed98-4e3d-be11-cd7b668abc72\") " pod="openstack/nova-cell1-db-create-fnsjn" Oct 08 08:13:45 crc kubenswrapper[4958]: I1008 08:13:45.075636 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75mjv\" (UniqueName: \"kubernetes.io/projected/4b2a2e1a-ed98-4e3d-be11-cd7b668abc72-kube-api-access-75mjv\") pod \"nova-cell1-db-create-fnsjn\" (UID: \"4b2a2e1a-ed98-4e3d-be11-cd7b668abc72\") " pod="openstack/nova-cell1-db-create-fnsjn" Oct 08 08:13:45 crc kubenswrapper[4958]: I1008 08:13:45.095001 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75mjv\" (UniqueName: \"kubernetes.io/projected/4b2a2e1a-ed98-4e3d-be11-cd7b668abc72-kube-api-access-75mjv\") pod \"nova-cell1-db-create-fnsjn\" (UID: \"4b2a2e1a-ed98-4e3d-be11-cd7b668abc72\") " pod="openstack/nova-cell1-db-create-fnsjn" Oct 08 08:13:45 crc kubenswrapper[4958]: I1008 08:13:45.169407 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-fnsjn" Oct 08 08:13:45 crc kubenswrapper[4958]: I1008 08:13:45.320731 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hvmsz"] Oct 08 08:13:45 crc kubenswrapper[4958]: I1008 08:13:45.412471 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fnsjn"] Oct 08 08:13:45 crc kubenswrapper[4958]: W1008 08:13:45.414069 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b2a2e1a_ed98_4e3d_be11_cd7b668abc72.slice/crio-13526b0bdb3d078524bd4d7fa29b6e308e63b9a3b6891d4f83f82d4d6d351698 WatchSource:0}: Error finding container 13526b0bdb3d078524bd4d7fa29b6e308e63b9a3b6891d4f83f82d4d6d351698: Status 404 returned error can't find the container with id 13526b0bdb3d078524bd4d7fa29b6e308e63b9a3b6891d4f83f82d4d6d351698 Oct 08 08:13:45 crc kubenswrapper[4958]: W1008 08:13:45.423500 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode73ec4fb_e99a_4974_92fb_86a6339d686c.slice/crio-394fb6ca00b865761ee465ef85a1a084bc509bbfb811aef1cd00e0a0d4089b5e WatchSource:0}: Error finding container 394fb6ca00b865761ee465ef85a1a084bc509bbfb811aef1cd00e0a0d4089b5e: Status 404 returned error can't find the container with id 394fb6ca00b865761ee465ef85a1a084bc509bbfb811aef1cd00e0a0d4089b5e Oct 08 08:13:45 crc kubenswrapper[4958]: I1008 08:13:45.424141 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-sp82f"] Oct 08 08:13:45 crc kubenswrapper[4958]: I1008 08:13:45.905835 4958 generic.go:334] "Generic (PLEG): container finished" podID="4b2a2e1a-ed98-4e3d-be11-cd7b668abc72" containerID="f26ad18ce56ad72bda7c90168ba980f98fab3aa6be7be10d5f8b91fc1ee235ed" exitCode=0 Oct 08 08:13:45 crc kubenswrapper[4958]: I1008 08:13:45.905899 4958 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-cell1-db-create-fnsjn" event={"ID":"4b2a2e1a-ed98-4e3d-be11-cd7b668abc72","Type":"ContainerDied","Data":"f26ad18ce56ad72bda7c90168ba980f98fab3aa6be7be10d5f8b91fc1ee235ed"} Oct 08 08:13:45 crc kubenswrapper[4958]: I1008 08:13:45.905978 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fnsjn" event={"ID":"4b2a2e1a-ed98-4e3d-be11-cd7b668abc72","Type":"ContainerStarted","Data":"13526b0bdb3d078524bd4d7fa29b6e308e63b9a3b6891d4f83f82d4d6d351698"} Oct 08 08:13:45 crc kubenswrapper[4958]: I1008 08:13:45.907796 4958 generic.go:334] "Generic (PLEG): container finished" podID="e73ec4fb-e99a-4974-92fb-86a6339d686c" containerID="c0a7854764001634a4f783a5e28f996fd7ab7ad772016e0b964f80196ca5ced3" exitCode=0 Oct 08 08:13:45 crc kubenswrapper[4958]: I1008 08:13:45.907853 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-sp82f" event={"ID":"e73ec4fb-e99a-4974-92fb-86a6339d686c","Type":"ContainerDied","Data":"c0a7854764001634a4f783a5e28f996fd7ab7ad772016e0b964f80196ca5ced3"} Oct 08 08:13:45 crc kubenswrapper[4958]: I1008 08:13:45.907877 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-sp82f" event={"ID":"e73ec4fb-e99a-4974-92fb-86a6339d686c","Type":"ContainerStarted","Data":"394fb6ca00b865761ee465ef85a1a084bc509bbfb811aef1cd00e0a0d4089b5e"} Oct 08 08:13:45 crc kubenswrapper[4958]: I1008 08:13:45.909325 4958 generic.go:334] "Generic (PLEG): container finished" podID="e46d65ce-b975-425a-bea6-802c40beed1b" containerID="50f561a3bb3782d7dad8661353b16e498557817022b89f9a69f917e6c9a445a7" exitCode=0 Oct 08 08:13:45 crc kubenswrapper[4958]: I1008 08:13:45.909365 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hvmsz" event={"ID":"e46d65ce-b975-425a-bea6-802c40beed1b","Type":"ContainerDied","Data":"50f561a3bb3782d7dad8661353b16e498557817022b89f9a69f917e6c9a445a7"} Oct 08 08:13:45 crc kubenswrapper[4958]: 
I1008 08:13:45.909455 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hvmsz" event={"ID":"e46d65ce-b975-425a-bea6-802c40beed1b","Type":"ContainerStarted","Data":"c5a5a1f34dba7f8db175d9b33083a863204ef4f2c6484af50c41d45c241e68d0"} Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.404175 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-sp82f" Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.413712 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hvmsz" Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.415883 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fnsjn" Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.523252 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzr9v\" (UniqueName: \"kubernetes.io/projected/e73ec4fb-e99a-4974-92fb-86a6339d686c-kube-api-access-lzr9v\") pod \"e73ec4fb-e99a-4974-92fb-86a6339d686c\" (UID: \"e73ec4fb-e99a-4974-92fb-86a6339d686c\") " Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.523297 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzlkc\" (UniqueName: \"kubernetes.io/projected/e46d65ce-b975-425a-bea6-802c40beed1b-kube-api-access-qzlkc\") pod \"e46d65ce-b975-425a-bea6-802c40beed1b\" (UID: \"e46d65ce-b975-425a-bea6-802c40beed1b\") " Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.523335 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75mjv\" (UniqueName: \"kubernetes.io/projected/4b2a2e1a-ed98-4e3d-be11-cd7b668abc72-kube-api-access-75mjv\") pod \"4b2a2e1a-ed98-4e3d-be11-cd7b668abc72\" (UID: \"4b2a2e1a-ed98-4e3d-be11-cd7b668abc72\") " Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.528128 
4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e73ec4fb-e99a-4974-92fb-86a6339d686c-kube-api-access-lzr9v" (OuterVolumeSpecName: "kube-api-access-lzr9v") pod "e73ec4fb-e99a-4974-92fb-86a6339d686c" (UID: "e73ec4fb-e99a-4974-92fb-86a6339d686c"). InnerVolumeSpecName "kube-api-access-lzr9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.528550 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e46d65ce-b975-425a-bea6-802c40beed1b-kube-api-access-qzlkc" (OuterVolumeSpecName: "kube-api-access-qzlkc") pod "e46d65ce-b975-425a-bea6-802c40beed1b" (UID: "e46d65ce-b975-425a-bea6-802c40beed1b"). InnerVolumeSpecName "kube-api-access-qzlkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.528719 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b2a2e1a-ed98-4e3d-be11-cd7b668abc72-kube-api-access-75mjv" (OuterVolumeSpecName: "kube-api-access-75mjv") pod "4b2a2e1a-ed98-4e3d-be11-cd7b668abc72" (UID: "4b2a2e1a-ed98-4e3d-be11-cd7b668abc72"). InnerVolumeSpecName "kube-api-access-75mjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.624771 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzr9v\" (UniqueName: \"kubernetes.io/projected/e73ec4fb-e99a-4974-92fb-86a6339d686c-kube-api-access-lzr9v\") on node \"crc\" DevicePath \"\"" Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.624800 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzlkc\" (UniqueName: \"kubernetes.io/projected/e46d65ce-b975-425a-bea6-802c40beed1b-kube-api-access-qzlkc\") on node \"crc\" DevicePath \"\"" Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.624812 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75mjv\" (UniqueName: \"kubernetes.io/projected/4b2a2e1a-ed98-4e3d-be11-cd7b668abc72-kube-api-access-75mjv\") on node \"crc\" DevicePath \"\"" Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.934611 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hvmsz" event={"ID":"e46d65ce-b975-425a-bea6-802c40beed1b","Type":"ContainerDied","Data":"c5a5a1f34dba7f8db175d9b33083a863204ef4f2c6484af50c41d45c241e68d0"} Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.934698 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5a5a1f34dba7f8db175d9b33083a863204ef4f2c6484af50c41d45c241e68d0" Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.935425 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hvmsz" Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.937874 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-fnsjn" Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.937850 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fnsjn" event={"ID":"4b2a2e1a-ed98-4e3d-be11-cd7b668abc72","Type":"ContainerDied","Data":"13526b0bdb3d078524bd4d7fa29b6e308e63b9a3b6891d4f83f82d4d6d351698"} Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.938037 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13526b0bdb3d078524bd4d7fa29b6e308e63b9a3b6891d4f83f82d4d6d351698" Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.941072 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-sp82f" event={"ID":"e73ec4fb-e99a-4974-92fb-86a6339d686c","Type":"ContainerDied","Data":"394fb6ca00b865761ee465ef85a1a084bc509bbfb811aef1cd00e0a0d4089b5e"} Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.941116 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="394fb6ca00b865761ee465ef85a1a084bc509bbfb811aef1cd00e0a0d4089b5e" Oct 08 08:13:47 crc kubenswrapper[4958]: I1008 08:13:47.941187 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-sp82f" Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.685938 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1b00-account-create-c2wmn"] Oct 08 08:13:54 crc kubenswrapper[4958]: E1008 08:13:54.687704 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b2a2e1a-ed98-4e3d-be11-cd7b668abc72" containerName="mariadb-database-create" Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.687740 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b2a2e1a-ed98-4e3d-be11-cd7b668abc72" containerName="mariadb-database-create" Oct 08 08:13:54 crc kubenswrapper[4958]: E1008 08:13:54.687810 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73ec4fb-e99a-4974-92fb-86a6339d686c" containerName="mariadb-database-create" Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.687829 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73ec4fb-e99a-4974-92fb-86a6339d686c" containerName="mariadb-database-create" Oct 08 08:13:54 crc kubenswrapper[4958]: E1008 08:13:54.687861 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e46d65ce-b975-425a-bea6-802c40beed1b" containerName="mariadb-database-create" Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.687878 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e46d65ce-b975-425a-bea6-802c40beed1b" containerName="mariadb-database-create" Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.688523 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b2a2e1a-ed98-4e3d-be11-cd7b668abc72" containerName="mariadb-database-create" Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.688575 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e73ec4fb-e99a-4974-92fb-86a6339d686c" containerName="mariadb-database-create" Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.688634 4958 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e46d65ce-b975-425a-bea6-802c40beed1b" containerName="mariadb-database-create" Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.689995 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1b00-account-create-c2wmn" Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.693213 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.704054 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1b00-account-create-c2wmn"] Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.831192 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48slw\" (UniqueName: \"kubernetes.io/projected/48aeb23a-d59a-4ebe-9abf-82760945f25c-kube-api-access-48slw\") pod \"nova-api-1b00-account-create-c2wmn\" (UID: \"48aeb23a-d59a-4ebe-9abf-82760945f25c\") " pod="openstack/nova-api-1b00-account-create-c2wmn" Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.886299 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cf8f-account-create-cqsdt"] Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.888368 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cf8f-account-create-cqsdt" Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.891889 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.900472 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cf8f-account-create-cqsdt"] Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.933453 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s94tm\" (UniqueName: \"kubernetes.io/projected/0195dd3c-dfc6-46a2-ba6f-54de2167410d-kube-api-access-s94tm\") pod \"nova-cell0-cf8f-account-create-cqsdt\" (UID: \"0195dd3c-dfc6-46a2-ba6f-54de2167410d\") " pod="openstack/nova-cell0-cf8f-account-create-cqsdt" Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.933540 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48slw\" (UniqueName: \"kubernetes.io/projected/48aeb23a-d59a-4ebe-9abf-82760945f25c-kube-api-access-48slw\") pod \"nova-api-1b00-account-create-c2wmn\" (UID: \"48aeb23a-d59a-4ebe-9abf-82760945f25c\") " pod="openstack/nova-api-1b00-account-create-c2wmn" Oct 08 08:13:54 crc kubenswrapper[4958]: I1008 08:13:54.972888 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48slw\" (UniqueName: \"kubernetes.io/projected/48aeb23a-d59a-4ebe-9abf-82760945f25c-kube-api-access-48slw\") pod \"nova-api-1b00-account-create-c2wmn\" (UID: \"48aeb23a-d59a-4ebe-9abf-82760945f25c\") " pod="openstack/nova-api-1b00-account-create-c2wmn" Oct 08 08:13:55 crc kubenswrapper[4958]: I1008 08:13:55.033613 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1b00-account-create-c2wmn" Oct 08 08:13:55 crc kubenswrapper[4958]: I1008 08:13:55.035525 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s94tm\" (UniqueName: \"kubernetes.io/projected/0195dd3c-dfc6-46a2-ba6f-54de2167410d-kube-api-access-s94tm\") pod \"nova-cell0-cf8f-account-create-cqsdt\" (UID: \"0195dd3c-dfc6-46a2-ba6f-54de2167410d\") " pod="openstack/nova-cell0-cf8f-account-create-cqsdt" Oct 08 08:13:55 crc kubenswrapper[4958]: I1008 08:13:55.063793 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s94tm\" (UniqueName: \"kubernetes.io/projected/0195dd3c-dfc6-46a2-ba6f-54de2167410d-kube-api-access-s94tm\") pod \"nova-cell0-cf8f-account-create-cqsdt\" (UID: \"0195dd3c-dfc6-46a2-ba6f-54de2167410d\") " pod="openstack/nova-cell0-cf8f-account-create-cqsdt" Oct 08 08:13:55 crc kubenswrapper[4958]: I1008 08:13:55.089548 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-aa36-account-create-rkzt5"] Oct 08 08:13:55 crc kubenswrapper[4958]: I1008 08:13:55.093252 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-aa36-account-create-rkzt5" Oct 08 08:13:55 crc kubenswrapper[4958]: I1008 08:13:55.097214 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 08 08:13:55 crc kubenswrapper[4958]: I1008 08:13:55.114491 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-aa36-account-create-rkzt5"] Oct 08 08:13:55 crc kubenswrapper[4958]: I1008 08:13:55.137966 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls2nf\" (UniqueName: \"kubernetes.io/projected/4cd0af62-9469-44e9-a330-83bcaa079b79-kube-api-access-ls2nf\") pod \"nova-cell1-aa36-account-create-rkzt5\" (UID: \"4cd0af62-9469-44e9-a330-83bcaa079b79\") " pod="openstack/nova-cell1-aa36-account-create-rkzt5" Oct 08 08:13:55 crc kubenswrapper[4958]: I1008 08:13:55.222988 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cf8f-account-create-cqsdt" Oct 08 08:13:55 crc kubenswrapper[4958]: I1008 08:13:55.239804 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls2nf\" (UniqueName: \"kubernetes.io/projected/4cd0af62-9469-44e9-a330-83bcaa079b79-kube-api-access-ls2nf\") pod \"nova-cell1-aa36-account-create-rkzt5\" (UID: \"4cd0af62-9469-44e9-a330-83bcaa079b79\") " pod="openstack/nova-cell1-aa36-account-create-rkzt5" Oct 08 08:13:55 crc kubenswrapper[4958]: I1008 08:13:55.265804 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls2nf\" (UniqueName: \"kubernetes.io/projected/4cd0af62-9469-44e9-a330-83bcaa079b79-kube-api-access-ls2nf\") pod \"nova-cell1-aa36-account-create-rkzt5\" (UID: \"4cd0af62-9469-44e9-a330-83bcaa079b79\") " pod="openstack/nova-cell1-aa36-account-create-rkzt5" Oct 08 08:13:55 crc kubenswrapper[4958]: I1008 08:13:55.480575 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-aa36-account-create-rkzt5" Oct 08 08:13:55 crc kubenswrapper[4958]: I1008 08:13:55.610929 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1b00-account-create-c2wmn"] Oct 08 08:13:55 crc kubenswrapper[4958]: I1008 08:13:55.704931 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cf8f-account-create-cqsdt"] Oct 08 08:13:55 crc kubenswrapper[4958]: I1008 08:13:55.922081 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-aa36-account-create-rkzt5"] Oct 08 08:13:56 crc kubenswrapper[4958]: I1008 08:13:56.044063 4958 generic.go:334] "Generic (PLEG): container finished" podID="0195dd3c-dfc6-46a2-ba6f-54de2167410d" containerID="0505df006522a6a3a9ea2bf229b57de57076bc1fead80c3cde433e8c84c63aaf" exitCode=0 Oct 08 08:13:56 crc kubenswrapper[4958]: I1008 08:13:56.044196 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cf8f-account-create-cqsdt" event={"ID":"0195dd3c-dfc6-46a2-ba6f-54de2167410d","Type":"ContainerDied","Data":"0505df006522a6a3a9ea2bf229b57de57076bc1fead80c3cde433e8c84c63aaf"} Oct 08 08:13:56 crc kubenswrapper[4958]: I1008 08:13:56.044294 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cf8f-account-create-cqsdt" event={"ID":"0195dd3c-dfc6-46a2-ba6f-54de2167410d","Type":"ContainerStarted","Data":"c5cbfcf2f53e337e593ae57e4ad058ea17842a45c78d11f42ee74e948753e6f4"} Oct 08 08:13:56 crc kubenswrapper[4958]: I1008 08:13:56.046617 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-aa36-account-create-rkzt5" event={"ID":"4cd0af62-9469-44e9-a330-83bcaa079b79","Type":"ContainerStarted","Data":"1801f027fb5e144a27b554171dd267a36f971fc6cb85cf35dc9a614887782c51"} Oct 08 08:13:56 crc kubenswrapper[4958]: I1008 08:13:56.048584 4958 generic.go:334] "Generic (PLEG): container finished" podID="48aeb23a-d59a-4ebe-9abf-82760945f25c" 
containerID="6df63c94af515efe1b133a31bd470dedb7637ceb741cf777158f608d33edc1e2" exitCode=0 Oct 08 08:13:56 crc kubenswrapper[4958]: I1008 08:13:56.048630 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1b00-account-create-c2wmn" event={"ID":"48aeb23a-d59a-4ebe-9abf-82760945f25c","Type":"ContainerDied","Data":"6df63c94af515efe1b133a31bd470dedb7637ceb741cf777158f608d33edc1e2"} Oct 08 08:13:56 crc kubenswrapper[4958]: I1008 08:13:56.048654 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1b00-account-create-c2wmn" event={"ID":"48aeb23a-d59a-4ebe-9abf-82760945f25c","Type":"ContainerStarted","Data":"a826cbe4fa623147bf02a1ef2f37b066292d5e56c1797593f79ca86750c6d0d8"} Oct 08 08:13:57 crc kubenswrapper[4958]: I1008 08:13:57.062599 4958 generic.go:334] "Generic (PLEG): container finished" podID="4cd0af62-9469-44e9-a330-83bcaa079b79" containerID="bb295622c3cc249b100557163f1bc96534a129c27069f11ff8699be17efb8ac4" exitCode=0 Oct 08 08:13:57 crc kubenswrapper[4958]: I1008 08:13:57.062672 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-aa36-account-create-rkzt5" event={"ID":"4cd0af62-9469-44e9-a330-83bcaa079b79","Type":"ContainerDied","Data":"bb295622c3cc249b100557163f1bc96534a129c27069f11ff8699be17efb8ac4"} Oct 08 08:13:57 crc kubenswrapper[4958]: I1008 08:13:57.603269 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1b00-account-create-c2wmn" Oct 08 08:13:57 crc kubenswrapper[4958]: I1008 08:13:57.607663 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cf8f-account-create-cqsdt" Oct 08 08:13:57 crc kubenswrapper[4958]: I1008 08:13:57.695349 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48slw\" (UniqueName: \"kubernetes.io/projected/48aeb23a-d59a-4ebe-9abf-82760945f25c-kube-api-access-48slw\") pod \"48aeb23a-d59a-4ebe-9abf-82760945f25c\" (UID: \"48aeb23a-d59a-4ebe-9abf-82760945f25c\") " Oct 08 08:13:57 crc kubenswrapper[4958]: I1008 08:13:57.695643 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s94tm\" (UniqueName: \"kubernetes.io/projected/0195dd3c-dfc6-46a2-ba6f-54de2167410d-kube-api-access-s94tm\") pod \"0195dd3c-dfc6-46a2-ba6f-54de2167410d\" (UID: \"0195dd3c-dfc6-46a2-ba6f-54de2167410d\") " Oct 08 08:13:57 crc kubenswrapper[4958]: I1008 08:13:57.700451 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48aeb23a-d59a-4ebe-9abf-82760945f25c-kube-api-access-48slw" (OuterVolumeSpecName: "kube-api-access-48slw") pod "48aeb23a-d59a-4ebe-9abf-82760945f25c" (UID: "48aeb23a-d59a-4ebe-9abf-82760945f25c"). InnerVolumeSpecName "kube-api-access-48slw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:13:57 crc kubenswrapper[4958]: I1008 08:13:57.700767 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0195dd3c-dfc6-46a2-ba6f-54de2167410d-kube-api-access-s94tm" (OuterVolumeSpecName: "kube-api-access-s94tm") pod "0195dd3c-dfc6-46a2-ba6f-54de2167410d" (UID: "0195dd3c-dfc6-46a2-ba6f-54de2167410d"). InnerVolumeSpecName "kube-api-access-s94tm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:13:57 crc kubenswrapper[4958]: I1008 08:13:57.797875 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48slw\" (UniqueName: \"kubernetes.io/projected/48aeb23a-d59a-4ebe-9abf-82760945f25c-kube-api-access-48slw\") on node \"crc\" DevicePath \"\"" Oct 08 08:13:57 crc kubenswrapper[4958]: I1008 08:13:57.797934 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s94tm\" (UniqueName: \"kubernetes.io/projected/0195dd3c-dfc6-46a2-ba6f-54de2167410d-kube-api-access-s94tm\") on node \"crc\" DevicePath \"\"" Oct 08 08:13:58 crc kubenswrapper[4958]: I1008 08:13:58.074077 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cf8f-account-create-cqsdt" Oct 08 08:13:58 crc kubenswrapper[4958]: I1008 08:13:58.074072 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cf8f-account-create-cqsdt" event={"ID":"0195dd3c-dfc6-46a2-ba6f-54de2167410d","Type":"ContainerDied","Data":"c5cbfcf2f53e337e593ae57e4ad058ea17842a45c78d11f42ee74e948753e6f4"} Oct 08 08:13:58 crc kubenswrapper[4958]: I1008 08:13:58.074338 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5cbfcf2f53e337e593ae57e4ad058ea17842a45c78d11f42ee74e948753e6f4" Oct 08 08:13:58 crc kubenswrapper[4958]: I1008 08:13:58.077385 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1b00-account-create-c2wmn" event={"ID":"48aeb23a-d59a-4ebe-9abf-82760945f25c","Type":"ContainerDied","Data":"a826cbe4fa623147bf02a1ef2f37b066292d5e56c1797593f79ca86750c6d0d8"} Oct 08 08:13:58 crc kubenswrapper[4958]: I1008 08:13:58.077451 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a826cbe4fa623147bf02a1ef2f37b066292d5e56c1797593f79ca86750c6d0d8" Oct 08 08:13:58 crc kubenswrapper[4958]: I1008 08:13:58.077404 4958 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-api-1b00-account-create-c2wmn" Oct 08 08:13:58 crc kubenswrapper[4958]: I1008 08:13:58.401160 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-aa36-account-create-rkzt5" Oct 08 08:13:58 crc kubenswrapper[4958]: I1008 08:13:58.516900 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls2nf\" (UniqueName: \"kubernetes.io/projected/4cd0af62-9469-44e9-a330-83bcaa079b79-kube-api-access-ls2nf\") pod \"4cd0af62-9469-44e9-a330-83bcaa079b79\" (UID: \"4cd0af62-9469-44e9-a330-83bcaa079b79\") " Oct 08 08:13:58 crc kubenswrapper[4958]: I1008 08:13:58.524335 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd0af62-9469-44e9-a330-83bcaa079b79-kube-api-access-ls2nf" (OuterVolumeSpecName: "kube-api-access-ls2nf") pod "4cd0af62-9469-44e9-a330-83bcaa079b79" (UID: "4cd0af62-9469-44e9-a330-83bcaa079b79"). InnerVolumeSpecName "kube-api-access-ls2nf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:13:58 crc kubenswrapper[4958]: I1008 08:13:58.620533 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls2nf\" (UniqueName: \"kubernetes.io/projected/4cd0af62-9469-44e9-a330-83bcaa079b79-kube-api-access-ls2nf\") on node \"crc\" DevicePath \"\"" Oct 08 08:13:59 crc kubenswrapper[4958]: I1008 08:13:59.092765 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-aa36-account-create-rkzt5" event={"ID":"4cd0af62-9469-44e9-a330-83bcaa079b79","Type":"ContainerDied","Data":"1801f027fb5e144a27b554171dd267a36f971fc6cb85cf35dc9a614887782c51"} Oct 08 08:13:59 crc kubenswrapper[4958]: I1008 08:13:59.092832 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1801f027fb5e144a27b554171dd267a36f971fc6cb85cf35dc9a614887782c51" Oct 08 08:13:59 crc kubenswrapper[4958]: I1008 08:13:59.092911 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-aa36-account-create-rkzt5" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.108252 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pd8b2"] Oct 08 08:14:00 crc kubenswrapper[4958]: E1008 08:14:00.108716 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0195dd3c-dfc6-46a2-ba6f-54de2167410d" containerName="mariadb-account-create" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.108731 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0195dd3c-dfc6-46a2-ba6f-54de2167410d" containerName="mariadb-account-create" Oct 08 08:14:00 crc kubenswrapper[4958]: E1008 08:14:00.108773 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48aeb23a-d59a-4ebe-9abf-82760945f25c" containerName="mariadb-account-create" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.108781 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="48aeb23a-d59a-4ebe-9abf-82760945f25c" containerName="mariadb-account-create" Oct 08 08:14:00 crc kubenswrapper[4958]: E1008 08:14:00.108800 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd0af62-9469-44e9-a330-83bcaa079b79" containerName="mariadb-account-create" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.108808 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd0af62-9469-44e9-a330-83bcaa079b79" containerName="mariadb-account-create" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.109034 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0195dd3c-dfc6-46a2-ba6f-54de2167410d" containerName="mariadb-account-create" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.110129 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="48aeb23a-d59a-4ebe-9abf-82760945f25c" containerName="mariadb-account-create" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.110203 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd0af62-9469-44e9-a330-83bcaa079b79" containerName="mariadb-account-create" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.111045 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.113546 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.113671 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w9jtv" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.123568 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.123941 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pd8b2"] Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.152967 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-config-data\") pod \"nova-cell0-conductor-db-sync-pd8b2\" (UID: \"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.153081 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-pd8b2\" (UID: \"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.153134 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-scripts\") pod \"nova-cell0-conductor-db-sync-pd8b2\" (UID: \"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " 
pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.153245 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vxmq\" (UniqueName: \"kubernetes.io/projected/d74c2a5c-adb7-4296-8605-a6e45a39c494-kube-api-access-6vxmq\") pod \"nova-cell0-conductor-db-sync-pd8b2\" (UID: \"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.254895 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-config-data\") pod \"nova-cell0-conductor-db-sync-pd8b2\" (UID: \"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.255003 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-pd8b2\" (UID: \"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.255068 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-scripts\") pod \"nova-cell0-conductor-db-sync-pd8b2\" (UID: \"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.255175 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vxmq\" (UniqueName: \"kubernetes.io/projected/d74c2a5c-adb7-4296-8605-a6e45a39c494-kube-api-access-6vxmq\") pod \"nova-cell0-conductor-db-sync-pd8b2\" (UID: 
\"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.260537 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-scripts\") pod \"nova-cell0-conductor-db-sync-pd8b2\" (UID: \"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.260895 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-pd8b2\" (UID: \"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.263081 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-config-data\") pod \"nova-cell0-conductor-db-sync-pd8b2\" (UID: \"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.282734 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vxmq\" (UniqueName: \"kubernetes.io/projected/d74c2a5c-adb7-4296-8605-a6e45a39c494-kube-api-access-6vxmq\") pod \"nova-cell0-conductor-db-sync-pd8b2\" (UID: \"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.428015 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:00 crc kubenswrapper[4958]: W1008 08:14:00.942770 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd74c2a5c_adb7_4296_8605_a6e45a39c494.slice/crio-4a5bbcbec71deccda36d962d329d6edaf566f41dd1eb4ef9ae1d8cf7728d0d77 WatchSource:0}: Error finding container 4a5bbcbec71deccda36d962d329d6edaf566f41dd1eb4ef9ae1d8cf7728d0d77: Status 404 returned error can't find the container with id 4a5bbcbec71deccda36d962d329d6edaf566f41dd1eb4ef9ae1d8cf7728d0d77 Oct 08 08:14:00 crc kubenswrapper[4958]: I1008 08:14:00.943247 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pd8b2"] Oct 08 08:14:01 crc kubenswrapper[4958]: I1008 08:14:01.112180 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pd8b2" event={"ID":"d74c2a5c-adb7-4296-8605-a6e45a39c494","Type":"ContainerStarted","Data":"4a5bbcbec71deccda36d962d329d6edaf566f41dd1eb4ef9ae1d8cf7728d0d77"} Oct 08 08:14:02 crc kubenswrapper[4958]: I1008 08:14:02.126720 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pd8b2" event={"ID":"d74c2a5c-adb7-4296-8605-a6e45a39c494","Type":"ContainerStarted","Data":"73f61f13c4b8b3fea109ae0994f246675e6c6e877a831c35356e341ed7b201d5"} Oct 08 08:14:02 crc kubenswrapper[4958]: I1008 08:14:02.157055 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-pd8b2" podStartSLOduration=2.157020094 podStartE2EDuration="2.157020094s" podCreationTimestamp="2025-10-08 08:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:02.149342796 +0000 UTC m=+5985.279035407" watchObservedRunningTime="2025-10-08 08:14:02.157020094 +0000 UTC 
m=+5985.286712765" Oct 08 08:14:06 crc kubenswrapper[4958]: I1008 08:14:06.844734 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:14:06 crc kubenswrapper[4958]: I1008 08:14:06.845341 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:14:07 crc kubenswrapper[4958]: I1008 08:14:07.178621 4958 generic.go:334] "Generic (PLEG): container finished" podID="d74c2a5c-adb7-4296-8605-a6e45a39c494" containerID="73f61f13c4b8b3fea109ae0994f246675e6c6e877a831c35356e341ed7b201d5" exitCode=0 Oct 08 08:14:07 crc kubenswrapper[4958]: I1008 08:14:07.178665 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pd8b2" event={"ID":"d74c2a5c-adb7-4296-8605-a6e45a39c494","Type":"ContainerDied","Data":"73f61f13c4b8b3fea109ae0994f246675e6c6e877a831c35356e341ed7b201d5"} Oct 08 08:14:08 crc kubenswrapper[4958]: I1008 08:14:08.574372 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:08 crc kubenswrapper[4958]: I1008 08:14:08.659826 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-scripts\") pod \"d74c2a5c-adb7-4296-8605-a6e45a39c494\" (UID: \"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " Oct 08 08:14:08 crc kubenswrapper[4958]: I1008 08:14:08.659899 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vxmq\" (UniqueName: \"kubernetes.io/projected/d74c2a5c-adb7-4296-8605-a6e45a39c494-kube-api-access-6vxmq\") pod \"d74c2a5c-adb7-4296-8605-a6e45a39c494\" (UID: \"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " Oct 08 08:14:08 crc kubenswrapper[4958]: I1008 08:14:08.659993 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-config-data\") pod \"d74c2a5c-adb7-4296-8605-a6e45a39c494\" (UID: \"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " Oct 08 08:14:08 crc kubenswrapper[4958]: I1008 08:14:08.660089 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-combined-ca-bundle\") pod \"d74c2a5c-adb7-4296-8605-a6e45a39c494\" (UID: \"d74c2a5c-adb7-4296-8605-a6e45a39c494\") " Oct 08 08:14:08 crc kubenswrapper[4958]: I1008 08:14:08.669300 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-scripts" (OuterVolumeSpecName: "scripts") pod "d74c2a5c-adb7-4296-8605-a6e45a39c494" (UID: "d74c2a5c-adb7-4296-8605-a6e45a39c494"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:08 crc kubenswrapper[4958]: I1008 08:14:08.669421 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d74c2a5c-adb7-4296-8605-a6e45a39c494-kube-api-access-6vxmq" (OuterVolumeSpecName: "kube-api-access-6vxmq") pod "d74c2a5c-adb7-4296-8605-a6e45a39c494" (UID: "d74c2a5c-adb7-4296-8605-a6e45a39c494"). InnerVolumeSpecName "kube-api-access-6vxmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:14:08 crc kubenswrapper[4958]: I1008 08:14:08.687294 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-config-data" (OuterVolumeSpecName: "config-data") pod "d74c2a5c-adb7-4296-8605-a6e45a39c494" (UID: "d74c2a5c-adb7-4296-8605-a6e45a39c494"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:08 crc kubenswrapper[4958]: I1008 08:14:08.692228 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d74c2a5c-adb7-4296-8605-a6e45a39c494" (UID: "d74c2a5c-adb7-4296-8605-a6e45a39c494"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:08 crc kubenswrapper[4958]: I1008 08:14:08.763882 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:08 crc kubenswrapper[4958]: I1008 08:14:08.763928 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vxmq\" (UniqueName: \"kubernetes.io/projected/d74c2a5c-adb7-4296-8605-a6e45a39c494-kube-api-access-6vxmq\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:08 crc kubenswrapper[4958]: I1008 08:14:08.763952 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:08 crc kubenswrapper[4958]: I1008 08:14:08.763987 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d74c2a5c-adb7-4296-8605-a6e45a39c494-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.210227 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pd8b2" event={"ID":"d74c2a5c-adb7-4296-8605-a6e45a39c494","Type":"ContainerDied","Data":"4a5bbcbec71deccda36d962d329d6edaf566f41dd1eb4ef9ae1d8cf7728d0d77"} Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.210295 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a5bbcbec71deccda36d962d329d6edaf566f41dd1eb4ef9ae1d8cf7728d0d77" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.210315 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pd8b2" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.314849 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 08:14:09 crc kubenswrapper[4958]: E1008 08:14:09.315600 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d74c2a5c-adb7-4296-8605-a6e45a39c494" containerName="nova-cell0-conductor-db-sync" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.315628 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d74c2a5c-adb7-4296-8605-a6e45a39c494" containerName="nova-cell0-conductor-db-sync" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.315903 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d74c2a5c-adb7-4296-8605-a6e45a39c494" containerName="nova-cell0-conductor-db-sync" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.316993 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.321846 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w9jtv" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.324768 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.331217 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.376760 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c47799-3381-43c2-85bd-d8e7c454560e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e9c47799-3381-43c2-85bd-d8e7c454560e\") " pod="openstack/nova-cell0-conductor-0" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 
08:14:09.376813 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c47799-3381-43c2-85bd-d8e7c454560e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e9c47799-3381-43c2-85bd-d8e7c454560e\") " pod="openstack/nova-cell0-conductor-0" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.376845 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88mpg\" (UniqueName: \"kubernetes.io/projected/e9c47799-3381-43c2-85bd-d8e7c454560e-kube-api-access-88mpg\") pod \"nova-cell0-conductor-0\" (UID: \"e9c47799-3381-43c2-85bd-d8e7c454560e\") " pod="openstack/nova-cell0-conductor-0" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.410024 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hm6mj"] Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.412523 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hm6mj" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.427317 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hm6mj"] Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.478472 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c47799-3381-43c2-85bd-d8e7c454560e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e9c47799-3381-43c2-85bd-d8e7c454560e\") " pod="openstack/nova-cell0-conductor-0" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.478524 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c47799-3381-43c2-85bd-d8e7c454560e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e9c47799-3381-43c2-85bd-d8e7c454560e\") " pod="openstack/nova-cell0-conductor-0" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.478559 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88mpg\" (UniqueName: \"kubernetes.io/projected/e9c47799-3381-43c2-85bd-d8e7c454560e-kube-api-access-88mpg\") pod \"nova-cell0-conductor-0\" (UID: \"e9c47799-3381-43c2-85bd-d8e7c454560e\") " pod="openstack/nova-cell0-conductor-0" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.478616 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxb5f\" (UniqueName: \"kubernetes.io/projected/134617e0-00ae-41f6-a248-1b7932fc616a-kube-api-access-vxb5f\") pod \"certified-operators-hm6mj\" (UID: \"134617e0-00ae-41f6-a248-1b7932fc616a\") " pod="openshift-marketplace/certified-operators-hm6mj" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.479092 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/134617e0-00ae-41f6-a248-1b7932fc616a-utilities\") pod \"certified-operators-hm6mj\" (UID: \"134617e0-00ae-41f6-a248-1b7932fc616a\") " pod="openshift-marketplace/certified-operators-hm6mj" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.479184 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134617e0-00ae-41f6-a248-1b7932fc616a-catalog-content\") pod \"certified-operators-hm6mj\" (UID: \"134617e0-00ae-41f6-a248-1b7932fc616a\") " pod="openshift-marketplace/certified-operators-hm6mj" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.485167 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c47799-3381-43c2-85bd-d8e7c454560e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e9c47799-3381-43c2-85bd-d8e7c454560e\") " pod="openstack/nova-cell0-conductor-0" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.486684 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c47799-3381-43c2-85bd-d8e7c454560e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e9c47799-3381-43c2-85bd-d8e7c454560e\") " pod="openstack/nova-cell0-conductor-0" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.499244 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88mpg\" (UniqueName: \"kubernetes.io/projected/e9c47799-3381-43c2-85bd-d8e7c454560e-kube-api-access-88mpg\") pod \"nova-cell0-conductor-0\" (UID: \"e9c47799-3381-43c2-85bd-d8e7c454560e\") " pod="openstack/nova-cell0-conductor-0" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.581729 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/134617e0-00ae-41f6-a248-1b7932fc616a-utilities\") pod \"certified-operators-hm6mj\" (UID: \"134617e0-00ae-41f6-a248-1b7932fc616a\") " pod="openshift-marketplace/certified-operators-hm6mj" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.582150 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134617e0-00ae-41f6-a248-1b7932fc616a-catalog-content\") pod \"certified-operators-hm6mj\" (UID: \"134617e0-00ae-41f6-a248-1b7932fc616a\") " pod="openshift-marketplace/certified-operators-hm6mj" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.582296 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxb5f\" (UniqueName: \"kubernetes.io/projected/134617e0-00ae-41f6-a248-1b7932fc616a-kube-api-access-vxb5f\") pod \"certified-operators-hm6mj\" (UID: \"134617e0-00ae-41f6-a248-1b7932fc616a\") " pod="openshift-marketplace/certified-operators-hm6mj" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.583061 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134617e0-00ae-41f6-a248-1b7932fc616a-utilities\") pod \"certified-operators-hm6mj\" (UID: \"134617e0-00ae-41f6-a248-1b7932fc616a\") " pod="openshift-marketplace/certified-operators-hm6mj" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.583216 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134617e0-00ae-41f6-a248-1b7932fc616a-catalog-content\") pod \"certified-operators-hm6mj\" (UID: \"134617e0-00ae-41f6-a248-1b7932fc616a\") " pod="openshift-marketplace/certified-operators-hm6mj" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.605181 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxb5f\" (UniqueName: 
\"kubernetes.io/projected/134617e0-00ae-41f6-a248-1b7932fc616a-kube-api-access-vxb5f\") pod \"certified-operators-hm6mj\" (UID: \"134617e0-00ae-41f6-a248-1b7932fc616a\") " pod="openshift-marketplace/certified-operators-hm6mj" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.643217 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 08 08:14:09 crc kubenswrapper[4958]: I1008 08:14:09.730922 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hm6mj" Oct 08 08:14:10 crc kubenswrapper[4958]: I1008 08:14:10.137660 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 08:14:10 crc kubenswrapper[4958]: I1008 08:14:10.241734 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e9c47799-3381-43c2-85bd-d8e7c454560e","Type":"ContainerStarted","Data":"1469e3f5f6a2acb3b56d487fa27033e78780c6d086f11bab0d7a13960aaf5f8f"} Oct 08 08:14:10 crc kubenswrapper[4958]: I1008 08:14:10.336848 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hm6mj"] Oct 08 08:14:10 crc kubenswrapper[4958]: W1008 08:14:10.342158 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod134617e0_00ae_41f6_a248_1b7932fc616a.slice/crio-99d91af17f36844c79c15cc807698138c6ec3d5a0073d59e22bb2b8f5322783c WatchSource:0}: Error finding container 99d91af17f36844c79c15cc807698138c6ec3d5a0073d59e22bb2b8f5322783c: Status 404 returned error can't find the container with id 99d91af17f36844c79c15cc807698138c6ec3d5a0073d59e22bb2b8f5322783c Oct 08 08:14:11 crc kubenswrapper[4958]: I1008 08:14:11.253576 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"e9c47799-3381-43c2-85bd-d8e7c454560e","Type":"ContainerStarted","Data":"7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8"} Oct 08 08:14:11 crc kubenswrapper[4958]: I1008 08:14:11.253912 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 08 08:14:11 crc kubenswrapper[4958]: I1008 08:14:11.267929 4958 generic.go:334] "Generic (PLEG): container finished" podID="134617e0-00ae-41f6-a248-1b7932fc616a" containerID="ccadc291f0b3ffe1cbeab5a5d3f137c1c9a04d31edebbadfe736e200e02d0800" exitCode=0 Oct 08 08:14:11 crc kubenswrapper[4958]: I1008 08:14:11.268067 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm6mj" event={"ID":"134617e0-00ae-41f6-a248-1b7932fc616a","Type":"ContainerDied","Data":"ccadc291f0b3ffe1cbeab5a5d3f137c1c9a04d31edebbadfe736e200e02d0800"} Oct 08 08:14:11 crc kubenswrapper[4958]: I1008 08:14:11.268112 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm6mj" event={"ID":"134617e0-00ae-41f6-a248-1b7932fc616a","Type":"ContainerStarted","Data":"99d91af17f36844c79c15cc807698138c6ec3d5a0073d59e22bb2b8f5322783c"} Oct 08 08:14:11 crc kubenswrapper[4958]: I1008 08:14:11.304003 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.303979275 podStartE2EDuration="2.303979275s" podCreationTimestamp="2025-10-08 08:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:11.292116923 +0000 UTC m=+5994.421809544" watchObservedRunningTime="2025-10-08 08:14:11.303979275 +0000 UTC m=+5994.433671896" Oct 08 08:14:12 crc kubenswrapper[4958]: I1008 08:14:12.279639 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm6mj" 
event={"ID":"134617e0-00ae-41f6-a248-1b7932fc616a","Type":"ContainerStarted","Data":"3d0a4574e267fb35005d23230fae00c551e1e84fcfb711c6f6d67d5b50a2416a"} Oct 08 08:14:13 crc kubenswrapper[4958]: I1008 08:14:13.292859 4958 generic.go:334] "Generic (PLEG): container finished" podID="134617e0-00ae-41f6-a248-1b7932fc616a" containerID="3d0a4574e267fb35005d23230fae00c551e1e84fcfb711c6f6d67d5b50a2416a" exitCode=0 Oct 08 08:14:13 crc kubenswrapper[4958]: I1008 08:14:13.292968 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm6mj" event={"ID":"134617e0-00ae-41f6-a248-1b7932fc616a","Type":"ContainerDied","Data":"3d0a4574e267fb35005d23230fae00c551e1e84fcfb711c6f6d67d5b50a2416a"} Oct 08 08:14:14 crc kubenswrapper[4958]: I1008 08:14:14.306585 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm6mj" event={"ID":"134617e0-00ae-41f6-a248-1b7932fc616a","Type":"ContainerStarted","Data":"84a9c9e6ca5efec8c84a3b4635cad1aeaf47b4ae3c1359efa0a9e3a4f55fa4ba"} Oct 08 08:14:14 crc kubenswrapper[4958]: I1008 08:14:14.336661 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hm6mj" podStartSLOduration=2.891545196 podStartE2EDuration="5.336640423s" podCreationTimestamp="2025-10-08 08:14:09 +0000 UTC" firstStartedPulling="2025-10-08 08:14:11.271147425 +0000 UTC m=+5994.400840036" lastFinishedPulling="2025-10-08 08:14:13.716242622 +0000 UTC m=+5996.845935263" observedRunningTime="2025-10-08 08:14:14.335355888 +0000 UTC m=+5997.465048489" watchObservedRunningTime="2025-10-08 08:14:14.336640423 +0000 UTC m=+5997.466333024" Oct 08 08:14:19 crc kubenswrapper[4958]: I1008 08:14:19.693614 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 08 08:14:19 crc kubenswrapper[4958]: I1008 08:14:19.732028 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-hm6mj" Oct 08 08:14:19 crc kubenswrapper[4958]: I1008 08:14:19.732620 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hm6mj" Oct 08 08:14:19 crc kubenswrapper[4958]: I1008 08:14:19.810350 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hm6mj" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.306914 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-rqcvh"] Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.308538 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.313341 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.313657 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.321500 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rqcvh"] Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.425150 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-scripts\") pod \"nova-cell0-cell-mapping-rqcvh\" (UID: \"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.425537 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxs8l\" (UniqueName: \"kubernetes.io/projected/8d5f7719-4d19-4a73-857c-574ba6d31f44-kube-api-access-jxs8l\") pod 
\"nova-cell0-cell-mapping-rqcvh\" (UID: \"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.425599 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-config-data\") pod \"nova-cell0-cell-mapping-rqcvh\" (UID: \"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.425689 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rqcvh\" (UID: \"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.457810 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.460204 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.463844 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.473171 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.485524 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hm6mj" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.514455 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.515806 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.530247 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a18744-ea11-4ed0-a570-f09eafa6e405-config-data\") pod \"nova-api-0\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " pod="openstack/nova-api-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.530360 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-scripts\") pod \"nova-cell0-cell-mapping-rqcvh\" (UID: \"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.530418 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp6t4\" (UniqueName: \"kubernetes.io/projected/75a18744-ea11-4ed0-a570-f09eafa6e405-kube-api-access-pp6t4\") pod \"nova-api-0\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " pod="openstack/nova-api-0" Oct 
08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.530456 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxs8l\" (UniqueName: \"kubernetes.io/projected/8d5f7719-4d19-4a73-857c-574ba6d31f44-kube-api-access-jxs8l\") pod \"nova-cell0-cell-mapping-rqcvh\" (UID: \"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.530483 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a18744-ea11-4ed0-a570-f09eafa6e405-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " pod="openstack/nova-api-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.530516 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-config-data\") pod \"nova-cell0-cell-mapping-rqcvh\" (UID: \"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.530522 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.530556 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a18744-ea11-4ed0-a570-f09eafa6e405-logs\") pod \"nova-api-0\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " pod="openstack/nova-api-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.530594 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rqcvh\" (UID: 
\"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.542672 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rqcvh\" (UID: \"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.564154 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-scripts\") pod \"nova-cell0-cell-mapping-rqcvh\" (UID: \"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.565851 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-config-data\") pod \"nova-cell0-cell-mapping-rqcvh\" (UID: \"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.576261 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.586872 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxs8l\" (UniqueName: \"kubernetes.io/projected/8d5f7719-4d19-4a73-857c-574ba6d31f44-kube-api-access-jxs8l\") pod \"nova-cell0-cell-mapping-rqcvh\" (UID: \"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.615916 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.617203 4958 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.627378 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.631823 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-config-data\") pod \"nova-metadata-0\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " pod="openstack/nova-metadata-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.631885 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " pod="openstack/nova-metadata-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.631932 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp6t4\" (UniqueName: \"kubernetes.io/projected/75a18744-ea11-4ed0-a570-f09eafa6e405-kube-api-access-pp6t4\") pod \"nova-api-0\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " pod="openstack/nova-api-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.631991 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a18744-ea11-4ed0-a570-f09eafa6e405-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " pod="openstack/nova-api-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.632055 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/75a18744-ea11-4ed0-a570-f09eafa6e405-logs\") pod \"nova-api-0\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " pod="openstack/nova-api-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.632118 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-logs\") pod \"nova-metadata-0\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " pod="openstack/nova-metadata-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.632157 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a18744-ea11-4ed0-a570-f09eafa6e405-config-data\") pod \"nova-api-0\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " pod="openstack/nova-api-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.632190 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfj9t\" (UniqueName: \"kubernetes.io/projected/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-kube-api-access-xfj9t\") pod \"nova-metadata-0\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " pod="openstack/nova-metadata-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.632964 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a18744-ea11-4ed0-a570-f09eafa6e405-logs\") pod \"nova-api-0\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " pod="openstack/nova-api-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.641262 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.643425 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.659822 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a18744-ea11-4ed0-a570-f09eafa6e405-config-data\") pod \"nova-api-0\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " pod="openstack/nova-api-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.674502 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a18744-ea11-4ed0-a570-f09eafa6e405-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " pod="openstack/nova-api-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.699449 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp6t4\" (UniqueName: \"kubernetes.io/projected/75a18744-ea11-4ed0-a570-f09eafa6e405-kube-api-access-pp6t4\") pod \"nova-api-0\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " pod="openstack/nova-api-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.699839 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hm6mj"] Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.726424 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d96845bf9-55dwk"] Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.728223 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d96845bf9-55dwk" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.733934 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-logs\") pod \"nova-metadata-0\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " pod="openstack/nova-metadata-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.733998 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6fdl\" (UniqueName: \"kubernetes.io/projected/64581546-f79e-46c2-8f52-d5634385bc00-kube-api-access-v6fdl\") pod \"nova-cell1-novncproxy-0\" (UID: \"64581546-f79e-46c2-8f52-d5634385bc00\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.734027 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfj9t\" (UniqueName: \"kubernetes.io/projected/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-kube-api-access-xfj9t\") pod \"nova-metadata-0\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " pod="openstack/nova-metadata-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.734048 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64581546-f79e-46c2-8f52-d5634385bc00-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"64581546-f79e-46c2-8f52-d5634385bc00\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.734068 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64581546-f79e-46c2-8f52-d5634385bc00-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"64581546-f79e-46c2-8f52-d5634385bc00\") " pod="openstack/nova-cell1-novncproxy-0" Oct 
08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.734133 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-config-data\") pod \"nova-metadata-0\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " pod="openstack/nova-metadata-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.734164 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " pod="openstack/nova-metadata-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.737939 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-logs\") pod \"nova-metadata-0\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " pod="openstack/nova-metadata-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.741823 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " pod="openstack/nova-metadata-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.742447 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-config-data\") pod \"nova-metadata-0\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " pod="openstack/nova-metadata-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.760860 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfj9t\" (UniqueName: \"kubernetes.io/projected/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-kube-api-access-xfj9t\") pod \"nova-metadata-0\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " pod="openstack/nova-metadata-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.760915 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.764042 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.772555 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d96845bf9-55dwk"]
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.779301 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.780855 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.784501 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.844276 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8add678c-078e-4875-9347-bfc1dfbc09b9-config-data\") pod \"nova-scheduler-0\" (UID: \"8add678c-078e-4875-9347-bfc1dfbc09b9\") " pod="openstack/nova-scheduler-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.844321 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-ovsdbserver-sb\") pod \"dnsmasq-dns-7d96845bf9-55dwk\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.844341 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-dns-svc\") pod \"dnsmasq-dns-7d96845bf9-55dwk\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.844376 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8add678c-078e-4875-9347-bfc1dfbc09b9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8add678c-078e-4875-9347-bfc1dfbc09b9\") " pod="openstack/nova-scheduler-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.844411 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6fdl\" (UniqueName: \"kubernetes.io/projected/64581546-f79e-46c2-8f52-d5634385bc00-kube-api-access-v6fdl\") pod \"nova-cell1-novncproxy-0\" (UID: \"64581546-f79e-46c2-8f52-d5634385bc00\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.844431 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-config\") pod \"dnsmasq-dns-7d96845bf9-55dwk\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.844447 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-ovsdbserver-nb\") pod \"dnsmasq-dns-7d96845bf9-55dwk\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.844835 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s4pg\" (UniqueName: \"kubernetes.io/projected/492302e1-a488-4f72-9c6a-a6147340d970-kube-api-access-9s4pg\") pod \"dnsmasq-dns-7d96845bf9-55dwk\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.844880 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64581546-f79e-46c2-8f52-d5634385bc00-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"64581546-f79e-46c2-8f52-d5634385bc00\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.844916 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64581546-f79e-46c2-8f52-d5634385bc00-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"64581546-f79e-46c2-8f52-d5634385bc00\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.844964 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbbzl\" (UniqueName: \"kubernetes.io/projected/8add678c-078e-4875-9347-bfc1dfbc09b9-kube-api-access-mbbzl\") pod \"nova-scheduler-0\" (UID: \"8add678c-078e-4875-9347-bfc1dfbc09b9\") " pod="openstack/nova-scheduler-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.856358 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64581546-f79e-46c2-8f52-d5634385bc00-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"64581546-f79e-46c2-8f52-d5634385bc00\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.858756 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64581546-f79e-46c2-8f52-d5634385bc00-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"64581546-f79e-46c2-8f52-d5634385bc00\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.859097 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6fdl\" (UniqueName: \"kubernetes.io/projected/64581546-f79e-46c2-8f52-d5634385bc00-kube-api-access-v6fdl\") pod \"nova-cell1-novncproxy-0\" (UID: \"64581546-f79e-46c2-8f52-d5634385bc00\") " pod="openstack/nova-cell1-novncproxy-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.942373 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.951582 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8add678c-078e-4875-9347-bfc1dfbc09b9-config-data\") pod \"nova-scheduler-0\" (UID: \"8add678c-078e-4875-9347-bfc1dfbc09b9\") " pod="openstack/nova-scheduler-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.951637 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-ovsdbserver-sb\") pod \"dnsmasq-dns-7d96845bf9-55dwk\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.951668 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-dns-svc\") pod \"dnsmasq-dns-7d96845bf9-55dwk\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.951751 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8add678c-078e-4875-9347-bfc1dfbc09b9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8add678c-078e-4875-9347-bfc1dfbc09b9\") " pod="openstack/nova-scheduler-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.951817 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-config\") pod \"dnsmasq-dns-7d96845bf9-55dwk\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.951830 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-ovsdbserver-nb\") pod \"dnsmasq-dns-7d96845bf9-55dwk\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.951875 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s4pg\" (UniqueName: \"kubernetes.io/projected/492302e1-a488-4f72-9c6a-a6147340d970-kube-api-access-9s4pg\") pod \"dnsmasq-dns-7d96845bf9-55dwk\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.951920 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbbzl\" (UniqueName: \"kubernetes.io/projected/8add678c-078e-4875-9347-bfc1dfbc09b9-kube-api-access-mbbzl\") pod \"nova-scheduler-0\" (UID: \"8add678c-078e-4875-9347-bfc1dfbc09b9\") " pod="openstack/nova-scheduler-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.954733 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-ovsdbserver-nb\") pod \"dnsmasq-dns-7d96845bf9-55dwk\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.955317 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-config\") pod \"dnsmasq-dns-7d96845bf9-55dwk\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.958018 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-ovsdbserver-sb\") pod \"dnsmasq-dns-7d96845bf9-55dwk\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.958397 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8add678c-078e-4875-9347-bfc1dfbc09b9-config-data\") pod \"nova-scheduler-0\" (UID: \"8add678c-078e-4875-9347-bfc1dfbc09b9\") " pod="openstack/nova-scheduler-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.961361 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8add678c-078e-4875-9347-bfc1dfbc09b9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8add678c-078e-4875-9347-bfc1dfbc09b9\") " pod="openstack/nova-scheduler-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.962667 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-dns-svc\") pod \"dnsmasq-dns-7d96845bf9-55dwk\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.968845 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbbzl\" (UniqueName: \"kubernetes.io/projected/8add678c-078e-4875-9347-bfc1dfbc09b9-kube-api-access-mbbzl\") pod \"nova-scheduler-0\" (UID: \"8add678c-078e-4875-9347-bfc1dfbc09b9\") " pod="openstack/nova-scheduler-0"
Oct 08 08:14:20 crc kubenswrapper[4958]: I1008 08:14:20.973381 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s4pg\" (UniqueName: \"kubernetes.io/projected/492302e1-a488-4f72-9c6a-a6147340d970-kube-api-access-9s4pg\") pod \"dnsmasq-dns-7d96845bf9-55dwk\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.092441 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.110501 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.125377 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.198798 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rqcvh"]
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.378488 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8w9dp"]
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.380149 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8w9dp"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.382968 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.383176 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.385233 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.393529 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8w9dp"]
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.398097 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rqcvh" event={"ID":"8d5f7719-4d19-4a73-857c-574ba6d31f44","Type":"ContainerStarted","Data":"eb2a0b0eaefdc81178f0e610cfdcf15f49a4fb48ef46640c4ba889e0b179bd12"}
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.467039 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 08 08:14:21 crc kubenswrapper[4958]: W1008 08:14:21.498257 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d2d9cf4_f6f6_41a1_bad7_2466588c8a3d.slice/crio-50a681018b50c8dc839b56f4c949f9f8adafb4a95c74146d62e5c480f9960692 WatchSource:0}: Error finding container 50a681018b50c8dc839b56f4c949f9f8adafb4a95c74146d62e5c480f9960692: Status 404 returned error can't find the container with id 50a681018b50c8dc839b56f4c949f9f8adafb4a95c74146d62e5c480f9960692
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.566023 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-config-data\") pod \"nova-cell1-conductor-db-sync-8w9dp\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " pod="openstack/nova-cell1-conductor-db-sync-8w9dp"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.566076 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwkkh\" (UniqueName: \"kubernetes.io/projected/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-kube-api-access-zwkkh\") pod \"nova-cell1-conductor-db-sync-8w9dp\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " pod="openstack/nova-cell1-conductor-db-sync-8w9dp"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.566115 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-scripts\") pod \"nova-cell1-conductor-db-sync-8w9dp\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " pod="openstack/nova-cell1-conductor-db-sync-8w9dp"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.566220 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8w9dp\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " pod="openstack/nova-cell1-conductor-db-sync-8w9dp"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.615686 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d96845bf9-55dwk"]
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.668607 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-config-data\") pod \"nova-cell1-conductor-db-sync-8w9dp\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " pod="openstack/nova-cell1-conductor-db-sync-8w9dp"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.668669 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwkkh\" (UniqueName: \"kubernetes.io/projected/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-kube-api-access-zwkkh\") pod \"nova-cell1-conductor-db-sync-8w9dp\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " pod="openstack/nova-cell1-conductor-db-sync-8w9dp"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.668690 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-scripts\") pod \"nova-cell1-conductor-db-sync-8w9dp\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " pod="openstack/nova-cell1-conductor-db-sync-8w9dp"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.668767 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8w9dp\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " pod="openstack/nova-cell1-conductor-db-sync-8w9dp"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.671722 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-scripts\") pod \"nova-cell1-conductor-db-sync-8w9dp\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " pod="openstack/nova-cell1-conductor-db-sync-8w9dp"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.672199 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-config-data\") pod \"nova-cell1-conductor-db-sync-8w9dp\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " pod="openstack/nova-cell1-conductor-db-sync-8w9dp"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.677540 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8w9dp\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " pod="openstack/nova-cell1-conductor-db-sync-8w9dp"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.687554 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwkkh\" (UniqueName: \"kubernetes.io/projected/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-kube-api-access-zwkkh\") pod \"nova-cell1-conductor-db-sync-8w9dp\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " pod="openstack/nova-cell1-conductor-db-sync-8w9dp"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.726378 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8w9dp"
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.764663 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 08 08:14:21 crc kubenswrapper[4958]: I1008 08:14:21.786372 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 08 08:14:21 crc kubenswrapper[4958]: W1008 08:14:21.818058 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8add678c_078e_4875_9347_bfc1dfbc09b9.slice/crio-ec1d41d71ed6c0810f93ce3487585a63a0769c6f3ec09be1e62393d8a5040ba9 WatchSource:0}: Error finding container ec1d41d71ed6c0810f93ce3487585a63a0769c6f3ec09be1e62393d8a5040ba9: Status 404 returned error can't find the container with id ec1d41d71ed6c0810f93ce3487585a63a0769c6f3ec09be1e62393d8a5040ba9
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.288424 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8w9dp"]
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.407586 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d","Type":"ContainerStarted","Data":"42c225a5c21d49e6294da657e4fa3af7063ab3aa7c6b88a018c8e5b46271eb6a"}
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.407677 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d","Type":"ContainerStarted","Data":"599b21c9fdedab1878ccfb52e2ac32b63753d639c292c044288c7d5e3da7950d"}
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.407697 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d","Type":"ContainerStarted","Data":"50a681018b50c8dc839b56f4c949f9f8adafb4a95c74146d62e5c480f9960692"}
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.410009 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"64581546-f79e-46c2-8f52-d5634385bc00","Type":"ContainerStarted","Data":"89bee769e5a4773573344d5ebd8054a78d79295858c85d7f33c2606c2594eef8"}
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.410140 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"64581546-f79e-46c2-8f52-d5634385bc00","Type":"ContainerStarted","Data":"f93f088bc897f2d4944412ee88789427d07d2f69f9c00e818097b9c1b477c122"}
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.412445 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75a18744-ea11-4ed0-a570-f09eafa6e405","Type":"ContainerStarted","Data":"789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6"}
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.412483 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75a18744-ea11-4ed0-a570-f09eafa6e405","Type":"ContainerStarted","Data":"01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99"}
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.412496 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75a18744-ea11-4ed0-a570-f09eafa6e405","Type":"ContainerStarted","Data":"70446acbe14f39dde2488c3bb334d35cc204b0abc3b948ee3debecd5695f36fa"}
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.414260 4958 generic.go:334] "Generic (PLEG): container finished" podID="492302e1-a488-4f72-9c6a-a6147340d970" containerID="a0299c96d4d3d7747c379addefd3be950e3dec7630b729995d1db3f1c1377e63" exitCode=0
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.414318 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d96845bf9-55dwk" event={"ID":"492302e1-a488-4f72-9c6a-a6147340d970","Type":"ContainerDied","Data":"a0299c96d4d3d7747c379addefd3be950e3dec7630b729995d1db3f1c1377e63"}
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.414473 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d96845bf9-55dwk" event={"ID":"492302e1-a488-4f72-9c6a-a6147340d970","Type":"ContainerStarted","Data":"eed1a96dbd5602db2af946f060972586a51fac9048f43a9f2676c2afd7f329aa"}
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.419530 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8add678c-078e-4875-9347-bfc1dfbc09b9","Type":"ContainerStarted","Data":"53c04cbaa3b66ae24b49ba0dfcf77fd02995f8e5d1f1f79e647d1ed8c888e8ec"}
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.419581 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8add678c-078e-4875-9347-bfc1dfbc09b9","Type":"ContainerStarted","Data":"ec1d41d71ed6c0810f93ce3487585a63a0769c6f3ec09be1e62393d8a5040ba9"}
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.425429 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rqcvh" event={"ID":"8d5f7719-4d19-4a73-857c-574ba6d31f44","Type":"ContainerStarted","Data":"091d14067a0bf10ae12e0afdcf2678a428aa1ec0f4f033673ee8ceeb66d25343"}
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.426909 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8w9dp" event={"ID":"1935d212-d2e9-4c81-b5ee-ab05ab45cf51","Type":"ContainerStarted","Data":"f01d920d4869c4c934d577f1db5b28093c03d8d5eb5ba2819ab74e8f1dfb057f"}
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.427074 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hm6mj" podUID="134617e0-00ae-41f6-a248-1b7932fc616a" containerName="registry-server" containerID="cri-o://84a9c9e6ca5efec8c84a3b4635cad1aeaf47b4ae3c1359efa0a9e3a4f55fa4ba" gracePeriod=2
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.437001 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.436981603 podStartE2EDuration="2.436981603s" podCreationTimestamp="2025-10-08 08:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:22.434772103 +0000 UTC m=+6005.564464724" watchObservedRunningTime="2025-10-08 08:14:22.436981603 +0000 UTC m=+6005.566674204"
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.462463 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.462448033 podStartE2EDuration="2.462448033s" podCreationTimestamp="2025-10-08 08:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:22.45902771 +0000 UTC m=+6005.588720331" watchObservedRunningTime="2025-10-08 08:14:22.462448033 +0000 UTC m=+6005.592140634"
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.484372 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.484351236 podStartE2EDuration="2.484351236s" podCreationTimestamp="2025-10-08 08:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:22.477912152 +0000 UTC m=+6005.607604753" watchObservedRunningTime="2025-10-08 08:14:22.484351236 +0000 UTC m=+6005.614043837"
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.504209 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.5041912440000003 podStartE2EDuration="2.504191244s" podCreationTimestamp="2025-10-08 08:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:22.496612628 +0000 UTC m=+6005.626305229" watchObservedRunningTime="2025-10-08 08:14:22.504191244 +0000 UTC m=+6005.633883845"
Oct 08 08:14:22 crc kubenswrapper[4958]: I1008 08:14:22.579850 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-rqcvh" podStartSLOduration=2.579834064 podStartE2EDuration="2.579834064s" podCreationTimestamp="2025-10-08 08:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:22.578292362 +0000 UTC m=+6005.707984963" watchObservedRunningTime="2025-10-08 08:14:22.579834064 +0000 UTC m=+6005.709526665"
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.065230 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hm6mj"
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.132434 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134617e0-00ae-41f6-a248-1b7932fc616a-catalog-content\") pod \"134617e0-00ae-41f6-a248-1b7932fc616a\" (UID: \"134617e0-00ae-41f6-a248-1b7932fc616a\") "
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.132701 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134617e0-00ae-41f6-a248-1b7932fc616a-utilities\") pod \"134617e0-00ae-41f6-a248-1b7932fc616a\" (UID: \"134617e0-00ae-41f6-a248-1b7932fc616a\") "
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.132744 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxb5f\" (UniqueName: \"kubernetes.io/projected/134617e0-00ae-41f6-a248-1b7932fc616a-kube-api-access-vxb5f\") pod \"134617e0-00ae-41f6-a248-1b7932fc616a\" (UID: \"134617e0-00ae-41f6-a248-1b7932fc616a\") "
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.133606 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/134617e0-00ae-41f6-a248-1b7932fc616a-utilities" (OuterVolumeSpecName: "utilities") pod "134617e0-00ae-41f6-a248-1b7932fc616a" (UID: "134617e0-00ae-41f6-a248-1b7932fc616a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.139912 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134617e0-00ae-41f6-a248-1b7932fc616a-kube-api-access-vxb5f" (OuterVolumeSpecName: "kube-api-access-vxb5f") pod "134617e0-00ae-41f6-a248-1b7932fc616a" (UID: "134617e0-00ae-41f6-a248-1b7932fc616a"). InnerVolumeSpecName "kube-api-access-vxb5f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.182165 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/134617e0-00ae-41f6-a248-1b7932fc616a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "134617e0-00ae-41f6-a248-1b7932fc616a" (UID: "134617e0-00ae-41f6-a248-1b7932fc616a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.235493 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134617e0-00ae-41f6-a248-1b7932fc616a-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.235528 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134617e0-00ae-41f6-a248-1b7932fc616a-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.235539 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxb5f\" (UniqueName: \"kubernetes.io/projected/134617e0-00ae-41f6-a248-1b7932fc616a-kube-api-access-vxb5f\") on node \"crc\" DevicePath \"\""
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.435399 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8w9dp" event={"ID":"1935d212-d2e9-4c81-b5ee-ab05ab45cf51","Type":"ContainerStarted","Data":"465f0d12f0d50aab4eada26f0aa7c4269a49dc69b63986ec81b72dd028c7fe06"}
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.438856 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d96845bf9-55dwk" event={"ID":"492302e1-a488-4f72-9c6a-a6147340d970","Type":"ContainerStarted","Data":"1b92dbe901bdf178227d7d957751eca42ca7d34f7d41b372b825d297fdc4f642"}
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.439323 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d96845bf9-55dwk"
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.441470 4958 generic.go:334] "Generic (PLEG): container finished" podID="134617e0-00ae-41f6-a248-1b7932fc616a" containerID="84a9c9e6ca5efec8c84a3b4635cad1aeaf47b4ae3c1359efa0a9e3a4f55fa4ba" exitCode=0
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.442010 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hm6mj"
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.444241 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm6mj" event={"ID":"134617e0-00ae-41f6-a248-1b7932fc616a","Type":"ContainerDied","Data":"84a9c9e6ca5efec8c84a3b4635cad1aeaf47b4ae3c1359efa0a9e3a4f55fa4ba"}
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.444550 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hm6mj" event={"ID":"134617e0-00ae-41f6-a248-1b7932fc616a","Type":"ContainerDied","Data":"99d91af17f36844c79c15cc807698138c6ec3d5a0073d59e22bb2b8f5322783c"}
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.444728 4958 scope.go:117] "RemoveContainer" containerID="84a9c9e6ca5efec8c84a3b4635cad1aeaf47b4ae3c1359efa0a9e3a4f55fa4ba"
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.453707 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-8w9dp" podStartSLOduration=2.453690514 podStartE2EDuration="2.453690514s" podCreationTimestamp="2025-10-08 08:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:23.450351583 +0000 UTC m=+6006.580044184" watchObservedRunningTime="2025-10-08 08:14:23.453690514 +0000 UTC m=+6006.583383115"
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.483107 4958 scope.go:117] "RemoveContainer" containerID="3d0a4574e267fb35005d23230fae00c551e1e84fcfb711c6f6d67d5b50a2416a"
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.492261 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d96845bf9-55dwk" podStartSLOduration=3.492244838 podStartE2EDuration="3.492244838s" podCreationTimestamp="2025-10-08 08:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:23.490659816 +0000 UTC m=+6006.620352417" watchObservedRunningTime="2025-10-08 08:14:23.492244838 +0000 UTC m=+6006.621937439"
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.518231 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hm6mj"]
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.524811 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hm6mj"]
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.532639 4958 scope.go:117] "RemoveContainer" containerID="ccadc291f0b3ffe1cbeab5a5d3f137c1c9a04d31edebbadfe736e200e02d0800"
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.586208 4958 scope.go:117] "RemoveContainer" containerID="84a9c9e6ca5efec8c84a3b4635cad1aeaf47b4ae3c1359efa0a9e3a4f55fa4ba"
Oct 08 08:14:23 crc kubenswrapper[4958]: E1008 08:14:23.587941 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84a9c9e6ca5efec8c84a3b4635cad1aeaf47b4ae3c1359efa0a9e3a4f55fa4ba\": container with ID starting with 84a9c9e6ca5efec8c84a3b4635cad1aeaf47b4ae3c1359efa0a9e3a4f55fa4ba not found: ID does not exist" containerID="84a9c9e6ca5efec8c84a3b4635cad1aeaf47b4ae3c1359efa0a9e3a4f55fa4ba"
Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.587979 4958 pod_container_deletor.go:53]
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84a9c9e6ca5efec8c84a3b4635cad1aeaf47b4ae3c1359efa0a9e3a4f55fa4ba"} err="failed to get container status \"84a9c9e6ca5efec8c84a3b4635cad1aeaf47b4ae3c1359efa0a9e3a4f55fa4ba\": rpc error: code = NotFound desc = could not find container \"84a9c9e6ca5efec8c84a3b4635cad1aeaf47b4ae3c1359efa0a9e3a4f55fa4ba\": container with ID starting with 84a9c9e6ca5efec8c84a3b4635cad1aeaf47b4ae3c1359efa0a9e3a4f55fa4ba not found: ID does not exist" Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.588000 4958 scope.go:117] "RemoveContainer" containerID="3d0a4574e267fb35005d23230fae00c551e1e84fcfb711c6f6d67d5b50a2416a" Oct 08 08:14:23 crc kubenswrapper[4958]: E1008 08:14:23.588319 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d0a4574e267fb35005d23230fae00c551e1e84fcfb711c6f6d67d5b50a2416a\": container with ID starting with 3d0a4574e267fb35005d23230fae00c551e1e84fcfb711c6f6d67d5b50a2416a not found: ID does not exist" containerID="3d0a4574e267fb35005d23230fae00c551e1e84fcfb711c6f6d67d5b50a2416a" Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.588338 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d0a4574e267fb35005d23230fae00c551e1e84fcfb711c6f6d67d5b50a2416a"} err="failed to get container status \"3d0a4574e267fb35005d23230fae00c551e1e84fcfb711c6f6d67d5b50a2416a\": rpc error: code = NotFound desc = could not find container \"3d0a4574e267fb35005d23230fae00c551e1e84fcfb711c6f6d67d5b50a2416a\": container with ID starting with 3d0a4574e267fb35005d23230fae00c551e1e84fcfb711c6f6d67d5b50a2416a not found: ID does not exist" Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.588351 4958 scope.go:117] "RemoveContainer" containerID="ccadc291f0b3ffe1cbeab5a5d3f137c1c9a04d31edebbadfe736e200e02d0800" Oct 08 08:14:23 crc kubenswrapper[4958]: E1008 08:14:23.588626 4958 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccadc291f0b3ffe1cbeab5a5d3f137c1c9a04d31edebbadfe736e200e02d0800\": container with ID starting with ccadc291f0b3ffe1cbeab5a5d3f137c1c9a04d31edebbadfe736e200e02d0800 not found: ID does not exist" containerID="ccadc291f0b3ffe1cbeab5a5d3f137c1c9a04d31edebbadfe736e200e02d0800" Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.588649 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccadc291f0b3ffe1cbeab5a5d3f137c1c9a04d31edebbadfe736e200e02d0800"} err="failed to get container status \"ccadc291f0b3ffe1cbeab5a5d3f137c1c9a04d31edebbadfe736e200e02d0800\": rpc error: code = NotFound desc = could not find container \"ccadc291f0b3ffe1cbeab5a5d3f137c1c9a04d31edebbadfe736e200e02d0800\": container with ID starting with ccadc291f0b3ffe1cbeab5a5d3f137c1c9a04d31edebbadfe736e200e02d0800 not found: ID does not exist" Oct 08 08:14:23 crc kubenswrapper[4958]: I1008 08:14:23.592820 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="134617e0-00ae-41f6-a248-1b7932fc616a" path="/var/lib/kubelet/pods/134617e0-00ae-41f6-a248-1b7932fc616a/volumes" Oct 08 08:14:24 crc kubenswrapper[4958]: I1008 08:14:24.903923 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:24 crc kubenswrapper[4958]: I1008 08:14:24.904621 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d" containerName="nova-metadata-log" containerID="cri-o://599b21c9fdedab1878ccfb52e2ac32b63753d639c292c044288c7d5e3da7950d" gracePeriod=30 Oct 08 08:14:24 crc kubenswrapper[4958]: I1008 08:14:24.904817 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d" containerName="nova-metadata-metadata" 
containerID="cri-o://42c225a5c21d49e6294da657e4fa3af7063ab3aa7c6b88a018c8e5b46271eb6a" gracePeriod=30 Oct 08 08:14:24 crc kubenswrapper[4958]: I1008 08:14:24.925221 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 08:14:24 crc kubenswrapper[4958]: I1008 08:14:24.925433 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="64581546-f79e-46c2-8f52-d5634385bc00" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://89bee769e5a4773573344d5ebd8054a78d79295858c85d7f33c2606c2594eef8" gracePeriod=30 Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.483751 4958 generic.go:334] "Generic (PLEG): container finished" podID="64581546-f79e-46c2-8f52-d5634385bc00" containerID="89bee769e5a4773573344d5ebd8054a78d79295858c85d7f33c2606c2594eef8" exitCode=0 Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.484251 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"64581546-f79e-46c2-8f52-d5634385bc00","Type":"ContainerDied","Data":"89bee769e5a4773573344d5ebd8054a78d79295858c85d7f33c2606c2594eef8"} Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.486257 4958 generic.go:334] "Generic (PLEG): container finished" podID="1935d212-d2e9-4c81-b5ee-ab05ab45cf51" containerID="465f0d12f0d50aab4eada26f0aa7c4269a49dc69b63986ec81b72dd028c7fe06" exitCode=0 Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.486356 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8w9dp" event={"ID":"1935d212-d2e9-4c81-b5ee-ab05ab45cf51","Type":"ContainerDied","Data":"465f0d12f0d50aab4eada26f0aa7c4269a49dc69b63986ec81b72dd028c7fe06"} Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.490054 4958 generic.go:334] "Generic (PLEG): container finished" podID="4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d" 
containerID="42c225a5c21d49e6294da657e4fa3af7063ab3aa7c6b88a018c8e5b46271eb6a" exitCode=0 Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.490080 4958 generic.go:334] "Generic (PLEG): container finished" podID="4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d" containerID="599b21c9fdedab1878ccfb52e2ac32b63753d639c292c044288c7d5e3da7950d" exitCode=143 Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.490094 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d","Type":"ContainerDied","Data":"42c225a5c21d49e6294da657e4fa3af7063ab3aa7c6b88a018c8e5b46271eb6a"} Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.490141 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d","Type":"ContainerDied","Data":"599b21c9fdedab1878ccfb52e2ac32b63753d639c292c044288c7d5e3da7950d"} Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.490154 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d","Type":"ContainerDied","Data":"50a681018b50c8dc839b56f4c949f9f8adafb4a95c74146d62e5c480f9960692"} Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.490164 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a681018b50c8dc839b56f4c949f9f8adafb4a95c74146d62e5c480f9960692" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.515175 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.576993 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-logs\") pod \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.577066 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-config-data\") pod \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.577188 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-combined-ca-bundle\") pod \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.577212 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfj9t\" (UniqueName: \"kubernetes.io/projected/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-kube-api-access-xfj9t\") pod \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\" (UID: \"4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d\") " Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.578663 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-logs" (OuterVolumeSpecName: "logs") pod "4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d" (UID: "4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.586390 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-kube-api-access-xfj9t" (OuterVolumeSpecName: "kube-api-access-xfj9t") pod "4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d" (UID: "4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d"). InnerVolumeSpecName "kube-api-access-xfj9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.586432 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.619273 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-config-data" (OuterVolumeSpecName: "config-data") pod "4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d" (UID: "4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.619574 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d" (UID: "4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.679595 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64581546-f79e-46c2-8f52-d5634385bc00-combined-ca-bundle\") pod \"64581546-f79e-46c2-8f52-d5634385bc00\" (UID: \"64581546-f79e-46c2-8f52-d5634385bc00\") " Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.679725 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64581546-f79e-46c2-8f52-d5634385bc00-config-data\") pod \"64581546-f79e-46c2-8f52-d5634385bc00\" (UID: \"64581546-f79e-46c2-8f52-d5634385bc00\") " Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.679764 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6fdl\" (UniqueName: \"kubernetes.io/projected/64581546-f79e-46c2-8f52-d5634385bc00-kube-api-access-v6fdl\") pod \"64581546-f79e-46c2-8f52-d5634385bc00\" (UID: \"64581546-f79e-46c2-8f52-d5634385bc00\") " Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.680862 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.680882 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.680891 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.680930 4958 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-xfj9t\" (UniqueName: \"kubernetes.io/projected/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d-kube-api-access-xfj9t\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.683085 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64581546-f79e-46c2-8f52-d5634385bc00-kube-api-access-v6fdl" (OuterVolumeSpecName: "kube-api-access-v6fdl") pod "64581546-f79e-46c2-8f52-d5634385bc00" (UID: "64581546-f79e-46c2-8f52-d5634385bc00"). InnerVolumeSpecName "kube-api-access-v6fdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.705709 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64581546-f79e-46c2-8f52-d5634385bc00-config-data" (OuterVolumeSpecName: "config-data") pod "64581546-f79e-46c2-8f52-d5634385bc00" (UID: "64581546-f79e-46c2-8f52-d5634385bc00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.707965 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64581546-f79e-46c2-8f52-d5634385bc00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64581546-f79e-46c2-8f52-d5634385bc00" (UID: "64581546-f79e-46c2-8f52-d5634385bc00"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.783340 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64581546-f79e-46c2-8f52-d5634385bc00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.783387 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64581546-f79e-46c2-8f52-d5634385bc00-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:25 crc kubenswrapper[4958]: I1008 08:14:25.783406 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6fdl\" (UniqueName: \"kubernetes.io/projected/64581546-f79e-46c2-8f52-d5634385bc00-kube-api-access-v6fdl\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.126197 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.506624 4958 generic.go:334] "Generic (PLEG): container finished" podID="8d5f7719-4d19-4a73-857c-574ba6d31f44" containerID="091d14067a0bf10ae12e0afdcf2678a428aa1ec0f4f033673ee8ceeb66d25343" exitCode=0 Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.506741 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rqcvh" event={"ID":"8d5f7719-4d19-4a73-857c-574ba6d31f44","Type":"ContainerDied","Data":"091d14067a0bf10ae12e0afdcf2678a428aa1ec0f4f033673ee8ceeb66d25343"} Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.510065 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.510056 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"64581546-f79e-46c2-8f52-d5634385bc00","Type":"ContainerDied","Data":"f93f088bc897f2d4944412ee88789427d07d2f69f9c00e818097b9c1b477c122"} Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.510228 4958 scope.go:117] "RemoveContainer" containerID="89bee769e5a4773573344d5ebd8054a78d79295858c85d7f33c2606c2594eef8" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.510565 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.618024 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.630687 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.645274 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.657573 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:26 crc kubenswrapper[4958]: E1008 08:14:26.657923 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134617e0-00ae-41f6-a248-1b7932fc616a" containerName="extract-content" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.657938 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="134617e0-00ae-41f6-a248-1b7932fc616a" containerName="extract-content" Oct 08 08:14:26 crc kubenswrapper[4958]: E1008 08:14:26.657963 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64581546-f79e-46c2-8f52-d5634385bc00" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 08:14:26 crc kubenswrapper[4958]: 
I1008 08:14:26.657970 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="64581546-f79e-46c2-8f52-d5634385bc00" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 08:14:26 crc kubenswrapper[4958]: E1008 08:14:26.657992 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d" containerName="nova-metadata-log" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.658000 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d" containerName="nova-metadata-log" Oct 08 08:14:26 crc kubenswrapper[4958]: E1008 08:14:26.658012 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134617e0-00ae-41f6-a248-1b7932fc616a" containerName="registry-server" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.658018 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="134617e0-00ae-41f6-a248-1b7932fc616a" containerName="registry-server" Oct 08 08:14:26 crc kubenswrapper[4958]: E1008 08:14:26.658037 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d" containerName="nova-metadata-metadata" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.658042 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d" containerName="nova-metadata-metadata" Oct 08 08:14:26 crc kubenswrapper[4958]: E1008 08:14:26.658056 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134617e0-00ae-41f6-a248-1b7932fc616a" containerName="extract-utilities" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.658061 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="134617e0-00ae-41f6-a248-1b7932fc616a" containerName="extract-utilities" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.658243 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d" containerName="nova-metadata-metadata" Oct 08 
08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.658256 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="134617e0-00ae-41f6-a248-1b7932fc616a" containerName="registry-server" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.658267 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d" containerName="nova-metadata-log" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.658278 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="64581546-f79e-46c2-8f52-d5634385bc00" containerName="nova-cell1-novncproxy-novncproxy" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.659189 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.662242 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.662623 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.666793 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.676592 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.694849 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.697191 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.700195 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.700466 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.701528 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.701594 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.701617 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4126f380-1914-4db5-a401-ddb4aaca8bff-logs\") pod \"nova-metadata-0\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.701683 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 
crc kubenswrapper[4958]: I1008 08:14:26.701705 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.701733 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8ksb\" (UniqueName: \"kubernetes.io/projected/fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0-kube-api-access-n8ksb\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.701781 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-config-data\") pod \"nova-metadata-0\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.701822 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgmwn\" (UniqueName: \"kubernetes.io/projected/4126f380-1914-4db5-a401-ddb4aaca8bff-kube-api-access-tgmwn\") pod \"nova-metadata-0\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.701861 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc 
kubenswrapper[4958]: I1008 08:14:26.701885 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.701922 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.705376 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.803911 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.803995 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.804032 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8ksb\" (UniqueName: \"kubernetes.io/projected/fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0-kube-api-access-n8ksb\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.804093 4958 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-config-data\") pod \"nova-metadata-0\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.804141 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgmwn\" (UniqueName: \"kubernetes.io/projected/4126f380-1914-4db5-a401-ddb4aaca8bff-kube-api-access-tgmwn\") pod \"nova-metadata-0\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.804186 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.804214 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.804284 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.804315 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.804338 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4126f380-1914-4db5-a401-ddb4aaca8bff-logs\") pod \"nova-metadata-0\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.804803 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4126f380-1914-4db5-a401-ddb4aaca8bff-logs\") pod \"nova-metadata-0\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.809990 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.810340 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.810731 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0\") " 
pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.811493 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-config-data\") pod \"nova-metadata-0\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.813151 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.814866 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.815824 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.822464 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8ksb\" (UniqueName: \"kubernetes.io/projected/fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0-kube-api-access-n8ksb\") pod \"nova-cell1-novncproxy-0\" (UID: \"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0\") " pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.822616 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-tgmwn\" (UniqueName: \"kubernetes.io/projected/4126f380-1914-4db5-a401-ddb4aaca8bff-kube-api-access-tgmwn\") pod \"nova-metadata-0\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " pod="openstack/nova-metadata-0" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.940767 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8w9dp" Oct 08 08:14:26 crc kubenswrapper[4958]: I1008 08:14:26.977892 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.017366 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.110373 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-config-data\") pod \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.110780 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-combined-ca-bundle\") pod \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.110825 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-scripts\") pod \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.110875 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwkkh\" 
(UniqueName: \"kubernetes.io/projected/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-kube-api-access-zwkkh\") pod \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\" (UID: \"1935d212-d2e9-4c81-b5ee-ab05ab45cf51\") " Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.116833 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-scripts" (OuterVolumeSpecName: "scripts") pod "1935d212-d2e9-4c81-b5ee-ab05ab45cf51" (UID: "1935d212-d2e9-4c81-b5ee-ab05ab45cf51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.118183 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-kube-api-access-zwkkh" (OuterVolumeSpecName: "kube-api-access-zwkkh") pod "1935d212-d2e9-4c81-b5ee-ab05ab45cf51" (UID: "1935d212-d2e9-4c81-b5ee-ab05ab45cf51"). InnerVolumeSpecName "kube-api-access-zwkkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.155311 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1935d212-d2e9-4c81-b5ee-ab05ab45cf51" (UID: "1935d212-d2e9-4c81-b5ee-ab05ab45cf51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.160126 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-config-data" (OuterVolumeSpecName: "config-data") pod "1935d212-d2e9-4c81-b5ee-ab05ab45cf51" (UID: "1935d212-d2e9-4c81-b5ee-ab05ab45cf51"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.213879 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.213928 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.213959 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.213973 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwkkh\" (UniqueName: \"kubernetes.io/projected/1935d212-d2e9-4c81-b5ee-ab05ab45cf51-kube-api-access-zwkkh\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.462880 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:27 crc kubenswrapper[4958]: W1008 08:14:27.471541 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4126f380_1914_4db5_a401_ddb4aaca8bff.slice/crio-5bc9e403c488bf10166772ecbb331fd29f514d28b16d6e75fd6192daa6863209 WatchSource:0}: Error finding container 5bc9e403c488bf10166772ecbb331fd29f514d28b16d6e75fd6192daa6863209: Status 404 returned error can't find the container with id 5bc9e403c488bf10166772ecbb331fd29f514d28b16d6e75fd6192daa6863209 Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.527085 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8w9dp" 
event={"ID":"1935d212-d2e9-4c81-b5ee-ab05ab45cf51","Type":"ContainerDied","Data":"f01d920d4869c4c934d577f1db5b28093c03d8d5eb5ba2819ab74e8f1dfb057f"} Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.527503 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f01d920d4869c4c934d577f1db5b28093c03d8d5eb5ba2819ab74e8f1dfb057f" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.527567 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8w9dp" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.540776 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4126f380-1914-4db5-a401-ddb4aaca8bff","Type":"ContainerStarted","Data":"5bc9e403c488bf10166772ecbb331fd29f514d28b16d6e75fd6192daa6863209"} Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.559658 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.597783 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d" path="/var/lib/kubelet/pods/4d2d9cf4-f6f6-41a1-bad7-2466588c8a3d/volumes" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.598404 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64581546-f79e-46c2-8f52-d5634385bc00" path="/var/lib/kubelet/pods/64581546-f79e-46c2-8f52-d5634385bc00/volumes" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.604121 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 08:14:27 crc kubenswrapper[4958]: E1008 08:14:27.604584 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1935d212-d2e9-4c81-b5ee-ab05ab45cf51" containerName="nova-cell1-conductor-db-sync" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.604600 4958 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1935d212-d2e9-4c81-b5ee-ab05ab45cf51" containerName="nova-cell1-conductor-db-sync" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.604807 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1935d212-d2e9-4c81-b5ee-ab05ab45cf51" containerName="nova-cell1-conductor-db-sync" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.607601 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.613021 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.616264 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.632693 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873281d6-2822-4443-9128-6db80f5440b1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"873281d6-2822-4443-9128-6db80f5440b1\") " pod="openstack/nova-cell1-conductor-0" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.632806 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873281d6-2822-4443-9128-6db80f5440b1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"873281d6-2822-4443-9128-6db80f5440b1\") " pod="openstack/nova-cell1-conductor-0" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.632841 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx2cn\" (UniqueName: \"kubernetes.io/projected/873281d6-2822-4443-9128-6db80f5440b1-kube-api-access-sx2cn\") pod \"nova-cell1-conductor-0\" (UID: \"873281d6-2822-4443-9128-6db80f5440b1\") " 
pod="openstack/nova-cell1-conductor-0" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.734711 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873281d6-2822-4443-9128-6db80f5440b1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"873281d6-2822-4443-9128-6db80f5440b1\") " pod="openstack/nova-cell1-conductor-0" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.734754 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx2cn\" (UniqueName: \"kubernetes.io/projected/873281d6-2822-4443-9128-6db80f5440b1-kube-api-access-sx2cn\") pod \"nova-cell1-conductor-0\" (UID: \"873281d6-2822-4443-9128-6db80f5440b1\") " pod="openstack/nova-cell1-conductor-0" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.734849 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873281d6-2822-4443-9128-6db80f5440b1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"873281d6-2822-4443-9128-6db80f5440b1\") " pod="openstack/nova-cell1-conductor-0" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.740979 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873281d6-2822-4443-9128-6db80f5440b1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"873281d6-2822-4443-9128-6db80f5440b1\") " pod="openstack/nova-cell1-conductor-0" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.741546 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873281d6-2822-4443-9128-6db80f5440b1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"873281d6-2822-4443-9128-6db80f5440b1\") " pod="openstack/nova-cell1-conductor-0" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.755042 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx2cn\" (UniqueName: \"kubernetes.io/projected/873281d6-2822-4443-9128-6db80f5440b1-kube-api-access-sx2cn\") pod \"nova-cell1-conductor-0\" (UID: \"873281d6-2822-4443-9128-6db80f5440b1\") " pod="openstack/nova-cell1-conductor-0" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.851571 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:27 crc kubenswrapper[4958]: I1008 08:14:27.944586 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.040531 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-combined-ca-bundle\") pod \"8d5f7719-4d19-4a73-857c-574ba6d31f44\" (UID: \"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.040879 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-scripts\") pod \"8d5f7719-4d19-4a73-857c-574ba6d31f44\" (UID: \"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.041025 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxs8l\" (UniqueName: \"kubernetes.io/projected/8d5f7719-4d19-4a73-857c-574ba6d31f44-kube-api-access-jxs8l\") pod \"8d5f7719-4d19-4a73-857c-574ba6d31f44\" (UID: \"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.041056 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-config-data\") 
pod \"8d5f7719-4d19-4a73-857c-574ba6d31f44\" (UID: \"8d5f7719-4d19-4a73-857c-574ba6d31f44\") " Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.047209 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-scripts" (OuterVolumeSpecName: "scripts") pod "8d5f7719-4d19-4a73-857c-574ba6d31f44" (UID: "8d5f7719-4d19-4a73-857c-574ba6d31f44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.048084 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d5f7719-4d19-4a73-857c-574ba6d31f44-kube-api-access-jxs8l" (OuterVolumeSpecName: "kube-api-access-jxs8l") pod "8d5f7719-4d19-4a73-857c-574ba6d31f44" (UID: "8d5f7719-4d19-4a73-857c-574ba6d31f44"). InnerVolumeSpecName "kube-api-access-jxs8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.084047 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d5f7719-4d19-4a73-857c-574ba6d31f44" (UID: "8d5f7719-4d19-4a73-857c-574ba6d31f44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.087310 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-config-data" (OuterVolumeSpecName: "config-data") pod "8d5f7719-4d19-4a73-857c-574ba6d31f44" (UID: "8d5f7719-4d19-4a73-857c-574ba6d31f44"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.143799 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.143832 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxs8l\" (UniqueName: \"kubernetes.io/projected/8d5f7719-4d19-4a73-857c-574ba6d31f44-kube-api-access-jxs8l\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.143843 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.143853 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d5f7719-4d19-4a73-857c-574ba6d31f44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.392307 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.552173 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"873281d6-2822-4443-9128-6db80f5440b1","Type":"ContainerStarted","Data":"b2ad372ea1469b5d4661083d9efef77d7590b3805197a3ccb7ef9771a349fcc6"} Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.556373 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0","Type":"ContainerStarted","Data":"d0089e5dbdb172e949a1a8ddb27014dafc697ed72200e0b70350a1563b0dbb0e"} Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.556566 4958 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0","Type":"ContainerStarted","Data":"14e6fc606a8ff5daad9d5bc4946f6d41c45526e4441445179012cb0f02b4f321"} Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.561050 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4126f380-1914-4db5-a401-ddb4aaca8bff","Type":"ContainerStarted","Data":"8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d"} Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.561221 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4126f380-1914-4db5-a401-ddb4aaca8bff","Type":"ContainerStarted","Data":"e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a"} Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.582868 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rqcvh" event={"ID":"8d5f7719-4d19-4a73-857c-574ba6d31f44","Type":"ContainerDied","Data":"eb2a0b0eaefdc81178f0e610cfdcf15f49a4fb48ef46640c4ba889e0b179bd12"} Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.582918 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb2a0b0eaefdc81178f0e610cfdcf15f49a4fb48ef46640c4ba889e0b179bd12" Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.583089 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rqcvh" Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.590045 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.590019097 podStartE2EDuration="2.590019097s" podCreationTimestamp="2025-10-08 08:14:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:28.579725748 +0000 UTC m=+6011.709418379" watchObservedRunningTime="2025-10-08 08:14:28.590019097 +0000 UTC m=+6011.719711708" Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.623387 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.623369011 podStartE2EDuration="2.623369011s" podCreationTimestamp="2025-10-08 08:14:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:28.605428595 +0000 UTC m=+6011.735121196" watchObservedRunningTime="2025-10-08 08:14:28.623369011 +0000 UTC m=+6011.753061612" Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.726623 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.727487 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="75a18744-ea11-4ed0-a570-f09eafa6e405" containerName="nova-api-log" containerID="cri-o://01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99" gracePeriod=30 Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.727646 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="75a18744-ea11-4ed0-a570-f09eafa6e405" containerName="nova-api-api" 
containerID="cri-o://789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6" gracePeriod=30 Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.738439 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.738637 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8add678c-078e-4875-9347-bfc1dfbc09b9" containerName="nova-scheduler-scheduler" containerID="cri-o://53c04cbaa3b66ae24b49ba0dfcf77fd02995f8e5d1f1f79e647d1ed8c888e8ec" gracePeriod=30 Oct 08 08:14:28 crc kubenswrapper[4958]: I1008 08:14:28.782136 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.341065 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.385029 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp6t4\" (UniqueName: \"kubernetes.io/projected/75a18744-ea11-4ed0-a570-f09eafa6e405-kube-api-access-pp6t4\") pod \"75a18744-ea11-4ed0-a570-f09eafa6e405\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.385093 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a18744-ea11-4ed0-a570-f09eafa6e405-logs\") pod \"75a18744-ea11-4ed0-a570-f09eafa6e405\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.385117 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a18744-ea11-4ed0-a570-f09eafa6e405-config-data\") pod \"75a18744-ea11-4ed0-a570-f09eafa6e405\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " Oct 08 08:14:29 
crc kubenswrapper[4958]: I1008 08:14:29.385195 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a18744-ea11-4ed0-a570-f09eafa6e405-combined-ca-bundle\") pod \"75a18744-ea11-4ed0-a570-f09eafa6e405\" (UID: \"75a18744-ea11-4ed0-a570-f09eafa6e405\") " Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.385664 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a18744-ea11-4ed0-a570-f09eafa6e405-logs" (OuterVolumeSpecName: "logs") pod "75a18744-ea11-4ed0-a570-f09eafa6e405" (UID: "75a18744-ea11-4ed0-a570-f09eafa6e405"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.386573 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a18744-ea11-4ed0-a570-f09eafa6e405-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.391114 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a18744-ea11-4ed0-a570-f09eafa6e405-kube-api-access-pp6t4" (OuterVolumeSpecName: "kube-api-access-pp6t4") pod "75a18744-ea11-4ed0-a570-f09eafa6e405" (UID: "75a18744-ea11-4ed0-a570-f09eafa6e405"). InnerVolumeSpecName "kube-api-access-pp6t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.413266 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a18744-ea11-4ed0-a570-f09eafa6e405-config-data" (OuterVolumeSpecName: "config-data") pod "75a18744-ea11-4ed0-a570-f09eafa6e405" (UID: "75a18744-ea11-4ed0-a570-f09eafa6e405"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.415852 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a18744-ea11-4ed0-a570-f09eafa6e405-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75a18744-ea11-4ed0-a570-f09eafa6e405" (UID: "75a18744-ea11-4ed0-a570-f09eafa6e405"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.488320 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a18744-ea11-4ed0-a570-f09eafa6e405-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.488357 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp6t4\" (UniqueName: \"kubernetes.io/projected/75a18744-ea11-4ed0-a570-f09eafa6e405-kube-api-access-pp6t4\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.488379 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a18744-ea11-4ed0-a570-f09eafa6e405-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.593185 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"873281d6-2822-4443-9128-6db80f5440b1","Type":"ContainerStarted","Data":"1fa7eab594c1ddecd70837bcb0733788d3612c6549c8917a5986f50902eba99b"} Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.593335 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.594822 4958 generic.go:334] "Generic (PLEG): container finished" podID="75a18744-ea11-4ed0-a570-f09eafa6e405" 
containerID="789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6" exitCode=0 Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.594880 4958 generic.go:334] "Generic (PLEG): container finished" podID="75a18744-ea11-4ed0-a570-f09eafa6e405" containerID="01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99" exitCode=143 Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.594914 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75a18744-ea11-4ed0-a570-f09eafa6e405","Type":"ContainerDied","Data":"789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6"} Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.594940 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75a18744-ea11-4ed0-a570-f09eafa6e405","Type":"ContainerDied","Data":"01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99"} Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.594967 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75a18744-ea11-4ed0-a570-f09eafa6e405","Type":"ContainerDied","Data":"70446acbe14f39dde2488c3bb334d35cc204b0abc3b948ee3debecd5695f36fa"} Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.594992 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.595000 4958 scope.go:117] "RemoveContainer" containerID="789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.618848 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.618827745 podStartE2EDuration="2.618827745s" podCreationTimestamp="2025-10-08 08:14:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:29.615257218 +0000 UTC m=+6012.744949829" watchObservedRunningTime="2025-10-08 08:14:29.618827745 +0000 UTC m=+6012.748520346" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.649030 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.650614 4958 scope.go:117] "RemoveContainer" containerID="01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.658333 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.680040 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 08 08:14:29 crc kubenswrapper[4958]: E1008 08:14:29.680442 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d5f7719-4d19-4a73-857c-574ba6d31f44" containerName="nova-manage" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.680457 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d5f7719-4d19-4a73-857c-574ba6d31f44" containerName="nova-manage" Oct 08 08:14:29 crc kubenswrapper[4958]: E1008 08:14:29.680479 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a18744-ea11-4ed0-a570-f09eafa6e405" containerName="nova-api-api" 
Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.680486 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a18744-ea11-4ed0-a570-f09eafa6e405" containerName="nova-api-api" Oct 08 08:14:29 crc kubenswrapper[4958]: E1008 08:14:29.680502 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a18744-ea11-4ed0-a570-f09eafa6e405" containerName="nova-api-log" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.680508 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a18744-ea11-4ed0-a570-f09eafa6e405" containerName="nova-api-log" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.681148 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d5f7719-4d19-4a73-857c-574ba6d31f44" containerName="nova-manage" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.681170 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a18744-ea11-4ed0-a570-f09eafa6e405" containerName="nova-api-log" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.681188 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a18744-ea11-4ed0-a570-f09eafa6e405" containerName="nova-api-api" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.683059 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.685048 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.698864 4958 scope.go:117] "RemoveContainer" containerID="789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6" Oct 08 08:14:29 crc kubenswrapper[4958]: E1008 08:14:29.706309 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6\": container with ID starting with 789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6 not found: ID does not exist" containerID="789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.706357 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6"} err="failed to get container status \"789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6\": rpc error: code = NotFound desc = could not find container \"789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6\": container with ID starting with 789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6 not found: ID does not exist" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.706386 4958 scope.go:117] "RemoveContainer" containerID="01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.707221 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 08:14:29 crc kubenswrapper[4958]: E1008 08:14:29.708785 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99\": container with ID starting with 01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99 not found: ID does not exist" containerID="01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.708816 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99"} err="failed to get container status \"01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99\": rpc error: code = NotFound desc = could not find container \"01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99\": container with ID starting with 01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99 not found: ID does not exist" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.708831 4958 scope.go:117] "RemoveContainer" containerID="789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.709138 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6"} err="failed to get container status \"789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6\": rpc error: code = NotFound desc = could not find container \"789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6\": container with ID starting with 789e9c59a4b4f3c02fe8475114678f757c9fb54b6d477a029b19a3d2c039c1c6 not found: ID does not exist" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.709162 4958 scope.go:117] "RemoveContainer" containerID="01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.709598 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99"} err="failed to get container status \"01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99\": rpc error: code = NotFound desc = could not find container \"01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99\": container with ID starting with 01901934e0726464c58bfb92112410a85e486b179f9c2d4624ad8a8c32084d99 not found: ID does not exist" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.812399 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gzhr\" (UniqueName: \"kubernetes.io/projected/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-kube-api-access-9gzhr\") pod \"nova-api-0\" (UID: \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " pod="openstack/nova-api-0" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.812744 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-config-data\") pod \"nova-api-0\" (UID: \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " pod="openstack/nova-api-0" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.812857 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " pod="openstack/nova-api-0" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.813227 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-logs\") pod \"nova-api-0\" (UID: \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " pod="openstack/nova-api-0" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 
08:14:29.914418 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-logs\") pod \"nova-api-0\" (UID: \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " pod="openstack/nova-api-0" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.914515 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gzhr\" (UniqueName: \"kubernetes.io/projected/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-kube-api-access-9gzhr\") pod \"nova-api-0\" (UID: \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " pod="openstack/nova-api-0" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.914868 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " pod="openstack/nova-api-0" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.915025 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-logs\") pod \"nova-api-0\" (UID: \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " pod="openstack/nova-api-0" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.914901 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-config-data\") pod \"nova-api-0\" (UID: \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " pod="openstack/nova-api-0" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.920870 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " pod="openstack/nova-api-0" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.921098 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-config-data\") pod \"nova-api-0\" (UID: \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " pod="openstack/nova-api-0" Oct 08 08:14:29 crc kubenswrapper[4958]: I1008 08:14:29.932199 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gzhr\" (UniqueName: \"kubernetes.io/projected/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-kube-api-access-9gzhr\") pod \"nova-api-0\" (UID: \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " pod="openstack/nova-api-0" Oct 08 08:14:30 crc kubenswrapper[4958]: I1008 08:14:30.002796 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 08:14:30 crc kubenswrapper[4958]: I1008 08:14:30.523063 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 08:14:30 crc kubenswrapper[4958]: W1008 08:14:30.530626 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaff83f83_abb3_4a0b_a53f_d85c8c0ce6b7.slice/crio-0b6c46b9892d1ba98b4e65b465b3fc7f07161282b61c078802659a18a958eb9f WatchSource:0}: Error finding container 0b6c46b9892d1ba98b4e65b465b3fc7f07161282b61c078802659a18a958eb9f: Status 404 returned error can't find the container with id 0b6c46b9892d1ba98b4e65b465b3fc7f07161282b61c078802659a18a958eb9f Oct 08 08:14:30 crc kubenswrapper[4958]: I1008 08:14:30.605667 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7","Type":"ContainerStarted","Data":"0b6c46b9892d1ba98b4e65b465b3fc7f07161282b61c078802659a18a958eb9f"} Oct 08 08:14:30 crc kubenswrapper[4958]: I1008 08:14:30.607404 4958 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4126f380-1914-4db5-a401-ddb4aaca8bff" containerName="nova-metadata-log" containerID="cri-o://e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a" gracePeriod=30 Oct 08 08:14:30 crc kubenswrapper[4958]: I1008 08:14:30.607883 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4126f380-1914-4db5-a401-ddb4aaca8bff" containerName="nova-metadata-metadata" containerID="cri-o://8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d" gracePeriod=30 Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.113795 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d96845bf9-55dwk" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.208034 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56ccf77c9-l66bq"] Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.218219 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" podUID="9be2f105-5dd5-432a-82b6-56410f76db3f" containerName="dnsmasq-dns" containerID="cri-o://0409524fce15364fa930df5507fa24368fd4f0ed589ceade6df7b51ae9d9c1c9" gracePeriod=10 Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.241974 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.289642 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-config-data\") pod \"4126f380-1914-4db5-a401-ddb4aaca8bff\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.289697 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-combined-ca-bundle\") pod \"4126f380-1914-4db5-a401-ddb4aaca8bff\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.289731 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4126f380-1914-4db5-a401-ddb4aaca8bff-logs\") pod \"4126f380-1914-4db5-a401-ddb4aaca8bff\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.289775 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgmwn\" (UniqueName: \"kubernetes.io/projected/4126f380-1914-4db5-a401-ddb4aaca8bff-kube-api-access-tgmwn\") pod \"4126f380-1914-4db5-a401-ddb4aaca8bff\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.289832 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-nova-metadata-tls-certs\") pod \"4126f380-1914-4db5-a401-ddb4aaca8bff\" (UID: \"4126f380-1914-4db5-a401-ddb4aaca8bff\") " Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.290647 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4126f380-1914-4db5-a401-ddb4aaca8bff-logs" (OuterVolumeSpecName: "logs") pod "4126f380-1914-4db5-a401-ddb4aaca8bff" (UID: "4126f380-1914-4db5-a401-ddb4aaca8bff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.293762 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4126f380-1914-4db5-a401-ddb4aaca8bff-kube-api-access-tgmwn" (OuterVolumeSpecName: "kube-api-access-tgmwn") pod "4126f380-1914-4db5-a401-ddb4aaca8bff" (UID: "4126f380-1914-4db5-a401-ddb4aaca8bff"). InnerVolumeSpecName "kube-api-access-tgmwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.331116 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4126f380-1914-4db5-a401-ddb4aaca8bff" (UID: "4126f380-1914-4db5-a401-ddb4aaca8bff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.336261 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-config-data" (OuterVolumeSpecName: "config-data") pod "4126f380-1914-4db5-a401-ddb4aaca8bff" (UID: "4126f380-1914-4db5-a401-ddb4aaca8bff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.348127 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4126f380-1914-4db5-a401-ddb4aaca8bff" (UID: "4126f380-1914-4db5-a401-ddb4aaca8bff"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.392182 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.392228 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4126f380-1914-4db5-a401-ddb4aaca8bff-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.392240 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgmwn\" (UniqueName: \"kubernetes.io/projected/4126f380-1914-4db5-a401-ddb4aaca8bff-kube-api-access-tgmwn\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.392250 4958 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.392258 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4126f380-1914-4db5-a401-ddb4aaca8bff-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.609635 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a18744-ea11-4ed0-a570-f09eafa6e405" path="/var/lib/kubelet/pods/75a18744-ea11-4ed0-a570-f09eafa6e405/volumes" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.636298 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.638682 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7","Type":"ContainerStarted","Data":"f67edf6b0c658b235974403b4ee70a3300d6fb8cdb431f924bcc8d5f2fefaf42"} Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.638730 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7","Type":"ContainerStarted","Data":"a0ee252ef7cb7b0143efa329d80f672267bc3f24f5d118ddd2f9ae7d4c7b452c"} Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.647671 4958 generic.go:334] "Generic (PLEG): container finished" podID="4126f380-1914-4db5-a401-ddb4aaca8bff" containerID="8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d" exitCode=0 Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.647708 4958 generic.go:334] "Generic (PLEG): container finished" podID="4126f380-1914-4db5-a401-ddb4aaca8bff" containerID="e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a" exitCode=143 Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.647752 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4126f380-1914-4db5-a401-ddb4aaca8bff","Type":"ContainerDied","Data":"8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d"} Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.647784 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4126f380-1914-4db5-a401-ddb4aaca8bff","Type":"ContainerDied","Data":"e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a"} Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.647797 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"4126f380-1914-4db5-a401-ddb4aaca8bff","Type":"ContainerDied","Data":"5bc9e403c488bf10166772ecbb331fd29f514d28b16d6e75fd6192daa6863209"} Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.647816 4958 scope.go:117] "RemoveContainer" containerID="8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.647942 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.673403 4958 generic.go:334] "Generic (PLEG): container finished" podID="9be2f105-5dd5-432a-82b6-56410f76db3f" containerID="0409524fce15364fa930df5507fa24368fd4f0ed589ceade6df7b51ae9d9c1c9" exitCode=0 Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.673442 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" event={"ID":"9be2f105-5dd5-432a-82b6-56410f76db3f","Type":"ContainerDied","Data":"0409524fce15364fa930df5507fa24368fd4f0ed589ceade6df7b51ae9d9c1c9"} Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.673468 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" event={"ID":"9be2f105-5dd5-432a-82b6-56410f76db3f","Type":"ContainerDied","Data":"f9fc889d98fd8c62405c3fee8c8cc25956e08b29136b58218245e8af926f60c9"} Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.673527 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56ccf77c9-l66bq" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.696491 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-config\") pod \"9be2f105-5dd5-432a-82b6-56410f76db3f\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.696576 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-ovsdbserver-nb\") pod \"9be2f105-5dd5-432a-82b6-56410f76db3f\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.696615 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v4hc\" (UniqueName: \"kubernetes.io/projected/9be2f105-5dd5-432a-82b6-56410f76db3f-kube-api-access-4v4hc\") pod \"9be2f105-5dd5-432a-82b6-56410f76db3f\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.696723 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-dns-svc\") pod \"9be2f105-5dd5-432a-82b6-56410f76db3f\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.696774 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-ovsdbserver-sb\") pod \"9be2f105-5dd5-432a-82b6-56410f76db3f\" (UID: \"9be2f105-5dd5-432a-82b6-56410f76db3f\") " Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.701219 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9be2f105-5dd5-432a-82b6-56410f76db3f-kube-api-access-4v4hc" (OuterVolumeSpecName: "kube-api-access-4v4hc") pod "9be2f105-5dd5-432a-82b6-56410f76db3f" (UID: "9be2f105-5dd5-432a-82b6-56410f76db3f"). InnerVolumeSpecName "kube-api-access-4v4hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.701547 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.701525762 podStartE2EDuration="2.701525762s" podCreationTimestamp="2025-10-08 08:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:31.687242445 +0000 UTC m=+6014.816935056" watchObservedRunningTime="2025-10-08 08:14:31.701525762 +0000 UTC m=+6014.831218363" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.711450 4958 scope.go:117] "RemoveContainer" containerID="e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.716305 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.723159 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.736082 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:31 crc kubenswrapper[4958]: E1008 08:14:31.737662 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4126f380-1914-4db5-a401-ddb4aaca8bff" containerName="nova-metadata-log" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.737684 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4126f380-1914-4db5-a401-ddb4aaca8bff" containerName="nova-metadata-log" Oct 08 08:14:31 crc kubenswrapper[4958]: E1008 08:14:31.737697 4958 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4126f380-1914-4db5-a401-ddb4aaca8bff" containerName="nova-metadata-metadata" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.737703 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4126f380-1914-4db5-a401-ddb4aaca8bff" containerName="nova-metadata-metadata" Oct 08 08:14:31 crc kubenswrapper[4958]: E1008 08:14:31.737720 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be2f105-5dd5-432a-82b6-56410f76db3f" containerName="dnsmasq-dns" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.737726 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be2f105-5dd5-432a-82b6-56410f76db3f" containerName="dnsmasq-dns" Oct 08 08:14:31 crc kubenswrapper[4958]: E1008 08:14:31.737745 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be2f105-5dd5-432a-82b6-56410f76db3f" containerName="init" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.737752 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be2f105-5dd5-432a-82b6-56410f76db3f" containerName="init" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.737908 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4126f380-1914-4db5-a401-ddb4aaca8bff" containerName="nova-metadata-log" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.737925 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be2f105-5dd5-432a-82b6-56410f76db3f" containerName="dnsmasq-dns" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.737940 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4126f380-1914-4db5-a401-ddb4aaca8bff" containerName="nova-metadata-metadata" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.738864 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.742761 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.742841 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.752178 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.760343 4958 scope.go:117] "RemoveContainer" containerID="8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d" Oct 08 08:14:31 crc kubenswrapper[4958]: E1008 08:14:31.761006 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d\": container with ID starting with 8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d not found: ID does not exist" containerID="8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.761062 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d"} err="failed to get container status \"8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d\": rpc error: code = NotFound desc = could not find container \"8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d\": container with ID starting with 8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d not found: ID does not exist" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.761090 4958 scope.go:117] "RemoveContainer" containerID="e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a" Oct 08 08:14:31 crc 
kubenswrapper[4958]: E1008 08:14:31.761846 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a\": container with ID starting with e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a not found: ID does not exist" containerID="e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.762020 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a"} err="failed to get container status \"e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a\": rpc error: code = NotFound desc = could not find container \"e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a\": container with ID starting with e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a not found: ID does not exist" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.762155 4958 scope.go:117] "RemoveContainer" containerID="8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.762646 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d"} err="failed to get container status \"8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d\": rpc error: code = NotFound desc = could not find container \"8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d\": container with ID starting with 8a8e9552ae7b1a34f99fb30ed4f1ae3aa989ed46132603c128b846b23800d46d not found: ID does not exist" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.762674 4958 scope.go:117] "RemoveContainer" containerID="e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a" Oct 08 
08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.763243 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a"} err="failed to get container status \"e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a\": rpc error: code = NotFound desc = could not find container \"e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a\": container with ID starting with e1f1273b905e0a9c2f2bc199bfcd2414ffc5e37aac220d397dad555a16b7ca7a not found: ID does not exist" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.763261 4958 scope.go:117] "RemoveContainer" containerID="0409524fce15364fa930df5507fa24368fd4f0ed589ceade6df7b51ae9d9c1c9" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.792017 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9be2f105-5dd5-432a-82b6-56410f76db3f" (UID: "9be2f105-5dd5-432a-82b6-56410f76db3f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.803518 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.803586 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.803638 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqkcb\" (UniqueName: \"kubernetes.io/projected/839f409b-3d12-4c87-ba33-f20a89f49c3a-kube-api-access-xqkcb\") pod \"nova-metadata-0\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.803702 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/839f409b-3d12-4c87-ba33-f20a89f49c3a-logs\") pod \"nova-metadata-0\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.804135 4958 scope.go:117] "RemoveContainer" containerID="43f7c04be995dc89a8b69e9c595c5ecdf64365b97500004a04154c63fb92fcf2" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.818624 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-config-data\") pod \"nova-metadata-0\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.818809 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.818942 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v4hc\" (UniqueName: \"kubernetes.io/projected/9be2f105-5dd5-432a-82b6-56410f76db3f-kube-api-access-4v4hc\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.831252 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9be2f105-5dd5-432a-82b6-56410f76db3f" (UID: "9be2f105-5dd5-432a-82b6-56410f76db3f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.855420 4958 scope.go:117] "RemoveContainer" containerID="0409524fce15364fa930df5507fa24368fd4f0ed589ceade6df7b51ae9d9c1c9" Oct 08 08:14:31 crc kubenswrapper[4958]: E1008 08:14:31.866257 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0409524fce15364fa930df5507fa24368fd4f0ed589ceade6df7b51ae9d9c1c9\": container with ID starting with 0409524fce15364fa930df5507fa24368fd4f0ed589ceade6df7b51ae9d9c1c9 not found: ID does not exist" containerID="0409524fce15364fa930df5507fa24368fd4f0ed589ceade6df7b51ae9d9c1c9" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.866316 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0409524fce15364fa930df5507fa24368fd4f0ed589ceade6df7b51ae9d9c1c9"} err="failed to get container status \"0409524fce15364fa930df5507fa24368fd4f0ed589ceade6df7b51ae9d9c1c9\": rpc error: code = NotFound desc = could not find container \"0409524fce15364fa930df5507fa24368fd4f0ed589ceade6df7b51ae9d9c1c9\": container with ID starting with 0409524fce15364fa930df5507fa24368fd4f0ed589ceade6df7b51ae9d9c1c9 not found: ID does not exist" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.866347 4958 scope.go:117] "RemoveContainer" containerID="43f7c04be995dc89a8b69e9c595c5ecdf64365b97500004a04154c63fb92fcf2" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.866694 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-config" (OuterVolumeSpecName: "config") pod "9be2f105-5dd5-432a-82b6-56410f76db3f" (UID: "9be2f105-5dd5-432a-82b6-56410f76db3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:14:31 crc kubenswrapper[4958]: E1008 08:14:31.869238 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43f7c04be995dc89a8b69e9c595c5ecdf64365b97500004a04154c63fb92fcf2\": container with ID starting with 43f7c04be995dc89a8b69e9c595c5ecdf64365b97500004a04154c63fb92fcf2 not found: ID does not exist" containerID="43f7c04be995dc89a8b69e9c595c5ecdf64365b97500004a04154c63fb92fcf2" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.869387 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43f7c04be995dc89a8b69e9c595c5ecdf64365b97500004a04154c63fb92fcf2"} err="failed to get container status \"43f7c04be995dc89a8b69e9c595c5ecdf64365b97500004a04154c63fb92fcf2\": rpc error: code = NotFound desc = could not find container \"43f7c04be995dc89a8b69e9c595c5ecdf64365b97500004a04154c63fb92fcf2\": container with ID starting with 43f7c04be995dc89a8b69e9c595c5ecdf64365b97500004a04154c63fb92fcf2 not found: ID does not exist" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.875282 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9be2f105-5dd5-432a-82b6-56410f76db3f" (UID: "9be2f105-5dd5-432a-82b6-56410f76db3f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.922032 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.922803 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.922924 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqkcb\" (UniqueName: \"kubernetes.io/projected/839f409b-3d12-4c87-ba33-f20a89f49c3a-kube-api-access-xqkcb\") pod \"nova-metadata-0\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.923116 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/839f409b-3d12-4c87-ba33-f20a89f49c3a-logs\") pod \"nova-metadata-0\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.923226 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-config-data\") pod \"nova-metadata-0\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.923341 4958 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.923410 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.923477 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9be2f105-5dd5-432a-82b6-56410f76db3f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.923696 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/839f409b-3d12-4c87-ba33-f20a89f49c3a-logs\") pod \"nova-metadata-0\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.934703 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.935073 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.935363 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-config-data\") pod \"nova-metadata-0\" (UID: 
\"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " pod="openstack/nova-metadata-0" Oct 08 08:14:31 crc kubenswrapper[4958]: I1008 08:14:31.965532 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqkcb\" (UniqueName: \"kubernetes.io/projected/839f409b-3d12-4c87-ba33-f20a89f49c3a-kube-api-access-xqkcb\") pod \"nova-metadata-0\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " pod="openstack/nova-metadata-0" Oct 08 08:14:32 crc kubenswrapper[4958]: I1008 08:14:32.008025 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56ccf77c9-l66bq"] Oct 08 08:14:32 crc kubenswrapper[4958]: I1008 08:14:32.015284 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56ccf77c9-l66bq"] Oct 08 08:14:32 crc kubenswrapper[4958]: I1008 08:14:32.018696 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:32 crc kubenswrapper[4958]: I1008 08:14:32.090560 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 08:14:32 crc kubenswrapper[4958]: I1008 08:14:32.551499 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:32 crc kubenswrapper[4958]: I1008 08:14:32.686887 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"839f409b-3d12-4c87-ba33-f20a89f49c3a","Type":"ContainerStarted","Data":"e6be5f97921eeb002b7f3b73eb351140c68e7b6e6e1721ea4a16868d024936ba"} Oct 08 08:14:33 crc kubenswrapper[4958]: I1008 08:14:33.596414 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4126f380-1914-4db5-a401-ddb4aaca8bff" path="/var/lib/kubelet/pods/4126f380-1914-4db5-a401-ddb4aaca8bff/volumes" Oct 08 08:14:33 crc kubenswrapper[4958]: I1008 08:14:33.598256 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be2f105-5dd5-432a-82b6-56410f76db3f" path="/var/lib/kubelet/pods/9be2f105-5dd5-432a-82b6-56410f76db3f/volumes" Oct 08 08:14:33 crc kubenswrapper[4958]: I1008 08:14:33.707650 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"839f409b-3d12-4c87-ba33-f20a89f49c3a","Type":"ContainerStarted","Data":"1934a33b5a9363e049fe30217b34a55556496ee76969cf755a6c80746ed376c6"} Oct 08 08:14:33 crc kubenswrapper[4958]: I1008 08:14:33.708157 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"839f409b-3d12-4c87-ba33-f20a89f49c3a","Type":"ContainerStarted","Data":"0f9ca7105f806435e8392150c91031888c59604da2c3639dc39a2d1b889eb7ed"} Oct 08 08:14:33 crc kubenswrapper[4958]: I1008 08:14:33.750366 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.7503373399999997 podStartE2EDuration="2.75033734s" podCreationTimestamp="2025-10-08 08:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:33.742842566 +0000 UTC m=+6016.872535237" watchObservedRunningTime="2025-10-08 08:14:33.75033734 +0000 UTC m=+6016.880029981" Oct 08 08:14:36 crc kubenswrapper[4958]: I1008 08:14:36.845386 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:14:36 crc kubenswrapper[4958]: I1008 08:14:36.845758 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:14:36 crc kubenswrapper[4958]: I1008 08:14:36.845825 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 08:14:36 crc kubenswrapper[4958]: I1008 08:14:36.846782 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 08:14:36 crc kubenswrapper[4958]: I1008 08:14:36.846890 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" gracePeriod=600 Oct 08 08:14:36 crc 
kubenswrapper[4958]: E1008 08:14:36.984064 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:14:37 crc kubenswrapper[4958]: I1008 08:14:37.020015 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:37 crc kubenswrapper[4958]: I1008 08:14:37.050871 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:37 crc kubenswrapper[4958]: I1008 08:14:37.082086 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-gl9jm"] Oct 08 08:14:37 crc kubenswrapper[4958]: I1008 08:14:37.091126 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 08:14:37 crc kubenswrapper[4958]: I1008 08:14:37.091185 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 08:14:37 crc kubenswrapper[4958]: I1008 08:14:37.091725 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-gl9jm"] Oct 08 08:14:37 crc kubenswrapper[4958]: I1008 08:14:37.597529 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="480ac42e-df8d-4be0-bbe4-70e5195aa5cd" path="/var/lib/kubelet/pods/480ac42e-df8d-4be0-bbe4-70e5195aa5cd/volumes" Oct 08 08:14:37 crc kubenswrapper[4958]: I1008 08:14:37.748291 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" exitCode=0 Oct 08 08:14:37 
crc kubenswrapper[4958]: I1008 08:14:37.748354 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f"} Oct 08 08:14:37 crc kubenswrapper[4958]: I1008 08:14:37.748412 4958 scope.go:117] "RemoveContainer" containerID="99e2aedb896ded61b3f3ce708c02241cc82327087174ed41010302ce866d005a" Oct 08 08:14:37 crc kubenswrapper[4958]: I1008 08:14:37.749506 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:14:37 crc kubenswrapper[4958]: E1008 08:14:37.749881 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:14:37 crc kubenswrapper[4958]: I1008 08:14:37.765898 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 08 08:14:37 crc kubenswrapper[4958]: I1008 08:14:37.978516 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.450975 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qqrbt"] Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.452125 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.459809 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.460335 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.471116 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qqrbt"] Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.559287 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gggjl\" (UniqueName: \"kubernetes.io/projected/83ea6282-163f-464c-9371-4ed478535967-kube-api-access-gggjl\") pod \"nova-cell1-cell-mapping-qqrbt\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.559484 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-config-data\") pod \"nova-cell1-cell-mapping-qqrbt\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.559745 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qqrbt\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.559824 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-scripts\") pod \"nova-cell1-cell-mapping-qqrbt\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.661675 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qqrbt\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.661750 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-scripts\") pod \"nova-cell1-cell-mapping-qqrbt\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.661771 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gggjl\" (UniqueName: \"kubernetes.io/projected/83ea6282-163f-464c-9371-4ed478535967-kube-api-access-gggjl\") pod \"nova-cell1-cell-mapping-qqrbt\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.661847 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-config-data\") pod \"nova-cell1-cell-mapping-qqrbt\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.672848 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qqrbt\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.684597 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-scripts\") pod \"nova-cell1-cell-mapping-qqrbt\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.684673 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-config-data\") pod \"nova-cell1-cell-mapping-qqrbt\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.693471 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gggjl\" (UniqueName: \"kubernetes.io/projected/83ea6282-163f-464c-9371-4ed478535967-kube-api-access-gggjl\") pod \"nova-cell1-cell-mapping-qqrbt\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:38 crc kubenswrapper[4958]: I1008 08:14:38.772127 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:39 crc kubenswrapper[4958]: I1008 08:14:39.265114 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qqrbt"] Oct 08 08:14:39 crc kubenswrapper[4958]: W1008 08:14:39.270013 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83ea6282_163f_464c_9371_4ed478535967.slice/crio-5c8caf196f2168ea3b12620f14b4d98ea7bed6cf25dd036c9fb615d90e5af2c1 WatchSource:0}: Error finding container 5c8caf196f2168ea3b12620f14b4d98ea7bed6cf25dd036c9fb615d90e5af2c1: Status 404 returned error can't find the container with id 5c8caf196f2168ea3b12620f14b4d98ea7bed6cf25dd036c9fb615d90e5af2c1 Oct 08 08:14:39 crc kubenswrapper[4958]: I1008 08:14:39.771039 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qqrbt" event={"ID":"83ea6282-163f-464c-9371-4ed478535967","Type":"ContainerStarted","Data":"9b7e97aa01d8c1a92c538a6c98dfaf8ff151b4c1e66f278e23ffd09849b72a32"} Oct 08 08:14:39 crc kubenswrapper[4958]: I1008 08:14:39.771400 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qqrbt" event={"ID":"83ea6282-163f-464c-9371-4ed478535967","Type":"ContainerStarted","Data":"5c8caf196f2168ea3b12620f14b4d98ea7bed6cf25dd036c9fb615d90e5af2c1"} Oct 08 08:14:39 crc kubenswrapper[4958]: I1008 08:14:39.796377 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-qqrbt" podStartSLOduration=1.796353893 podStartE2EDuration="1.796353893s" podCreationTimestamp="2025-10-08 08:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:39.792049406 +0000 UTC m=+6022.921742047" watchObservedRunningTime="2025-10-08 08:14:39.796353893 +0000 UTC m=+6022.926046534" Oct 08 08:14:39 crc 
kubenswrapper[4958]: I1008 08:14:39.796738 4958 scope.go:117] "RemoveContainer" containerID="2c08669a62195090d46480383589109218076d0c333a3dd4db91e66f6cfffdee" Oct 08 08:14:40 crc kubenswrapper[4958]: I1008 08:14:40.003512 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 08:14:40 crc kubenswrapper[4958]: I1008 08:14:40.003947 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 08:14:41 crc kubenswrapper[4958]: I1008 08:14:41.044536 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.92:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 08:14:41 crc kubenswrapper[4958]: I1008 08:14:41.044574 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.92:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 08:14:42 crc kubenswrapper[4958]: I1008 08:14:42.091656 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 08 08:14:42 crc kubenswrapper[4958]: I1008 08:14:42.091923 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 08 08:14:43 crc kubenswrapper[4958]: I1008 08:14:43.107478 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="839f409b-3d12-4c87-ba33-f20a89f49c3a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.93:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 08:14:43 crc kubenswrapper[4958]: I1008 08:14:43.107572 4958 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="839f409b-3d12-4c87-ba33-f20a89f49c3a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.93:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 08:14:44 crc kubenswrapper[4958]: I1008 08:14:44.828558 4958 generic.go:334] "Generic (PLEG): container finished" podID="83ea6282-163f-464c-9371-4ed478535967" containerID="9b7e97aa01d8c1a92c538a6c98dfaf8ff151b4c1e66f278e23ffd09849b72a32" exitCode=0 Oct 08 08:14:44 crc kubenswrapper[4958]: I1008 08:14:44.828616 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qqrbt" event={"ID":"83ea6282-163f-464c-9371-4ed478535967","Type":"ContainerDied","Data":"9b7e97aa01d8c1a92c538a6c98dfaf8ff151b4c1e66f278e23ffd09849b72a32"} Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.264361 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.341069 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-combined-ca-bundle\") pod \"83ea6282-163f-464c-9371-4ed478535967\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.341141 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-config-data\") pod \"83ea6282-163f-464c-9371-4ed478535967\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.341198 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-scripts\") pod \"83ea6282-163f-464c-9371-4ed478535967\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.341851 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gggjl\" (UniqueName: \"kubernetes.io/projected/83ea6282-163f-464c-9371-4ed478535967-kube-api-access-gggjl\") pod \"83ea6282-163f-464c-9371-4ed478535967\" (UID: \"83ea6282-163f-464c-9371-4ed478535967\") " Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.346919 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ea6282-163f-464c-9371-4ed478535967-kube-api-access-gggjl" (OuterVolumeSpecName: "kube-api-access-gggjl") pod "83ea6282-163f-464c-9371-4ed478535967" (UID: "83ea6282-163f-464c-9371-4ed478535967"). InnerVolumeSpecName "kube-api-access-gggjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.347391 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-scripts" (OuterVolumeSpecName: "scripts") pod "83ea6282-163f-464c-9371-4ed478535967" (UID: "83ea6282-163f-464c-9371-4ed478535967"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.381398 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83ea6282-163f-464c-9371-4ed478535967" (UID: "83ea6282-163f-464c-9371-4ed478535967"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.385894 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-config-data" (OuterVolumeSpecName: "config-data") pod "83ea6282-163f-464c-9371-4ed478535967" (UID: "83ea6282-163f-464c-9371-4ed478535967"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.444666 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.444707 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.444721 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gggjl\" (UniqueName: \"kubernetes.io/projected/83ea6282-163f-464c-9371-4ed478535967-kube-api-access-gggjl\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.444734 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ea6282-163f-464c-9371-4ed478535967-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.853303 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qqrbt" event={"ID":"83ea6282-163f-464c-9371-4ed478535967","Type":"ContainerDied","Data":"5c8caf196f2168ea3b12620f14b4d98ea7bed6cf25dd036c9fb615d90e5af2c1"} Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.853731 4958 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="5c8caf196f2168ea3b12620f14b4d98ea7bed6cf25dd036c9fb615d90e5af2c1" Oct 08 08:14:46 crc kubenswrapper[4958]: I1008 08:14:46.853379 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qqrbt" Oct 08 08:14:47 crc kubenswrapper[4958]: I1008 08:14:47.049282 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-92d9-account-create-h22m8"] Oct 08 08:14:47 crc kubenswrapper[4958]: I1008 08:14:47.064365 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-92d9-account-create-h22m8"] Oct 08 08:14:47 crc kubenswrapper[4958]: I1008 08:14:47.082880 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:47 crc kubenswrapper[4958]: I1008 08:14:47.083149 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="839f409b-3d12-4c87-ba33-f20a89f49c3a" containerName="nova-metadata-log" containerID="cri-o://0f9ca7105f806435e8392150c91031888c59604da2c3639dc39a2d1b889eb7ed" gracePeriod=30 Oct 08 08:14:47 crc kubenswrapper[4958]: I1008 08:14:47.083554 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="839f409b-3d12-4c87-ba33-f20a89f49c3a" containerName="nova-metadata-metadata" containerID="cri-o://1934a33b5a9363e049fe30217b34a55556496ee76969cf755a6c80746ed376c6" gracePeriod=30 Oct 08 08:14:47 crc kubenswrapper[4958]: I1008 08:14:47.090804 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 08:14:47 crc kubenswrapper[4958]: I1008 08:14:47.090997 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" containerName="nova-api-log" containerID="cri-o://a0ee252ef7cb7b0143efa329d80f672267bc3f24f5d118ddd2f9ae7d4c7b452c" gracePeriod=30 Oct 08 08:14:47 crc kubenswrapper[4958]: 
I1008 08:14:47.091257 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" containerName="nova-api-api" containerID="cri-o://f67edf6b0c658b235974403b4ee70a3300d6fb8cdb431f924bcc8d5f2fefaf42" gracePeriod=30 Oct 08 08:14:47 crc kubenswrapper[4958]: I1008 08:14:47.587756 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0bfc2f8-79d0-4cba-bebd-caa291c9aa44" path="/var/lib/kubelet/pods/b0bfc2f8-79d0-4cba-bebd-caa291c9aa44/volumes" Oct 08 08:14:47 crc kubenswrapper[4958]: I1008 08:14:47.864633 4958 generic.go:334] "Generic (PLEG): container finished" podID="839f409b-3d12-4c87-ba33-f20a89f49c3a" containerID="0f9ca7105f806435e8392150c91031888c59604da2c3639dc39a2d1b889eb7ed" exitCode=143 Oct 08 08:14:47 crc kubenswrapper[4958]: I1008 08:14:47.864742 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"839f409b-3d12-4c87-ba33-f20a89f49c3a","Type":"ContainerDied","Data":"0f9ca7105f806435e8392150c91031888c59604da2c3639dc39a2d1b889eb7ed"} Oct 08 08:14:47 crc kubenswrapper[4958]: I1008 08:14:47.866725 4958 generic.go:334] "Generic (PLEG): container finished" podID="aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" containerID="a0ee252ef7cb7b0143efa329d80f672267bc3f24f5d118ddd2f9ae7d4c7b452c" exitCode=143 Oct 08 08:14:47 crc kubenswrapper[4958]: I1008 08:14:47.866761 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7","Type":"ContainerDied","Data":"a0ee252ef7cb7b0143efa329d80f672267bc3f24f5d118ddd2f9ae7d4c7b452c"} Oct 08 08:14:48 crc kubenswrapper[4958]: I1008 08:14:48.576556 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:14:48 crc kubenswrapper[4958]: E1008 08:14:48.577231 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.781374 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.919742 4958 generic.go:334] "Generic (PLEG): container finished" podID="839f409b-3d12-4c87-ba33-f20a89f49c3a" containerID="1934a33b5a9363e049fe30217b34a55556496ee76969cf755a6c80746ed376c6" exitCode=0 Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.919801 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.919803 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"839f409b-3d12-4c87-ba33-f20a89f49c3a","Type":"ContainerDied","Data":"1934a33b5a9363e049fe30217b34a55556496ee76969cf755a6c80746ed376c6"} Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.920132 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"839f409b-3d12-4c87-ba33-f20a89f49c3a","Type":"ContainerDied","Data":"e6be5f97921eeb002b7f3b73eb351140c68e7b6e6e1721ea4a16868d024936ba"} Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.920164 4958 scope.go:117] "RemoveContainer" containerID="1934a33b5a9363e049fe30217b34a55556496ee76969cf755a6c80746ed376c6" Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.941535 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-nova-metadata-tls-certs\") 
pod \"839f409b-3d12-4c87-ba33-f20a89f49c3a\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.941617 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqkcb\" (UniqueName: \"kubernetes.io/projected/839f409b-3d12-4c87-ba33-f20a89f49c3a-kube-api-access-xqkcb\") pod \"839f409b-3d12-4c87-ba33-f20a89f49c3a\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.941650 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-config-data\") pod \"839f409b-3d12-4c87-ba33-f20a89f49c3a\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.941688 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/839f409b-3d12-4c87-ba33-f20a89f49c3a-logs\") pod \"839f409b-3d12-4c87-ba33-f20a89f49c3a\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.941796 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-combined-ca-bundle\") pod \"839f409b-3d12-4c87-ba33-f20a89f49c3a\" (UID: \"839f409b-3d12-4c87-ba33-f20a89f49c3a\") " Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.943318 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/839f409b-3d12-4c87-ba33-f20a89f49c3a-logs" (OuterVolumeSpecName: "logs") pod "839f409b-3d12-4c87-ba33-f20a89f49c3a" (UID: "839f409b-3d12-4c87-ba33-f20a89f49c3a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.948258 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/839f409b-3d12-4c87-ba33-f20a89f49c3a-kube-api-access-xqkcb" (OuterVolumeSpecName: "kube-api-access-xqkcb") pod "839f409b-3d12-4c87-ba33-f20a89f49c3a" (UID: "839f409b-3d12-4c87-ba33-f20a89f49c3a"). InnerVolumeSpecName "kube-api-access-xqkcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.969489 4958 scope.go:117] "RemoveContainer" containerID="0f9ca7105f806435e8392150c91031888c59604da2c3639dc39a2d1b889eb7ed" Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.996773 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-config-data" (OuterVolumeSpecName: "config-data") pod "839f409b-3d12-4c87-ba33-f20a89f49c3a" (UID: "839f409b-3d12-4c87-ba33-f20a89f49c3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:50 crc kubenswrapper[4958]: I1008 08:14:50.998137 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "839f409b-3d12-4c87-ba33-f20a89f49c3a" (UID: "839f409b-3d12-4c87-ba33-f20a89f49c3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.021757 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "839f409b-3d12-4c87-ba33-f20a89f49c3a" (UID: "839f409b-3d12-4c87-ba33-f20a89f49c3a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.043832 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqkcb\" (UniqueName: \"kubernetes.io/projected/839f409b-3d12-4c87-ba33-f20a89f49c3a-kube-api-access-xqkcb\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.043863 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.043873 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/839f409b-3d12-4c87-ba33-f20a89f49c3a-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.043888 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.043896 4958 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/839f409b-3d12-4c87-ba33-f20a89f49c3a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.076879 4958 scope.go:117] "RemoveContainer" containerID="1934a33b5a9363e049fe30217b34a55556496ee76969cf755a6c80746ed376c6" Oct 08 08:14:51 crc kubenswrapper[4958]: E1008 08:14:51.077582 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1934a33b5a9363e049fe30217b34a55556496ee76969cf755a6c80746ed376c6\": container with ID starting with 1934a33b5a9363e049fe30217b34a55556496ee76969cf755a6c80746ed376c6 not found: ID does not exist" 
containerID="1934a33b5a9363e049fe30217b34a55556496ee76969cf755a6c80746ed376c6" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.077623 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1934a33b5a9363e049fe30217b34a55556496ee76969cf755a6c80746ed376c6"} err="failed to get container status \"1934a33b5a9363e049fe30217b34a55556496ee76969cf755a6c80746ed376c6\": rpc error: code = NotFound desc = could not find container \"1934a33b5a9363e049fe30217b34a55556496ee76969cf755a6c80746ed376c6\": container with ID starting with 1934a33b5a9363e049fe30217b34a55556496ee76969cf755a6c80746ed376c6 not found: ID does not exist" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.077652 4958 scope.go:117] "RemoveContainer" containerID="0f9ca7105f806435e8392150c91031888c59604da2c3639dc39a2d1b889eb7ed" Oct 08 08:14:51 crc kubenswrapper[4958]: E1008 08:14:51.078029 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f9ca7105f806435e8392150c91031888c59604da2c3639dc39a2d1b889eb7ed\": container with ID starting with 0f9ca7105f806435e8392150c91031888c59604da2c3639dc39a2d1b889eb7ed not found: ID does not exist" containerID="0f9ca7105f806435e8392150c91031888c59604da2c3639dc39a2d1b889eb7ed" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.078060 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f9ca7105f806435e8392150c91031888c59604da2c3639dc39a2d1b889eb7ed"} err="failed to get container status \"0f9ca7105f806435e8392150c91031888c59604da2c3639dc39a2d1b889eb7ed\": rpc error: code = NotFound desc = could not find container \"0f9ca7105f806435e8392150c91031888c59604da2c3639dc39a2d1b889eb7ed\": container with ID starting with 0f9ca7105f806435e8392150c91031888c59604da2c3639dc39a2d1b889eb7ed not found: ID does not exist" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.250223 4958 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.257069 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.283076 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:51 crc kubenswrapper[4958]: E1008 08:14:51.283610 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ea6282-163f-464c-9371-4ed478535967" containerName="nova-manage" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.283640 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ea6282-163f-464c-9371-4ed478535967" containerName="nova-manage" Oct 08 08:14:51 crc kubenswrapper[4958]: E1008 08:14:51.283666 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="839f409b-3d12-4c87-ba33-f20a89f49c3a" containerName="nova-metadata-log" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.283681 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="839f409b-3d12-4c87-ba33-f20a89f49c3a" containerName="nova-metadata-log" Oct 08 08:14:51 crc kubenswrapper[4958]: E1008 08:14:51.283715 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="839f409b-3d12-4c87-ba33-f20a89f49c3a" containerName="nova-metadata-metadata" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.283727 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="839f409b-3d12-4c87-ba33-f20a89f49c3a" containerName="nova-metadata-metadata" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.284074 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ea6282-163f-464c-9371-4ed478535967" containerName="nova-manage" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.284114 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="839f409b-3d12-4c87-ba33-f20a89f49c3a" containerName="nova-metadata-metadata" Oct 08 08:14:51 crc 
kubenswrapper[4958]: I1008 08:14:51.284135 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="839f409b-3d12-4c87-ba33-f20a89f49c3a" containerName="nova-metadata-log" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.285793 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.288889 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.289758 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.293782 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.349208 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcrcm\" (UniqueName: \"kubernetes.io/projected/65f973da-e0ee-44b0-8a30-4ae4540d1642-kube-api-access-lcrcm\") pod \"nova-metadata-0\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.349275 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-config-data\") pod \"nova-metadata-0\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.349559 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " 
pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.349821 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.349863 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65f973da-e0ee-44b0-8a30-4ae4540d1642-logs\") pod \"nova-metadata-0\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.451970 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.452048 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65f973da-e0ee-44b0-8a30-4ae4540d1642-logs\") pod \"nova-metadata-0\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.452156 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcrcm\" (UniqueName: \"kubernetes.io/projected/65f973da-e0ee-44b0-8a30-4ae4540d1642-kube-api-access-lcrcm\") pod \"nova-metadata-0\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.452225 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-config-data\") pod \"nova-metadata-0\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.452376 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.455133 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65f973da-e0ee-44b0-8a30-4ae4540d1642-logs\") pod \"nova-metadata-0\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.465227 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-config-data\") pod \"nova-metadata-0\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.465315 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.465403 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " 
pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.474366 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcrcm\" (UniqueName: \"kubernetes.io/projected/65f973da-e0ee-44b0-8a30-4ae4540d1642-kube-api-access-lcrcm\") pod \"nova-metadata-0\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " pod="openstack/nova-metadata-0" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.591647 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="839f409b-3d12-4c87-ba33-f20a89f49c3a" path="/var/lib/kubelet/pods/839f409b-3d12-4c87-ba33-f20a89f49c3a/volumes" Oct 08 08:14:51 crc kubenswrapper[4958]: I1008 08:14:51.660859 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 08:14:52 crc kubenswrapper[4958]: I1008 08:14:52.193901 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 08:14:52 crc kubenswrapper[4958]: I1008 08:14:52.945850 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65f973da-e0ee-44b0-8a30-4ae4540d1642","Type":"ContainerStarted","Data":"c8fb2047104a4b75db431fb5cc9924b0d5720b75915ef0ac326799e5e137b4b6"} Oct 08 08:14:52 crc kubenswrapper[4958]: I1008 08:14:52.946174 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65f973da-e0ee-44b0-8a30-4ae4540d1642","Type":"ContainerStarted","Data":"fcb570eb52f626f2f6b6bf7efa28480231191670e8a574f4e187eb8d05973309"} Oct 08 08:14:52 crc kubenswrapper[4958]: I1008 08:14:52.946188 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65f973da-e0ee-44b0-8a30-4ae4540d1642","Type":"ContainerStarted","Data":"9e3f9e4b4bfb82f18ea61b080bbe59ec12f1c1e49071a76ad8ff0563bd427922"} Oct 08 08:14:52 crc kubenswrapper[4958]: I1008 08:14:52.979603 4958 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.979576828 podStartE2EDuration="1.979576828s" podCreationTimestamp="2025-10-08 08:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:14:52.970561583 +0000 UTC m=+6036.100254224" watchObservedRunningTime="2025-10-08 08:14:52.979576828 +0000 UTC m=+6036.109269449" Oct 08 08:14:53 crc kubenswrapper[4958]: I1008 08:14:53.033634 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-t8f9x"] Oct 08 08:14:53 crc kubenswrapper[4958]: I1008 08:14:53.043689 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-t8f9x"] Oct 08 08:14:53 crc kubenswrapper[4958]: I1008 08:14:53.596084 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13d8b431-762b-4977-b54d-04c9fd7cc9e4" path="/var/lib/kubelet/pods/13d8b431-762b-4977-b54d-04c9fd7cc9e4/volumes" Oct 08 08:14:56 crc kubenswrapper[4958]: I1008 08:14:56.661206 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 08:14:56 crc kubenswrapper[4958]: I1008 08:14:56.661625 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 08:14:59 crc kubenswrapper[4958]: I1008 08:14:59.026331 4958 generic.go:334] "Generic (PLEG): container finished" podID="8add678c-078e-4875-9347-bfc1dfbc09b9" containerID="53c04cbaa3b66ae24b49ba0dfcf77fd02995f8e5d1f1f79e647d1ed8c888e8ec" exitCode=137 Oct 08 08:14:59 crc kubenswrapper[4958]: I1008 08:14:59.026932 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8add678c-078e-4875-9347-bfc1dfbc09b9","Type":"ContainerDied","Data":"53c04cbaa3b66ae24b49ba0dfcf77fd02995f8e5d1f1f79e647d1ed8c888e8ec"} Oct 08 08:14:59 crc kubenswrapper[4958]: I1008 08:14:59.144776 4958 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 08:14:59 crc kubenswrapper[4958]: I1008 08:14:59.256194 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8add678c-078e-4875-9347-bfc1dfbc09b9-combined-ca-bundle\") pod \"8add678c-078e-4875-9347-bfc1dfbc09b9\" (UID: \"8add678c-078e-4875-9347-bfc1dfbc09b9\") " Oct 08 08:14:59 crc kubenswrapper[4958]: I1008 08:14:59.256399 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbbzl\" (UniqueName: \"kubernetes.io/projected/8add678c-078e-4875-9347-bfc1dfbc09b9-kube-api-access-mbbzl\") pod \"8add678c-078e-4875-9347-bfc1dfbc09b9\" (UID: \"8add678c-078e-4875-9347-bfc1dfbc09b9\") " Oct 08 08:14:59 crc kubenswrapper[4958]: I1008 08:14:59.256527 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8add678c-078e-4875-9347-bfc1dfbc09b9-config-data\") pod \"8add678c-078e-4875-9347-bfc1dfbc09b9\" (UID: \"8add678c-078e-4875-9347-bfc1dfbc09b9\") " Oct 08 08:14:59 crc kubenswrapper[4958]: I1008 08:14:59.264168 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8add678c-078e-4875-9347-bfc1dfbc09b9-kube-api-access-mbbzl" (OuterVolumeSpecName: "kube-api-access-mbbzl") pod "8add678c-078e-4875-9347-bfc1dfbc09b9" (UID: "8add678c-078e-4875-9347-bfc1dfbc09b9"). InnerVolumeSpecName "kube-api-access-mbbzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:14:59 crc kubenswrapper[4958]: I1008 08:14:59.297525 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8add678c-078e-4875-9347-bfc1dfbc09b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8add678c-078e-4875-9347-bfc1dfbc09b9" (UID: "8add678c-078e-4875-9347-bfc1dfbc09b9"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:59 crc kubenswrapper[4958]: I1008 08:14:59.299835 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8add678c-078e-4875-9347-bfc1dfbc09b9-config-data" (OuterVolumeSpecName: "config-data") pod "8add678c-078e-4875-9347-bfc1dfbc09b9" (UID: "8add678c-078e-4875-9347-bfc1dfbc09b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:14:59 crc kubenswrapper[4958]: I1008 08:14:59.359407 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbbzl\" (UniqueName: \"kubernetes.io/projected/8add678c-078e-4875-9347-bfc1dfbc09b9-kube-api-access-mbbzl\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:59 crc kubenswrapper[4958]: I1008 08:14:59.359454 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8add678c-078e-4875-9347-bfc1dfbc09b9-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:14:59 crc kubenswrapper[4958]: I1008 08:14:59.359467 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8add678c-078e-4875-9347-bfc1dfbc09b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.003688 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.003787 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.043682 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8add678c-078e-4875-9347-bfc1dfbc09b9","Type":"ContainerDied","Data":"ec1d41d71ed6c0810f93ce3487585a63a0769c6f3ec09be1e62393d8a5040ba9"} Oct 08 08:15:00 crc kubenswrapper[4958]: 
I1008 08:15:00.043773 4958 scope.go:117] "RemoveContainer" containerID="53c04cbaa3b66ae24b49ba0dfcf77fd02995f8e5d1f1f79e647d1ed8c888e8ec" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.043838 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.090929 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.110208 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.120263 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 08:15:00 crc kubenswrapper[4958]: E1008 08:15:00.121371 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8add678c-078e-4875-9347-bfc1dfbc09b9" containerName="nova-scheduler-scheduler" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.127490 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8add678c-078e-4875-9347-bfc1dfbc09b9" containerName="nova-scheduler-scheduler" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.128265 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8add678c-078e-4875-9347-bfc1dfbc09b9" containerName="nova-scheduler-scheduler" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.129145 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.132571 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.148089 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.169147 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4"] Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.171076 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.173924 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.174237 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.188446 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4"] Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.281677 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-config-data\") pod \"nova-scheduler-0\" (UID: \"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473\") " pod="openstack/nova-scheduler-0" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.281965 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgtsm\" (UniqueName: 
\"kubernetes.io/projected/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-kube-api-access-bgtsm\") pod \"nova-scheduler-0\" (UID: \"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473\") " pod="openstack/nova-scheduler-0" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.282062 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4f6f528-cc42-4c5b-a0b1-7440e503f188-secret-volume\") pod \"collect-profiles-29331855-tqcw4\" (UID: \"d4f6f528-cc42-4c5b-a0b1-7440e503f188\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.282179 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473\") " pod="openstack/nova-scheduler-0" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.282291 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4f6f528-cc42-4c5b-a0b1-7440e503f188-config-volume\") pod \"collect-profiles-29331855-tqcw4\" (UID: \"d4f6f528-cc42-4c5b-a0b1-7440e503f188\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.282380 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q86lz\" (UniqueName: \"kubernetes.io/projected/d4f6f528-cc42-4c5b-a0b1-7440e503f188-kube-api-access-q86lz\") pod \"collect-profiles-29331855-tqcw4\" (UID: \"d4f6f528-cc42-4c5b-a0b1-7440e503f188\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.385093 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgtsm\" (UniqueName: \"kubernetes.io/projected/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-kube-api-access-bgtsm\") pod \"nova-scheduler-0\" (UID: \"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473\") " pod="openstack/nova-scheduler-0" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.385173 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4f6f528-cc42-4c5b-a0b1-7440e503f188-secret-volume\") pod \"collect-profiles-29331855-tqcw4\" (UID: \"d4f6f528-cc42-4c5b-a0b1-7440e503f188\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.385233 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473\") " pod="openstack/nova-scheduler-0" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.385284 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4f6f528-cc42-4c5b-a0b1-7440e503f188-config-volume\") pod \"collect-profiles-29331855-tqcw4\" (UID: \"d4f6f528-cc42-4c5b-a0b1-7440e503f188\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.385331 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q86lz\" (UniqueName: \"kubernetes.io/projected/d4f6f528-cc42-4c5b-a0b1-7440e503f188-kube-api-access-q86lz\") pod \"collect-profiles-29331855-tqcw4\" (UID: \"d4f6f528-cc42-4c5b-a0b1-7440e503f188\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" Oct 08 08:15:00 crc kubenswrapper[4958]: 
I1008 08:15:00.385410 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-config-data\") pod \"nova-scheduler-0\" (UID: \"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473\") " pod="openstack/nova-scheduler-0" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.387137 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4f6f528-cc42-4c5b-a0b1-7440e503f188-config-volume\") pod \"collect-profiles-29331855-tqcw4\" (UID: \"d4f6f528-cc42-4c5b-a0b1-7440e503f188\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.399479 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4f6f528-cc42-4c5b-a0b1-7440e503f188-secret-volume\") pod \"collect-profiles-29331855-tqcw4\" (UID: \"d4f6f528-cc42-4c5b-a0b1-7440e503f188\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.399808 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473\") " pod="openstack/nova-scheduler-0" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.399870 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-config-data\") pod \"nova-scheduler-0\" (UID: \"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473\") " pod="openstack/nova-scheduler-0" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.403372 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bgtsm\" (UniqueName: \"kubernetes.io/projected/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-kube-api-access-bgtsm\") pod \"nova-scheduler-0\" (UID: \"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473\") " pod="openstack/nova-scheduler-0" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.414851 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q86lz\" (UniqueName: \"kubernetes.io/projected/d4f6f528-cc42-4c5b-a0b1-7440e503f188-kube-api-access-q86lz\") pod \"collect-profiles-29331855-tqcw4\" (UID: \"d4f6f528-cc42-4c5b-a0b1-7440e503f188\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.457135 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.503329 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.954287 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 08:15:00 crc kubenswrapper[4958]: W1008 08:15:00.961501 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4d1d4c1_82c0_4fd0_b97c_4fff7269e473.slice/crio-6f0b84d50a5d62efe4586337ebc3c80d55dfadd9c4303a4d67208131d407ae7b WatchSource:0}: Error finding container 6f0b84d50a5d62efe4586337ebc3c80d55dfadd9c4303a4d67208131d407ae7b: Status 404 returned error can't find the container with id 6f0b84d50a5d62efe4586337ebc3c80d55dfadd9c4303a4d67208131d407ae7b Oct 08 08:15:00 crc kubenswrapper[4958]: I1008 08:15:00.985540 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.059502 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473","Type":"ContainerStarted","Data":"6f0b84d50a5d62efe4586337ebc3c80d55dfadd9c4303a4d67208131d407ae7b"} Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.064350 4958 generic.go:334] "Generic (PLEG): container finished" podID="aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" containerID="f67edf6b0c658b235974403b4ee70a3300d6fb8cdb431f924bcc8d5f2fefaf42" exitCode=0 Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.064399 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7","Type":"ContainerDied","Data":"f67edf6b0c658b235974403b4ee70a3300d6fb8cdb431f924bcc8d5f2fefaf42"} Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.064409 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.064439 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7","Type":"ContainerDied","Data":"0b6c46b9892d1ba98b4e65b465b3fc7f07161282b61c078802659a18a958eb9f"} Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.064468 4958 scope.go:117] "RemoveContainer" containerID="f67edf6b0c658b235974403b4ee70a3300d6fb8cdb431f924bcc8d5f2fefaf42" Oct 08 08:15:01 crc kubenswrapper[4958]: W1008 08:15:01.075388 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4f6f528_cc42_4c5b_a0b1_7440e503f188.slice/crio-427b4fa30e39c1d92d0fc324318a23db42e758aa5b4efa5e21aca8980a5f5bf7 WatchSource:0}: Error finding container 427b4fa30e39c1d92d0fc324318a23db42e758aa5b4efa5e21aca8980a5f5bf7: Status 404 returned error can't find the container with id 427b4fa30e39c1d92d0fc324318a23db42e758aa5b4efa5e21aca8980a5f5bf7 Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.075595 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4"] Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.101818 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-logs\") pod \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\" (UID: \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.102056 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-config-data\") pod \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\" (UID: \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.102135 
4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-combined-ca-bundle\") pod \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\" (UID: \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.102255 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gzhr\" (UniqueName: \"kubernetes.io/projected/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-kube-api-access-9gzhr\") pod \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\" (UID: \"aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7\") " Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.103911 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-logs" (OuterVolumeSpecName: "logs") pod "aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" (UID: "aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.121067 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-kube-api-access-9gzhr" (OuterVolumeSpecName: "kube-api-access-9gzhr") pod "aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" (UID: "aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7"). InnerVolumeSpecName "kube-api-access-9gzhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.145498 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-config-data" (OuterVolumeSpecName: "config-data") pod "aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" (UID: "aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.145706 4958 scope.go:117] "RemoveContainer" containerID="a0ee252ef7cb7b0143efa329d80f672267bc3f24f5d118ddd2f9ae7d4c7b452c" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.160687 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" (UID: "aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.173453 4958 scope.go:117] "RemoveContainer" containerID="f67edf6b0c658b235974403b4ee70a3300d6fb8cdb431f924bcc8d5f2fefaf42" Oct 08 08:15:01 crc kubenswrapper[4958]: E1008 08:15:01.173914 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f67edf6b0c658b235974403b4ee70a3300d6fb8cdb431f924bcc8d5f2fefaf42\": container with ID starting with f67edf6b0c658b235974403b4ee70a3300d6fb8cdb431f924bcc8d5f2fefaf42 not found: ID does not exist" containerID="f67edf6b0c658b235974403b4ee70a3300d6fb8cdb431f924bcc8d5f2fefaf42" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.174400 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f67edf6b0c658b235974403b4ee70a3300d6fb8cdb431f924bcc8d5f2fefaf42"} err="failed to get container status \"f67edf6b0c658b235974403b4ee70a3300d6fb8cdb431f924bcc8d5f2fefaf42\": rpc error: code = NotFound desc = could not find container \"f67edf6b0c658b235974403b4ee70a3300d6fb8cdb431f924bcc8d5f2fefaf42\": container with ID starting with f67edf6b0c658b235974403b4ee70a3300d6fb8cdb431f924bcc8d5f2fefaf42 not found: ID does not exist" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.174441 4958 scope.go:117] 
"RemoveContainer" containerID="a0ee252ef7cb7b0143efa329d80f672267bc3f24f5d118ddd2f9ae7d4c7b452c" Oct 08 08:15:01 crc kubenswrapper[4958]: E1008 08:15:01.174910 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0ee252ef7cb7b0143efa329d80f672267bc3f24f5d118ddd2f9ae7d4c7b452c\": container with ID starting with a0ee252ef7cb7b0143efa329d80f672267bc3f24f5d118ddd2f9ae7d4c7b452c not found: ID does not exist" containerID="a0ee252ef7cb7b0143efa329d80f672267bc3f24f5d118ddd2f9ae7d4c7b452c" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.174999 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0ee252ef7cb7b0143efa329d80f672267bc3f24f5d118ddd2f9ae7d4c7b452c"} err="failed to get container status \"a0ee252ef7cb7b0143efa329d80f672267bc3f24f5d118ddd2f9ae7d4c7b452c\": rpc error: code = NotFound desc = could not find container \"a0ee252ef7cb7b0143efa329d80f672267bc3f24f5d118ddd2f9ae7d4c7b452c\": container with ID starting with a0ee252ef7cb7b0143efa329d80f672267bc3f24f5d118ddd2f9ae7d4c7b452c not found: ID does not exist" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.205566 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gzhr\" (UniqueName: \"kubernetes.io/projected/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-kube-api-access-9gzhr\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.205623 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.205638 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 
08:15:01.205652 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.424684 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.436879 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.444504 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 08 08:15:01 crc kubenswrapper[4958]: E1008 08:15:01.444956 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" containerName="nova-api-api" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.444973 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" containerName="nova-api-api" Oct 08 08:15:01 crc kubenswrapper[4958]: E1008 08:15:01.445003 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" containerName="nova-api-log" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.445009 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" containerName="nova-api-log" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.445202 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" containerName="nova-api-api" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.445231 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" containerName="nova-api-log" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.446282 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.450483 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.461914 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.589297 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8add678c-078e-4875-9347-bfc1dfbc09b9" path="/var/lib/kubelet/pods/8add678c-078e-4875-9347-bfc1dfbc09b9/volumes" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.590093 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7" path="/var/lib/kubelet/pods/aff83f83-abb3-4a0b-a53f-d85c8c0ce6b7/volumes" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.613135 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " pod="openstack/nova-api-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.613266 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-config-data\") pod \"nova-api-0\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " pod="openstack/nova-api-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.613288 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-logs\") pod \"nova-api-0\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " pod="openstack/nova-api-0" Oct 08 08:15:01 crc kubenswrapper[4958]: 
I1008 08:15:01.613306 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4gp7\" (UniqueName: \"kubernetes.io/projected/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-kube-api-access-f4gp7\") pod \"nova-api-0\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " pod="openstack/nova-api-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.661829 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.661886 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.715370 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " pod="openstack/nova-api-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.715742 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-config-data\") pod \"nova-api-0\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " pod="openstack/nova-api-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.715828 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-logs\") pod \"nova-api-0\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " pod="openstack/nova-api-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.715900 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4gp7\" (UniqueName: \"kubernetes.io/projected/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-kube-api-access-f4gp7\") pod 
\"nova-api-0\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " pod="openstack/nova-api-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.716659 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-logs\") pod \"nova-api-0\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " pod="openstack/nova-api-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.734468 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " pod="openstack/nova-api-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.741984 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-config-data\") pod \"nova-api-0\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " pod="openstack/nova-api-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.744378 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4gp7\" (UniqueName: \"kubernetes.io/projected/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-kube-api-access-f4gp7\") pod \"nova-api-0\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " pod="openstack/nova-api-0" Oct 08 08:15:01 crc kubenswrapper[4958]: I1008 08:15:01.882654 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 08:15:02 crc kubenswrapper[4958]: I1008 08:15:02.091624 4958 generic.go:334] "Generic (PLEG): container finished" podID="d4f6f528-cc42-4c5b-a0b1-7440e503f188" containerID="3f7b08372b887a224468086332697419e8452a13dde277d3db15efc35bd3d50e" exitCode=0 Oct 08 08:15:02 crc kubenswrapper[4958]: I1008 08:15:02.091754 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" event={"ID":"d4f6f528-cc42-4c5b-a0b1-7440e503f188","Type":"ContainerDied","Data":"3f7b08372b887a224468086332697419e8452a13dde277d3db15efc35bd3d50e"} Oct 08 08:15:02 crc kubenswrapper[4958]: I1008 08:15:02.091821 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" event={"ID":"d4f6f528-cc42-4c5b-a0b1-7440e503f188","Type":"ContainerStarted","Data":"427b4fa30e39c1d92d0fc324318a23db42e758aa5b4efa5e21aca8980a5f5bf7"} Oct 08 08:15:02 crc kubenswrapper[4958]: I1008 08:15:02.106877 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473","Type":"ContainerStarted","Data":"995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b"} Oct 08 08:15:02 crc kubenswrapper[4958]: I1008 08:15:02.129672 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.1296457540000002 podStartE2EDuration="2.129645754s" podCreationTimestamp="2025-10-08 08:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:15:02.125688767 +0000 UTC m=+6045.255381398" watchObservedRunningTime="2025-10-08 08:15:02.129645754 +0000 UTC m=+6045.259338375" Oct 08 08:15:02 crc kubenswrapper[4958]: I1008 08:15:02.199862 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-0"] Oct 08 08:15:02 crc kubenswrapper[4958]: W1008 08:15:02.202527 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4bb49bf_33f0_4f38_8a0d_e3ab2b3a0efe.slice/crio-b0b01e687cce69677d9d355fcdfe28e902dc5ae12ee14a75259ee954fab40bd7 WatchSource:0}: Error finding container b0b01e687cce69677d9d355fcdfe28e902dc5ae12ee14a75259ee954fab40bd7: Status 404 returned error can't find the container with id b0b01e687cce69677d9d355fcdfe28e902dc5ae12ee14a75259ee954fab40bd7 Oct 08 08:15:02 crc kubenswrapper[4958]: I1008 08:15:02.576969 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:15:02 crc kubenswrapper[4958]: E1008 08:15:02.577617 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:15:02 crc kubenswrapper[4958]: I1008 08:15:02.669239 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="65f973da-e0ee-44b0-8a30-4ae4540d1642" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.95:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 08:15:02 crc kubenswrapper[4958]: I1008 08:15:02.677221 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="65f973da-e0ee-44b0-8a30-4ae4540d1642" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.95:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 
08:15:03 crc kubenswrapper[4958]: I1008 08:15:03.118687 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe","Type":"ContainerStarted","Data":"e0696f88960c6bd59a8d7f9ec8cdf25860671af3d8f862063f591bc9c21a5fa6"} Oct 08 08:15:03 crc kubenswrapper[4958]: I1008 08:15:03.118728 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe","Type":"ContainerStarted","Data":"da649a7e3b66deed0c06cb0bc95f237a8b0db6bc2fa3dede7115b0af1b71f0c9"} Oct 08 08:15:03 crc kubenswrapper[4958]: I1008 08:15:03.118742 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe","Type":"ContainerStarted","Data":"b0b01e687cce69677d9d355fcdfe28e902dc5ae12ee14a75259ee954fab40bd7"} Oct 08 08:15:03 crc kubenswrapper[4958]: I1008 08:15:03.146666 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.146643763 podStartE2EDuration="2.146643763s" podCreationTimestamp="2025-10-08 08:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:15:03.143259792 +0000 UTC m=+6046.272952393" watchObservedRunningTime="2025-10-08 08:15:03.146643763 +0000 UTC m=+6046.276336364" Oct 08 08:15:03 crc kubenswrapper[4958]: I1008 08:15:03.429303 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" Oct 08 08:15:03 crc kubenswrapper[4958]: I1008 08:15:03.568309 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q86lz\" (UniqueName: \"kubernetes.io/projected/d4f6f528-cc42-4c5b-a0b1-7440e503f188-kube-api-access-q86lz\") pod \"d4f6f528-cc42-4c5b-a0b1-7440e503f188\" (UID: \"d4f6f528-cc42-4c5b-a0b1-7440e503f188\") " Oct 08 08:15:03 crc kubenswrapper[4958]: I1008 08:15:03.568378 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4f6f528-cc42-4c5b-a0b1-7440e503f188-secret-volume\") pod \"d4f6f528-cc42-4c5b-a0b1-7440e503f188\" (UID: \"d4f6f528-cc42-4c5b-a0b1-7440e503f188\") " Oct 08 08:15:03 crc kubenswrapper[4958]: I1008 08:15:03.568452 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4f6f528-cc42-4c5b-a0b1-7440e503f188-config-volume\") pod \"d4f6f528-cc42-4c5b-a0b1-7440e503f188\" (UID: \"d4f6f528-cc42-4c5b-a0b1-7440e503f188\") " Oct 08 08:15:03 crc kubenswrapper[4958]: I1008 08:15:03.569345 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4f6f528-cc42-4c5b-a0b1-7440e503f188-config-volume" (OuterVolumeSpecName: "config-volume") pod "d4f6f528-cc42-4c5b-a0b1-7440e503f188" (UID: "d4f6f528-cc42-4c5b-a0b1-7440e503f188"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:15:03 crc kubenswrapper[4958]: I1008 08:15:03.570380 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4f6f528-cc42-4c5b-a0b1-7440e503f188-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:03 crc kubenswrapper[4958]: I1008 08:15:03.573286 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f6f528-cc42-4c5b-a0b1-7440e503f188-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d4f6f528-cc42-4c5b-a0b1-7440e503f188" (UID: "d4f6f528-cc42-4c5b-a0b1-7440e503f188"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:15:03 crc kubenswrapper[4958]: I1008 08:15:03.581295 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f6f528-cc42-4c5b-a0b1-7440e503f188-kube-api-access-q86lz" (OuterVolumeSpecName: "kube-api-access-q86lz") pod "d4f6f528-cc42-4c5b-a0b1-7440e503f188" (UID: "d4f6f528-cc42-4c5b-a0b1-7440e503f188"). InnerVolumeSpecName "kube-api-access-q86lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:15:03 crc kubenswrapper[4958]: I1008 08:15:03.672438 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q86lz\" (UniqueName: \"kubernetes.io/projected/d4f6f528-cc42-4c5b-a0b1-7440e503f188-kube-api-access-q86lz\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:03 crc kubenswrapper[4958]: I1008 08:15:03.672669 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4f6f528-cc42-4c5b-a0b1-7440e503f188-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:04 crc kubenswrapper[4958]: I1008 08:15:04.133817 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" Oct 08 08:15:04 crc kubenswrapper[4958]: I1008 08:15:04.135847 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4" event={"ID":"d4f6f528-cc42-4c5b-a0b1-7440e503f188","Type":"ContainerDied","Data":"427b4fa30e39c1d92d0fc324318a23db42e758aa5b4efa5e21aca8980a5f5bf7"} Oct 08 08:15:04 crc kubenswrapper[4958]: I1008 08:15:04.135903 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="427b4fa30e39c1d92d0fc324318a23db42e758aa5b4efa5e21aca8980a5f5bf7" Oct 08 08:15:04 crc kubenswrapper[4958]: I1008 08:15:04.574115 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg"] Oct 08 08:15:04 crc kubenswrapper[4958]: I1008 08:15:04.584730 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331810-cr5zg"] Oct 08 08:15:05 crc kubenswrapper[4958]: I1008 08:15:05.457698 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 08 08:15:05 crc kubenswrapper[4958]: I1008 08:15:05.597995 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4" path="/var/lib/kubelet/pods/211143a2-0ef8-4ee6-8c2d-2e3ac0e9a8f4/volumes" Oct 08 08:15:06 crc kubenswrapper[4958]: I1008 08:15:06.037473 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-df68z"] Oct 08 08:15:06 crc kubenswrapper[4958]: I1008 08:15:06.064618 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-df68z"] Oct 08 08:15:07 crc kubenswrapper[4958]: I1008 08:15:07.600468 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22902302-0a41-43b7-8f38-35521559ce16" 
path="/var/lib/kubelet/pods/22902302-0a41-43b7-8f38-35521559ce16/volumes" Oct 08 08:15:10 crc kubenswrapper[4958]: I1008 08:15:10.458262 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 08 08:15:10 crc kubenswrapper[4958]: I1008 08:15:10.505874 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 08 08:15:11 crc kubenswrapper[4958]: I1008 08:15:11.269823 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 08 08:15:11 crc kubenswrapper[4958]: I1008 08:15:11.673305 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 08 08:15:11 crc kubenswrapper[4958]: I1008 08:15:11.673981 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 08 08:15:11 crc kubenswrapper[4958]: I1008 08:15:11.684292 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 08 08:15:11 crc kubenswrapper[4958]: I1008 08:15:11.883702 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 08:15:11 crc kubenswrapper[4958]: I1008 08:15:11.885321 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 08:15:12 crc kubenswrapper[4958]: I1008 08:15:12.241453 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 08 08:15:12 crc kubenswrapper[4958]: I1008 08:15:12.965176 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.98:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 08:15:12 crc kubenswrapper[4958]: 
I1008 08:15:12.965325 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.98:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 08:15:14 crc kubenswrapper[4958]: I1008 08:15:14.577587 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:15:14 crc kubenswrapper[4958]: E1008 08:15:14.579326 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:15:21 crc kubenswrapper[4958]: I1008 08:15:21.890710 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 08:15:21 crc kubenswrapper[4958]: I1008 08:15:21.891680 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 08:15:21 crc kubenswrapper[4958]: I1008 08:15:21.891842 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 08:15:21 crc kubenswrapper[4958]: I1008 08:15:21.895825 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.351315 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.357123 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 08:15:22 crc 
kubenswrapper[4958]: I1008 08:15:22.608611 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78f6dfd77-6hg6s"] Oct 08 08:15:22 crc kubenswrapper[4958]: E1008 08:15:22.609185 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f6f528-cc42-4c5b-a0b1-7440e503f188" containerName="collect-profiles" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.609204 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f6f528-cc42-4c5b-a0b1-7440e503f188" containerName="collect-profiles" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.609437 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4f6f528-cc42-4c5b-a0b1-7440e503f188" containerName="collect-profiles" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.610632 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.634103 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78f6dfd77-6hg6s"] Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.718338 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-dns-svc\") pod \"dnsmasq-dns-78f6dfd77-6hg6s\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.718400 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-config\") pod \"dnsmasq-dns-78f6dfd77-6hg6s\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.718790 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdrvw\" (UniqueName: \"kubernetes.io/projected/c47e8a4d-fe22-4a8f-9959-750046c2dbff-kube-api-access-qdrvw\") pod \"dnsmasq-dns-78f6dfd77-6hg6s\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.718899 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-ovsdbserver-nb\") pod \"dnsmasq-dns-78f6dfd77-6hg6s\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.719158 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-ovsdbserver-sb\") pod \"dnsmasq-dns-78f6dfd77-6hg6s\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.837175 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-ovsdbserver-sb\") pod \"dnsmasq-dns-78f6dfd77-6hg6s\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.837409 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-dns-svc\") pod \"dnsmasq-dns-78f6dfd77-6hg6s\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.837459 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-config\") pod \"dnsmasq-dns-78f6dfd77-6hg6s\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.837699 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-ovsdbserver-sb\") pod \"dnsmasq-dns-78f6dfd77-6hg6s\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.837747 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdrvw\" (UniqueName: \"kubernetes.io/projected/c47e8a4d-fe22-4a8f-9959-750046c2dbff-kube-api-access-qdrvw\") pod \"dnsmasq-dns-78f6dfd77-6hg6s\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.837841 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-ovsdbserver-nb\") pod \"dnsmasq-dns-78f6dfd77-6hg6s\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.838468 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-config\") pod \"dnsmasq-dns-78f6dfd77-6hg6s\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.839232 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-dns-svc\") pod \"dnsmasq-dns-78f6dfd77-6hg6s\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.839248 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-ovsdbserver-nb\") pod \"dnsmasq-dns-78f6dfd77-6hg6s\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.863929 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdrvw\" (UniqueName: \"kubernetes.io/projected/c47e8a4d-fe22-4a8f-9959-750046c2dbff-kube-api-access-qdrvw\") pod \"dnsmasq-dns-78f6dfd77-6hg6s\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:22 crc kubenswrapper[4958]: I1008 08:15:22.933754 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:23 crc kubenswrapper[4958]: I1008 08:15:23.447927 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78f6dfd77-6hg6s"] Oct 08 08:15:23 crc kubenswrapper[4958]: W1008 08:15:23.457833 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc47e8a4d_fe22_4a8f_9959_750046c2dbff.slice/crio-1c9b77b7ca9a8d996a4e8a9055e6bba0a4f1934e35b736bfeb9c67aaa252c835 WatchSource:0}: Error finding container 1c9b77b7ca9a8d996a4e8a9055e6bba0a4f1934e35b736bfeb9c67aaa252c835: Status 404 returned error can't find the container with id 1c9b77b7ca9a8d996a4e8a9055e6bba0a4f1934e35b736bfeb9c67aaa252c835 Oct 08 08:15:24 crc kubenswrapper[4958]: I1008 08:15:24.374818 4958 generic.go:334] "Generic (PLEG): container finished" podID="c47e8a4d-fe22-4a8f-9959-750046c2dbff" containerID="49929e8edbaf7bf8ecd2efb44c81eeaf18bc5d8ff7e6025bca12570d9dfd5f3f" exitCode=0 Oct 08 08:15:24 crc kubenswrapper[4958]: I1008 08:15:24.374914 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" event={"ID":"c47e8a4d-fe22-4a8f-9959-750046c2dbff","Type":"ContainerDied","Data":"49929e8edbaf7bf8ecd2efb44c81eeaf18bc5d8ff7e6025bca12570d9dfd5f3f"} Oct 08 08:15:24 crc kubenswrapper[4958]: I1008 08:15:24.375336 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" event={"ID":"c47e8a4d-fe22-4a8f-9959-750046c2dbff","Type":"ContainerStarted","Data":"1c9b77b7ca9a8d996a4e8a9055e6bba0a4f1934e35b736bfeb9c67aaa252c835"} Oct 08 08:15:25 crc kubenswrapper[4958]: I1008 08:15:25.384326 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" event={"ID":"c47e8a4d-fe22-4a8f-9959-750046c2dbff","Type":"ContainerStarted","Data":"c655b62ff67368ed6a10b1f2951965e9441fc519ab2f4a368e1fcedc59abde6d"} Oct 08 08:15:25 crc 
kubenswrapper[4958]: I1008 08:15:25.385899 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:25 crc kubenswrapper[4958]: I1008 08:15:25.412540 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" podStartSLOduration=3.412521026 podStartE2EDuration="3.412521026s" podCreationTimestamp="2025-10-08 08:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:15:25.402785213 +0000 UTC m=+6068.532477824" watchObservedRunningTime="2025-10-08 08:15:25.412521026 +0000 UTC m=+6068.542213627" Oct 08 08:15:25 crc kubenswrapper[4958]: I1008 08:15:25.762627 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 08:15:25 crc kubenswrapper[4958]: I1008 08:15:25.767235 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" containerName="nova-api-log" containerID="cri-o://da649a7e3b66deed0c06cb0bc95f237a8b0db6bc2fa3dede7115b0af1b71f0c9" gracePeriod=30 Oct 08 08:15:25 crc kubenswrapper[4958]: I1008 08:15:25.767735 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" containerName="nova-api-api" containerID="cri-o://e0696f88960c6bd59a8d7f9ec8cdf25860671af3d8f862063f591bc9c21a5fa6" gracePeriod=30 Oct 08 08:15:26 crc kubenswrapper[4958]: I1008 08:15:26.394156 4958 generic.go:334] "Generic (PLEG): container finished" podID="b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" containerID="da649a7e3b66deed0c06cb0bc95f237a8b0db6bc2fa3dede7115b0af1b71f0c9" exitCode=143 Oct 08 08:15:26 crc kubenswrapper[4958]: I1008 08:15:26.394223 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe","Type":"ContainerDied","Data":"da649a7e3b66deed0c06cb0bc95f237a8b0db6bc2fa3dede7115b0af1b71f0c9"} Oct 08 08:15:28 crc kubenswrapper[4958]: I1008 08:15:28.576823 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:15:28 crc kubenswrapper[4958]: E1008 08:15:28.579919 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.435723 4958 generic.go:334] "Generic (PLEG): container finished" podID="b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" containerID="e0696f88960c6bd59a8d7f9ec8cdf25860671af3d8f862063f591bc9c21a5fa6" exitCode=0 Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.435801 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe","Type":"ContainerDied","Data":"e0696f88960c6bd59a8d7f9ec8cdf25860671af3d8f862063f591bc9c21a5fa6"} Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.436078 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe","Type":"ContainerDied","Data":"b0b01e687cce69677d9d355fcdfe28e902dc5ae12ee14a75259ee954fab40bd7"} Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.436098 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0b01e687cce69677d9d355fcdfe28e902dc5ae12ee14a75259ee954fab40bd7" Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.489557 4958 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.672582 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-logs\") pod \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.672819 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4gp7\" (UniqueName: \"kubernetes.io/projected/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-kube-api-access-f4gp7\") pod \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.672850 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-config-data\") pod \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.672988 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-combined-ca-bundle\") pod \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\" (UID: \"b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe\") " Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.675415 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-logs" (OuterVolumeSpecName: "logs") pod "b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" (UID: "b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.684440 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-kube-api-access-f4gp7" (OuterVolumeSpecName: "kube-api-access-f4gp7") pod "b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" (UID: "b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe"). InnerVolumeSpecName "kube-api-access-f4gp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.713428 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" (UID: "b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.720012 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-config-data" (OuterVolumeSpecName: "config-data") pod "b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" (UID: "b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.775023 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4gp7\" (UniqueName: \"kubernetes.io/projected/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-kube-api-access-f4gp7\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.775059 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.775072 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:29 crc kubenswrapper[4958]: I1008 08:15:29.775083 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.448352 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.518059 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.537556 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.548272 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 08 08:15:30 crc kubenswrapper[4958]: E1008 08:15:30.548744 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" containerName="nova-api-api" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.548760 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" containerName="nova-api-api" Oct 08 08:15:30 crc kubenswrapper[4958]: E1008 08:15:30.548780 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" containerName="nova-api-log" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.548786 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" containerName="nova-api-log" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.549022 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" containerName="nova-api-api" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.549041 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" containerName="nova-api-log" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.550143 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.552532 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.553387 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.555083 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.570859 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.702119 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-internal-tls-certs\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.704943 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-config-data\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.705084 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-public-tls-certs\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.706119 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.706366 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tm6s\" (UniqueName: \"kubernetes.io/projected/db1ceb64-6b38-468e-947a-77dfd6d79194-kube-api-access-2tm6s\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.706405 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db1ceb64-6b38-468e-947a-77dfd6d79194-logs\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.808706 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-config-data\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.809220 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-public-tls-certs\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.809310 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 
08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.809468 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tm6s\" (UniqueName: \"kubernetes.io/projected/db1ceb64-6b38-468e-947a-77dfd6d79194-kube-api-access-2tm6s\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.809506 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db1ceb64-6b38-468e-947a-77dfd6d79194-logs\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.809588 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-internal-tls-certs\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.810248 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db1ceb64-6b38-468e-947a-77dfd6d79194-logs\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.816056 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-public-tls-certs\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.816359 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-config-data\") pod \"nova-api-0\" (UID: 
\"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.816646 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-internal-tls-certs\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.819850 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.828091 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tm6s\" (UniqueName: \"kubernetes.io/projected/db1ceb64-6b38-468e-947a-77dfd6d79194-kube-api-access-2tm6s\") pod \"nova-api-0\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " pod="openstack/nova-api-0" Oct 08 08:15:30 crc kubenswrapper[4958]: I1008 08:15:30.891498 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 08:15:31 crc kubenswrapper[4958]: I1008 08:15:31.418681 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 08:15:31 crc kubenswrapper[4958]: I1008 08:15:31.460397 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db1ceb64-6b38-468e-947a-77dfd6d79194","Type":"ContainerStarted","Data":"d22b53b5c6b3a07109ee42de6a739011baabcaaf29a74ac021231cbe1fbbdf65"} Oct 08 08:15:31 crc kubenswrapper[4958]: I1008 08:15:31.605827 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe" path="/var/lib/kubelet/pods/b4bb49bf-33f0-4f38-8a0d-e3ab2b3a0efe/volumes" Oct 08 08:15:32 crc kubenswrapper[4958]: I1008 08:15:32.476168 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db1ceb64-6b38-468e-947a-77dfd6d79194","Type":"ContainerStarted","Data":"a771664f578cd8a5189ef2b9541adf0b97613f1b9c7f52da4270ef3bb98aca34"} Oct 08 08:15:32 crc kubenswrapper[4958]: I1008 08:15:32.476756 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db1ceb64-6b38-468e-947a-77dfd6d79194","Type":"ContainerStarted","Data":"6c2bd6d62428e0f02518d8f61d0cc5461ff70e202877fef55a1c964bdfe0607b"} Oct 08 08:15:32 crc kubenswrapper[4958]: I1008 08:15:32.510778 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.510745991 podStartE2EDuration="2.510745991s" podCreationTimestamp="2025-10-08 08:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:15:32.509328023 +0000 UTC m=+6075.639020654" watchObservedRunningTime="2025-10-08 08:15:32.510745991 +0000 UTC m=+6075.640438622" Oct 08 08:15:32 crc kubenswrapper[4958]: I1008 08:15:32.936205 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.016179 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d96845bf9-55dwk"] Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.016525 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d96845bf9-55dwk" podUID="492302e1-a488-4f72-9c6a-a6147340d970" containerName="dnsmasq-dns" containerID="cri-o://1b92dbe901bdf178227d7d957751eca42ca7d34f7d41b372b825d297fdc4f642" gracePeriod=10 Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.486092 4958 generic.go:334] "Generic (PLEG): container finished" podID="492302e1-a488-4f72-9c6a-a6147340d970" containerID="1b92dbe901bdf178227d7d957751eca42ca7d34f7d41b372b825d297fdc4f642" exitCode=0 Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.486176 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d96845bf9-55dwk" event={"ID":"492302e1-a488-4f72-9c6a-a6147340d970","Type":"ContainerDied","Data":"1b92dbe901bdf178227d7d957751eca42ca7d34f7d41b372b825d297fdc4f642"} Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.486550 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d96845bf9-55dwk" event={"ID":"492302e1-a488-4f72-9c6a-a6147340d970","Type":"ContainerDied","Data":"eed1a96dbd5602db2af946f060972586a51fac9048f43a9f2676c2afd7f329aa"} Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.486566 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eed1a96dbd5602db2af946f060972586a51fac9048f43a9f2676c2afd7f329aa" Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.551193 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d96845bf9-55dwk" Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.676843 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-ovsdbserver-sb\") pod \"492302e1-a488-4f72-9c6a-a6147340d970\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.676963 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-config\") pod \"492302e1-a488-4f72-9c6a-a6147340d970\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.677004 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-ovsdbserver-nb\") pod \"492302e1-a488-4f72-9c6a-a6147340d970\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.677116 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-dns-svc\") pod \"492302e1-a488-4f72-9c6a-a6147340d970\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.677172 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s4pg\" (UniqueName: \"kubernetes.io/projected/492302e1-a488-4f72-9c6a-a6147340d970-kube-api-access-9s4pg\") pod \"492302e1-a488-4f72-9c6a-a6147340d970\" (UID: \"492302e1-a488-4f72-9c6a-a6147340d970\") " Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.684750 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/492302e1-a488-4f72-9c6a-a6147340d970-kube-api-access-9s4pg" (OuterVolumeSpecName: "kube-api-access-9s4pg") pod "492302e1-a488-4f72-9c6a-a6147340d970" (UID: "492302e1-a488-4f72-9c6a-a6147340d970"). InnerVolumeSpecName "kube-api-access-9s4pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.721973 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-config" (OuterVolumeSpecName: "config") pod "492302e1-a488-4f72-9c6a-a6147340d970" (UID: "492302e1-a488-4f72-9c6a-a6147340d970"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.723963 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "492302e1-a488-4f72-9c6a-a6147340d970" (UID: "492302e1-a488-4f72-9c6a-a6147340d970"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.745943 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "492302e1-a488-4f72-9c6a-a6147340d970" (UID: "492302e1-a488-4f72-9c6a-a6147340d970"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.750694 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "492302e1-a488-4f72-9c6a-a6147340d970" (UID: "492302e1-a488-4f72-9c6a-a6147340d970"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.785824 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.785881 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.785901 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.785920 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/492302e1-a488-4f72-9c6a-a6147340d970-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:33 crc kubenswrapper[4958]: I1008 08:15:33.785937 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s4pg\" (UniqueName: \"kubernetes.io/projected/492302e1-a488-4f72-9c6a-a6147340d970-kube-api-access-9s4pg\") on node \"crc\" DevicePath \"\"" Oct 08 08:15:34 crc kubenswrapper[4958]: I1008 08:15:34.498508 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d96845bf9-55dwk" Oct 08 08:15:34 crc kubenswrapper[4958]: I1008 08:15:34.542987 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d96845bf9-55dwk"] Oct 08 08:15:34 crc kubenswrapper[4958]: I1008 08:15:34.553266 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d96845bf9-55dwk"] Oct 08 08:15:35 crc kubenswrapper[4958]: I1008 08:15:35.595297 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="492302e1-a488-4f72-9c6a-a6147340d970" path="/var/lib/kubelet/pods/492302e1-a488-4f72-9c6a-a6147340d970/volumes" Oct 08 08:15:39 crc kubenswrapper[4958]: I1008 08:15:39.576782 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:15:39 crc kubenswrapper[4958]: E1008 08:15:39.579184 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:15:39 crc kubenswrapper[4958]: I1008 08:15:39.970650 4958 scope.go:117] "RemoveContainer" containerID="2c72a30d62d0f67eef6fd66117a4f084650cc519df3e0bfe6116cb7f8804b880" Oct 08 08:15:40 crc kubenswrapper[4958]: I1008 08:15:40.048659 4958 scope.go:117] "RemoveContainer" containerID="bb6cb9f529082e6f55a10481e7d79b85b96458d014c42fc78a57fe1f24d569b4" Oct 08 08:15:40 crc kubenswrapper[4958]: I1008 08:15:40.088055 4958 scope.go:117] "RemoveContainer" containerID="f6dc98b678362491dd85e6c0c1aedc3270eff489c76277354a8c2c510a023028" Oct 08 08:15:40 crc kubenswrapper[4958]: I1008 08:15:40.130327 4958 scope.go:117] "RemoveContainer" 
containerID="05b96fe9dcdb9fae7f3c3e47e8cfa396d8243e232e5012518cae06fa888d1004" Oct 08 08:15:40 crc kubenswrapper[4958]: I1008 08:15:40.892594 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 08:15:40 crc kubenswrapper[4958]: I1008 08:15:40.893119 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 08:15:41 crc kubenswrapper[4958]: I1008 08:15:41.909163 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="db1ceb64-6b38-468e-947a-77dfd6d79194" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.100:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 08:15:41 crc kubenswrapper[4958]: I1008 08:15:41.909121 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="db1ceb64-6b38-468e-947a-77dfd6d79194" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.100:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 08:15:50 crc kubenswrapper[4958]: I1008 08:15:50.901081 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 08:15:50 crc kubenswrapper[4958]: I1008 08:15:50.901536 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 08:15:50 crc kubenswrapper[4958]: I1008 08:15:50.902005 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 08:15:50 crc kubenswrapper[4958]: I1008 08:15:50.902063 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 08:15:50 crc kubenswrapper[4958]: I1008 08:15:50.913538 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 08:15:50 crc kubenswrapper[4958]: I1008 
08:15:50.937418 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 08:15:53 crc kubenswrapper[4958]: I1008 08:15:53.577023 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:15:53 crc kubenswrapper[4958]: E1008 08:15:53.577600 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:16:08 crc kubenswrapper[4958]: I1008 08:16:08.576771 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:16:08 crc kubenswrapper[4958]: E1008 08:16:08.577384 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:16:11 crc kubenswrapper[4958]: I1008 08:16:11.922889 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6tbx2"] Oct 08 08:16:11 crc kubenswrapper[4958]: E1008 08:16:11.924035 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492302e1-a488-4f72-9c6a-a6147340d970" containerName="init" Oct 08 08:16:11 crc kubenswrapper[4958]: I1008 08:16:11.924055 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="492302e1-a488-4f72-9c6a-a6147340d970" containerName="init" Oct 08 
08:16:11 crc kubenswrapper[4958]: E1008 08:16:11.924088 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492302e1-a488-4f72-9c6a-a6147340d970" containerName="dnsmasq-dns" Oct 08 08:16:11 crc kubenswrapper[4958]: I1008 08:16:11.924096 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="492302e1-a488-4f72-9c6a-a6147340d970" containerName="dnsmasq-dns" Oct 08 08:16:11 crc kubenswrapper[4958]: I1008 08:16:11.924327 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="492302e1-a488-4f72-9c6a-a6147340d970" containerName="dnsmasq-dns" Oct 08 08:16:11 crc kubenswrapper[4958]: I1008 08:16:11.925134 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:11 crc kubenswrapper[4958]: I1008 08:16:11.929220 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 08 08:16:11 crc kubenswrapper[4958]: I1008 08:16:11.930556 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-dckj7" Oct 08 08:16:11 crc kubenswrapper[4958]: I1008 08:16:11.930810 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 08 08:16:11 crc kubenswrapper[4958]: I1008 08:16:11.952011 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-hrj56"] Oct 08 08:16:11 crc kubenswrapper[4958]: I1008 08:16:11.954417 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:11 crc kubenswrapper[4958]: I1008 08:16:11.969264 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6tbx2"] Oct 08 08:16:11 crc kubenswrapper[4958]: I1008 08:16:11.978766 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hrj56"] Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.064431 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12392965-0a62-4b97-bd6f-fb401006313c-combined-ca-bundle\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.064489 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12392965-0a62-4b97-bd6f-fb401006313c-var-run-ovn\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.064524 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12392965-0a62-4b97-bd6f-fb401006313c-var-log-ovn\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.064614 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12392965-0a62-4b97-bd6f-fb401006313c-var-run\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.064670 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96db30c5-800d-4bf0-afb7-5c67001f8382-scripts\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.064698 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/96db30c5-800d-4bf0-afb7-5c67001f8382-var-log\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.064719 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/96db30c5-800d-4bf0-afb7-5c67001f8382-var-lib\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.064745 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d87kf\" (UniqueName: \"kubernetes.io/projected/96db30c5-800d-4bf0-afb7-5c67001f8382-kube-api-access-d87kf\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.064849 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12392965-0a62-4b97-bd6f-fb401006313c-scripts\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.064875 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/96db30c5-800d-4bf0-afb7-5c67001f8382-etc-ovs\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.064909 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/96db30c5-800d-4bf0-afb7-5c67001f8382-var-run\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.064943 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/12392965-0a62-4b97-bd6f-fb401006313c-ovn-controller-tls-certs\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.065018 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72pqs\" (UniqueName: \"kubernetes.io/projected/12392965-0a62-4b97-bd6f-fb401006313c-kube-api-access-72pqs\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167121 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72pqs\" (UniqueName: \"kubernetes.io/projected/12392965-0a62-4b97-bd6f-fb401006313c-kube-api-access-72pqs\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167172 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12392965-0a62-4b97-bd6f-fb401006313c-combined-ca-bundle\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167201 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12392965-0a62-4b97-bd6f-fb401006313c-var-run-ovn\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167238 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12392965-0a62-4b97-bd6f-fb401006313c-var-log-ovn\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167291 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12392965-0a62-4b97-bd6f-fb401006313c-var-run\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167346 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96db30c5-800d-4bf0-afb7-5c67001f8382-scripts\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167370 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/96db30c5-800d-4bf0-afb7-5c67001f8382-var-log\") pod 
\"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167391 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d87kf\" (UniqueName: \"kubernetes.io/projected/96db30c5-800d-4bf0-afb7-5c67001f8382-kube-api-access-d87kf\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167410 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/96db30c5-800d-4bf0-afb7-5c67001f8382-var-lib\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167463 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12392965-0a62-4b97-bd6f-fb401006313c-scripts\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167501 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/96db30c5-800d-4bf0-afb7-5c67001f8382-etc-ovs\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167537 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/96db30c5-800d-4bf0-afb7-5c67001f8382-var-run\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc 
kubenswrapper[4958]: I1008 08:16:12.167573 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/12392965-0a62-4b97-bd6f-fb401006313c-ovn-controller-tls-certs\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167675 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12392965-0a62-4b97-bd6f-fb401006313c-var-run-ovn\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167704 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/96db30c5-800d-4bf0-afb7-5c67001f8382-var-log\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167685 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12392965-0a62-4b97-bd6f-fb401006313c-var-run\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167704 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/96db30c5-800d-4bf0-afb7-5c67001f8382-var-lib\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167793 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/96db30c5-800d-4bf0-afb7-5c67001f8382-etc-ovs\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167810 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/96db30c5-800d-4bf0-afb7-5c67001f8382-var-run\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.167886 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12392965-0a62-4b97-bd6f-fb401006313c-var-log-ovn\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.170338 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12392965-0a62-4b97-bd6f-fb401006313c-scripts\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.171428 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/96db30c5-800d-4bf0-afb7-5c67001f8382-scripts\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.173873 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12392965-0a62-4b97-bd6f-fb401006313c-combined-ca-bundle\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 
08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.174115 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/12392965-0a62-4b97-bd6f-fb401006313c-ovn-controller-tls-certs\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.187339 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72pqs\" (UniqueName: \"kubernetes.io/projected/12392965-0a62-4b97-bd6f-fb401006313c-kube-api-access-72pqs\") pod \"ovn-controller-6tbx2\" (UID: \"12392965-0a62-4b97-bd6f-fb401006313c\") " pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.189275 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d87kf\" (UniqueName: \"kubernetes.io/projected/96db30c5-800d-4bf0-afb7-5c67001f8382-kube-api-access-d87kf\") pod \"ovn-controller-ovs-hrj56\" (UID: \"96db30c5-800d-4bf0-afb7-5c67001f8382\") " pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.258654 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.272218 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:12 crc kubenswrapper[4958]: I1008 08:16:12.818825 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6tbx2"] Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:12.999667 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6tbx2" event={"ID":"12392965-0a62-4b97-bd6f-fb401006313c","Type":"ContainerStarted","Data":"501cc3778df7753cc055c08ff8b0b56dfa5ad932d06d9935d0db88e1a6ec4b2b"} Oct 08 08:16:13 crc kubenswrapper[4958]: W1008 08:16:13.161673 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96db30c5_800d_4bf0_afb7_5c67001f8382.slice/crio-2c4e00e64804dabde371cbb444ab7bedbc86bc20b364bfae68147fe3afd2d848 WatchSource:0}: Error finding container 2c4e00e64804dabde371cbb444ab7bedbc86bc20b364bfae68147fe3afd2d848: Status 404 returned error can't find the container with id 2c4e00e64804dabde371cbb444ab7bedbc86bc20b364bfae68147fe3afd2d848 Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.164109 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hrj56"] Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.535286 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-shpqz"] Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.536707 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.541358 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.571772 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-shpqz"] Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.605607 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/744988e9-f0b2-4f8e-a7d8-ba176dbba150-combined-ca-bundle\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.605657 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/744988e9-f0b2-4f8e-a7d8-ba176dbba150-ovn-rundir\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.605980 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/744988e9-f0b2-4f8e-a7d8-ba176dbba150-ovs-rundir\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.606146 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/744988e9-f0b2-4f8e-a7d8-ba176dbba150-config\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " 
pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.606215 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/744988e9-f0b2-4f8e-a7d8-ba176dbba150-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.606403 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnhlp\" (UniqueName: \"kubernetes.io/projected/744988e9-f0b2-4f8e-a7d8-ba176dbba150-kube-api-access-lnhlp\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.708732 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnhlp\" (UniqueName: \"kubernetes.io/projected/744988e9-f0b2-4f8e-a7d8-ba176dbba150-kube-api-access-lnhlp\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.708823 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/744988e9-f0b2-4f8e-a7d8-ba176dbba150-combined-ca-bundle\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.708850 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/744988e9-f0b2-4f8e-a7d8-ba176dbba150-ovn-rundir\") pod \"ovn-controller-metrics-shpqz\" (UID: 
\"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.708897 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/744988e9-f0b2-4f8e-a7d8-ba176dbba150-ovs-rundir\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.709220 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/744988e9-f0b2-4f8e-a7d8-ba176dbba150-ovn-rundir\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.709216 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/744988e9-f0b2-4f8e-a7d8-ba176dbba150-ovs-rundir\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.709705 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/744988e9-f0b2-4f8e-a7d8-ba176dbba150-config\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.709744 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/744988e9-f0b2-4f8e-a7d8-ba176dbba150-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 
08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.710828 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/744988e9-f0b2-4f8e-a7d8-ba176dbba150-config\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.713879 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/744988e9-f0b2-4f8e-a7d8-ba176dbba150-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.715651 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/744988e9-f0b2-4f8e-a7d8-ba176dbba150-combined-ca-bundle\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.726108 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnhlp\" (UniqueName: \"kubernetes.io/projected/744988e9-f0b2-4f8e-a7d8-ba176dbba150-kube-api-access-lnhlp\") pod \"ovn-controller-metrics-shpqz\" (UID: \"744988e9-f0b2-4f8e-a7d8-ba176dbba150\") " pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:13 crc kubenswrapper[4958]: I1008 08:16:13.875358 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-shpqz" Oct 08 08:16:14 crc kubenswrapper[4958]: I1008 08:16:14.009221 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6tbx2" event={"ID":"12392965-0a62-4b97-bd6f-fb401006313c","Type":"ContainerStarted","Data":"cddd8f3f4a91ff2f10af9ca8d0fb8dc9bdbf069519dc57b76c3628d8a39a213c"} Oct 08 08:16:14 crc kubenswrapper[4958]: I1008 08:16:14.010268 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:14 crc kubenswrapper[4958]: I1008 08:16:14.014702 4958 generic.go:334] "Generic (PLEG): container finished" podID="96db30c5-800d-4bf0-afb7-5c67001f8382" containerID="35bb90a20088cf24d60a30e23ac079c4fa7557e31b7bd7174163fd47acad8da7" exitCode=0 Oct 08 08:16:14 crc kubenswrapper[4958]: I1008 08:16:14.014773 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hrj56" event={"ID":"96db30c5-800d-4bf0-afb7-5c67001f8382","Type":"ContainerDied","Data":"35bb90a20088cf24d60a30e23ac079c4fa7557e31b7bd7174163fd47acad8da7"} Oct 08 08:16:14 crc kubenswrapper[4958]: I1008 08:16:14.014862 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hrj56" event={"ID":"96db30c5-800d-4bf0-afb7-5c67001f8382","Type":"ContainerStarted","Data":"2c4e00e64804dabde371cbb444ab7bedbc86bc20b364bfae68147fe3afd2d848"} Oct 08 08:16:14 crc kubenswrapper[4958]: I1008 08:16:14.033354 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6tbx2" podStartSLOduration=3.033330076 podStartE2EDuration="3.033330076s" podCreationTimestamp="2025-10-08 08:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:16:14.023477018 +0000 UTC m=+6117.153169619" watchObservedRunningTime="2025-10-08 08:16:14.033330076 +0000 UTC m=+6117.163022677" Oct 08 08:16:14 
crc kubenswrapper[4958]: I1008 08:16:14.335198 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-shpqz"] Oct 08 08:16:14 crc kubenswrapper[4958]: W1008 08:16:14.342401 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod744988e9_f0b2_4f8e_a7d8_ba176dbba150.slice/crio-b14fe2c10c5ca43d9a2e795636431534497a495e23c24a173ab082cda54b294d WatchSource:0}: Error finding container b14fe2c10c5ca43d9a2e795636431534497a495e23c24a173ab082cda54b294d: Status 404 returned error can't find the container with id b14fe2c10c5ca43d9a2e795636431534497a495e23c24a173ab082cda54b294d Oct 08 08:16:14 crc kubenswrapper[4958]: I1008 08:16:14.663285 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-84wfd"] Oct 08 08:16:14 crc kubenswrapper[4958]: I1008 08:16:14.666889 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-84wfd" Oct 08 08:16:14 crc kubenswrapper[4958]: I1008 08:16:14.672382 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-84wfd"] Oct 08 08:16:14 crc kubenswrapper[4958]: I1008 08:16:14.731395 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7n8k\" (UniqueName: \"kubernetes.io/projected/9dcfd6f7-4693-4af1-891f-f51707e488d3-kube-api-access-z7n8k\") pod \"octavia-db-create-84wfd\" (UID: \"9dcfd6f7-4693-4af1-891f-f51707e488d3\") " pod="openstack/octavia-db-create-84wfd" Oct 08 08:16:14 crc kubenswrapper[4958]: I1008 08:16:14.833889 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7n8k\" (UniqueName: \"kubernetes.io/projected/9dcfd6f7-4693-4af1-891f-f51707e488d3-kube-api-access-z7n8k\") pod \"octavia-db-create-84wfd\" (UID: \"9dcfd6f7-4693-4af1-891f-f51707e488d3\") " pod="openstack/octavia-db-create-84wfd" Oct 08 
08:16:14 crc kubenswrapper[4958]: I1008 08:16:14.859319 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7n8k\" (UniqueName: \"kubernetes.io/projected/9dcfd6f7-4693-4af1-891f-f51707e488d3-kube-api-access-z7n8k\") pod \"octavia-db-create-84wfd\" (UID: \"9dcfd6f7-4693-4af1-891f-f51707e488d3\") " pod="openstack/octavia-db-create-84wfd" Oct 08 08:16:14 crc kubenswrapper[4958]: I1008 08:16:14.982252 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-84wfd" Oct 08 08:16:15 crc kubenswrapper[4958]: I1008 08:16:15.032164 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hrj56" event={"ID":"96db30c5-800d-4bf0-afb7-5c67001f8382","Type":"ContainerStarted","Data":"c071ab92804996dad05787897577d566d826ff75333e721cb9dc160aa7adade9"} Oct 08 08:16:15 crc kubenswrapper[4958]: I1008 08:16:15.032211 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hrj56" event={"ID":"96db30c5-800d-4bf0-afb7-5c67001f8382","Type":"ContainerStarted","Data":"9e4df38c18c183a6697199c4841c05cfa3fe382b5b85778425d3b037b19c9d91"} Oct 08 08:16:15 crc kubenswrapper[4958]: I1008 08:16:15.033051 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:15 crc kubenswrapper[4958]: I1008 08:16:15.033117 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:15 crc kubenswrapper[4958]: I1008 08:16:15.034789 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-shpqz" event={"ID":"744988e9-f0b2-4f8e-a7d8-ba176dbba150","Type":"ContainerStarted","Data":"4c669769b94fa6993175224dad75f85d663f0d2082c4ff2391808946c9c668b1"} Oct 08 08:16:15 crc kubenswrapper[4958]: I1008 08:16:15.034818 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-metrics-shpqz" event={"ID":"744988e9-f0b2-4f8e-a7d8-ba176dbba150","Type":"ContainerStarted","Data":"b14fe2c10c5ca43d9a2e795636431534497a495e23c24a173ab082cda54b294d"} Oct 08 08:16:15 crc kubenswrapper[4958]: I1008 08:16:15.063478 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-hrj56" podStartSLOduration=4.0634527 podStartE2EDuration="4.0634527s" podCreationTimestamp="2025-10-08 08:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:16:15.058392713 +0000 UTC m=+6118.188085324" watchObservedRunningTime="2025-10-08 08:16:15.0634527 +0000 UTC m=+6118.193145341" Oct 08 08:16:15 crc kubenswrapper[4958]: I1008 08:16:15.442818 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-shpqz" podStartSLOduration=2.442794669 podStartE2EDuration="2.442794669s" podCreationTimestamp="2025-10-08 08:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:16:15.089760513 +0000 UTC m=+6118.219453104" watchObservedRunningTime="2025-10-08 08:16:15.442794669 +0000 UTC m=+6118.572487270" Oct 08 08:16:15 crc kubenswrapper[4958]: I1008 08:16:15.451336 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-84wfd"] Oct 08 08:16:16 crc kubenswrapper[4958]: I1008 08:16:16.048476 4958 generic.go:334] "Generic (PLEG): container finished" podID="9dcfd6f7-4693-4af1-891f-f51707e488d3" containerID="1f8903cdf8caae6b96fa0af0cf79a58ee2bb885130a02be6c37c9929a120d418" exitCode=0 Oct 08 08:16:16 crc kubenswrapper[4958]: I1008 08:16:16.048899 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-84wfd" 
event={"ID":"9dcfd6f7-4693-4af1-891f-f51707e488d3","Type":"ContainerDied","Data":"1f8903cdf8caae6b96fa0af0cf79a58ee2bb885130a02be6c37c9929a120d418"} Oct 08 08:16:16 crc kubenswrapper[4958]: I1008 08:16:16.051189 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-84wfd" event={"ID":"9dcfd6f7-4693-4af1-891f-f51707e488d3","Type":"ContainerStarted","Data":"cff7bdd84040a59e285bafe6682168fa51c6d35be50b3b121af6ab40855e80ea"} Oct 08 08:16:17 crc kubenswrapper[4958]: I1008 08:16:17.549426 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-84wfd" Oct 08 08:16:17 crc kubenswrapper[4958]: I1008 08:16:17.724557 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7n8k\" (UniqueName: \"kubernetes.io/projected/9dcfd6f7-4693-4af1-891f-f51707e488d3-kube-api-access-z7n8k\") pod \"9dcfd6f7-4693-4af1-891f-f51707e488d3\" (UID: \"9dcfd6f7-4693-4af1-891f-f51707e488d3\") " Oct 08 08:16:17 crc kubenswrapper[4958]: I1008 08:16:17.733703 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dcfd6f7-4693-4af1-891f-f51707e488d3-kube-api-access-z7n8k" (OuterVolumeSpecName: "kube-api-access-z7n8k") pod "9dcfd6f7-4693-4af1-891f-f51707e488d3" (UID: "9dcfd6f7-4693-4af1-891f-f51707e488d3"). InnerVolumeSpecName "kube-api-access-z7n8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:16:17 crc kubenswrapper[4958]: I1008 08:16:17.829242 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7n8k\" (UniqueName: \"kubernetes.io/projected/9dcfd6f7-4693-4af1-891f-f51707e488d3-kube-api-access-z7n8k\") on node \"crc\" DevicePath \"\"" Oct 08 08:16:18 crc kubenswrapper[4958]: I1008 08:16:18.076474 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-84wfd" event={"ID":"9dcfd6f7-4693-4af1-891f-f51707e488d3","Type":"ContainerDied","Data":"cff7bdd84040a59e285bafe6682168fa51c6d35be50b3b121af6ab40855e80ea"} Oct 08 08:16:18 crc kubenswrapper[4958]: I1008 08:16:18.076533 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cff7bdd84040a59e285bafe6682168fa51c6d35be50b3b121af6ab40855e80ea" Oct 08 08:16:18 crc kubenswrapper[4958]: I1008 08:16:18.076605 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-84wfd" Oct 08 08:16:21 crc kubenswrapper[4958]: I1008 08:16:21.576987 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:16:21 crc kubenswrapper[4958]: E1008 08:16:21.577834 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:16:25 crc kubenswrapper[4958]: I1008 08:16:25.749664 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-d938-account-create-ddspb"] Oct 08 08:16:25 crc kubenswrapper[4958]: E1008 08:16:25.752151 4958 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="9dcfd6f7-4693-4af1-891f-f51707e488d3" containerName="mariadb-database-create" Oct 08 08:16:25 crc kubenswrapper[4958]: I1008 08:16:25.752350 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dcfd6f7-4693-4af1-891f-f51707e488d3" containerName="mariadb-database-create" Oct 08 08:16:25 crc kubenswrapper[4958]: I1008 08:16:25.752841 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dcfd6f7-4693-4af1-891f-f51707e488d3" containerName="mariadb-database-create" Oct 08 08:16:25 crc kubenswrapper[4958]: I1008 08:16:25.753942 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-d938-account-create-ddspb" Oct 08 08:16:25 crc kubenswrapper[4958]: I1008 08:16:25.758029 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Oct 08 08:16:25 crc kubenswrapper[4958]: I1008 08:16:25.783723 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-d938-account-create-ddspb"] Oct 08 08:16:25 crc kubenswrapper[4958]: I1008 08:16:25.915684 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cngdf\" (UniqueName: \"kubernetes.io/projected/74895726-5db5-4a55-b25a-4c39bac9e3a9-kube-api-access-cngdf\") pod \"octavia-d938-account-create-ddspb\" (UID: \"74895726-5db5-4a55-b25a-4c39bac9e3a9\") " pod="openstack/octavia-d938-account-create-ddspb" Oct 08 08:16:26 crc kubenswrapper[4958]: I1008 08:16:26.017491 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cngdf\" (UniqueName: \"kubernetes.io/projected/74895726-5db5-4a55-b25a-4c39bac9e3a9-kube-api-access-cngdf\") pod \"octavia-d938-account-create-ddspb\" (UID: \"74895726-5db5-4a55-b25a-4c39bac9e3a9\") " pod="openstack/octavia-d938-account-create-ddspb" Oct 08 08:16:26 crc kubenswrapper[4958]: I1008 08:16:26.052075 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cngdf\" (UniqueName: \"kubernetes.io/projected/74895726-5db5-4a55-b25a-4c39bac9e3a9-kube-api-access-cngdf\") pod \"octavia-d938-account-create-ddspb\" (UID: \"74895726-5db5-4a55-b25a-4c39bac9e3a9\") " pod="openstack/octavia-d938-account-create-ddspb" Oct 08 08:16:26 crc kubenswrapper[4958]: I1008 08:16:26.094572 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-d938-account-create-ddspb" Oct 08 08:16:26 crc kubenswrapper[4958]: I1008 08:16:26.606151 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-d938-account-create-ddspb"] Oct 08 08:16:27 crc kubenswrapper[4958]: I1008 08:16:27.197187 4958 generic.go:334] "Generic (PLEG): container finished" podID="74895726-5db5-4a55-b25a-4c39bac9e3a9" containerID="cb160f56f4a5dd12c225a7a2b59059438df98258d1feb96f9520017912926893" exitCode=0 Oct 08 08:16:27 crc kubenswrapper[4958]: I1008 08:16:27.197313 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-d938-account-create-ddspb" event={"ID":"74895726-5db5-4a55-b25a-4c39bac9e3a9","Type":"ContainerDied","Data":"cb160f56f4a5dd12c225a7a2b59059438df98258d1feb96f9520017912926893"} Oct 08 08:16:27 crc kubenswrapper[4958]: I1008 08:16:27.197620 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-d938-account-create-ddspb" event={"ID":"74895726-5db5-4a55-b25a-4c39bac9e3a9","Type":"ContainerStarted","Data":"420acfaef17b00e4b33479a2bcee0e9ea5197e3b0b283b29a58b3a06ae306099"} Oct 08 08:16:28 crc kubenswrapper[4958]: I1008 08:16:28.656774 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-d938-account-create-ddspb" Oct 08 08:16:28 crc kubenswrapper[4958]: I1008 08:16:28.778349 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cngdf\" (UniqueName: \"kubernetes.io/projected/74895726-5db5-4a55-b25a-4c39bac9e3a9-kube-api-access-cngdf\") pod \"74895726-5db5-4a55-b25a-4c39bac9e3a9\" (UID: \"74895726-5db5-4a55-b25a-4c39bac9e3a9\") " Oct 08 08:16:28 crc kubenswrapper[4958]: I1008 08:16:28.787466 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74895726-5db5-4a55-b25a-4c39bac9e3a9-kube-api-access-cngdf" (OuterVolumeSpecName: "kube-api-access-cngdf") pod "74895726-5db5-4a55-b25a-4c39bac9e3a9" (UID: "74895726-5db5-4a55-b25a-4c39bac9e3a9"). InnerVolumeSpecName "kube-api-access-cngdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:16:28 crc kubenswrapper[4958]: I1008 08:16:28.881605 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cngdf\" (UniqueName: \"kubernetes.io/projected/74895726-5db5-4a55-b25a-4c39bac9e3a9-kube-api-access-cngdf\") on node \"crc\" DevicePath \"\"" Oct 08 08:16:29 crc kubenswrapper[4958]: I1008 08:16:29.224289 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-d938-account-create-ddspb" event={"ID":"74895726-5db5-4a55-b25a-4c39bac9e3a9","Type":"ContainerDied","Data":"420acfaef17b00e4b33479a2bcee0e9ea5197e3b0b283b29a58b3a06ae306099"} Oct 08 08:16:29 crc kubenswrapper[4958]: I1008 08:16:29.224345 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="420acfaef17b00e4b33479a2bcee0e9ea5197e3b0b283b29a58b3a06ae306099" Oct 08 08:16:29 crc kubenswrapper[4958]: I1008 08:16:29.224353 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-d938-account-create-ddspb" Oct 08 08:16:31 crc kubenswrapper[4958]: I1008 08:16:31.750322 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-ck6fd"] Oct 08 08:16:31 crc kubenswrapper[4958]: E1008 08:16:31.752460 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74895726-5db5-4a55-b25a-4c39bac9e3a9" containerName="mariadb-account-create" Oct 08 08:16:31 crc kubenswrapper[4958]: I1008 08:16:31.752557 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="74895726-5db5-4a55-b25a-4c39bac9e3a9" containerName="mariadb-account-create" Oct 08 08:16:31 crc kubenswrapper[4958]: I1008 08:16:31.752892 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="74895726-5db5-4a55-b25a-4c39bac9e3a9" containerName="mariadb-account-create" Oct 08 08:16:31 crc kubenswrapper[4958]: I1008 08:16:31.753808 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-ck6fd" Oct 08 08:16:31 crc kubenswrapper[4958]: I1008 08:16:31.769156 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-ck6fd"] Oct 08 08:16:31 crc kubenswrapper[4958]: I1008 08:16:31.847705 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hj9j\" (UniqueName: \"kubernetes.io/projected/383a0658-d336-4643-9db4-ee4d3643a7ad-kube-api-access-9hj9j\") pod \"octavia-persistence-db-create-ck6fd\" (UID: \"383a0658-d336-4643-9db4-ee4d3643a7ad\") " pod="openstack/octavia-persistence-db-create-ck6fd" Oct 08 08:16:31 crc kubenswrapper[4958]: I1008 08:16:31.949551 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hj9j\" (UniqueName: \"kubernetes.io/projected/383a0658-d336-4643-9db4-ee4d3643a7ad-kube-api-access-9hj9j\") pod \"octavia-persistence-db-create-ck6fd\" (UID: 
\"383a0658-d336-4643-9db4-ee4d3643a7ad\") " pod="openstack/octavia-persistence-db-create-ck6fd" Oct 08 08:16:31 crc kubenswrapper[4958]: I1008 08:16:31.979095 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hj9j\" (UniqueName: \"kubernetes.io/projected/383a0658-d336-4643-9db4-ee4d3643a7ad-kube-api-access-9hj9j\") pod \"octavia-persistence-db-create-ck6fd\" (UID: \"383a0658-d336-4643-9db4-ee4d3643a7ad\") " pod="openstack/octavia-persistence-db-create-ck6fd" Oct 08 08:16:32 crc kubenswrapper[4958]: I1008 08:16:32.085907 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-ck6fd" Oct 08 08:16:32 crc kubenswrapper[4958]: I1008 08:16:32.590823 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-ck6fd"] Oct 08 08:16:33 crc kubenswrapper[4958]: I1008 08:16:33.269409 4958 generic.go:334] "Generic (PLEG): container finished" podID="383a0658-d336-4643-9db4-ee4d3643a7ad" containerID="65c0de218ff7e978339c38dafa13ebdfb843172596c16909d0e7af2e05901c00" exitCode=0 Oct 08 08:16:33 crc kubenswrapper[4958]: I1008 08:16:33.269474 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-ck6fd" event={"ID":"383a0658-d336-4643-9db4-ee4d3643a7ad","Type":"ContainerDied","Data":"65c0de218ff7e978339c38dafa13ebdfb843172596c16909d0e7af2e05901c00"} Oct 08 08:16:33 crc kubenswrapper[4958]: I1008 08:16:33.269824 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-ck6fd" event={"ID":"383a0658-d336-4643-9db4-ee4d3643a7ad","Type":"ContainerStarted","Data":"a10d66a8a941cd0e109c3670ebf7a66a92aa80631c877192393eaeb5ac0b1bc6"} Oct 08 08:16:34 crc kubenswrapper[4958]: I1008 08:16:34.719216 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-ck6fd" Oct 08 08:16:34 crc kubenswrapper[4958]: I1008 08:16:34.815933 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hj9j\" (UniqueName: \"kubernetes.io/projected/383a0658-d336-4643-9db4-ee4d3643a7ad-kube-api-access-9hj9j\") pod \"383a0658-d336-4643-9db4-ee4d3643a7ad\" (UID: \"383a0658-d336-4643-9db4-ee4d3643a7ad\") " Oct 08 08:16:34 crc kubenswrapper[4958]: I1008 08:16:34.825510 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383a0658-d336-4643-9db4-ee4d3643a7ad-kube-api-access-9hj9j" (OuterVolumeSpecName: "kube-api-access-9hj9j") pod "383a0658-d336-4643-9db4-ee4d3643a7ad" (UID: "383a0658-d336-4643-9db4-ee4d3643a7ad"). InnerVolumeSpecName "kube-api-access-9hj9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:16:34 crc kubenswrapper[4958]: I1008 08:16:34.919125 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hj9j\" (UniqueName: \"kubernetes.io/projected/383a0658-d336-4643-9db4-ee4d3643a7ad-kube-api-access-9hj9j\") on node \"crc\" DevicePath \"\"" Oct 08 08:16:35 crc kubenswrapper[4958]: I1008 08:16:35.295120 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-ck6fd" event={"ID":"383a0658-d336-4643-9db4-ee4d3643a7ad","Type":"ContainerDied","Data":"a10d66a8a941cd0e109c3670ebf7a66a92aa80631c877192393eaeb5ac0b1bc6"} Oct 08 08:16:35 crc kubenswrapper[4958]: I1008 08:16:35.295552 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a10d66a8a941cd0e109c3670ebf7a66a92aa80631c877192393eaeb5ac0b1bc6" Oct 08 08:16:35 crc kubenswrapper[4958]: I1008 08:16:35.295253 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-ck6fd" Oct 08 08:16:36 crc kubenswrapper[4958]: I1008 08:16:36.577597 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:16:36 crc kubenswrapper[4958]: E1008 08:16:36.578771 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:16:42 crc kubenswrapper[4958]: I1008 08:16:42.470541 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-103b-account-create-7sv7f"] Oct 08 08:16:42 crc kubenswrapper[4958]: E1008 08:16:42.472253 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383a0658-d336-4643-9db4-ee4d3643a7ad" containerName="mariadb-database-create" Oct 08 08:16:42 crc kubenswrapper[4958]: I1008 08:16:42.472275 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="383a0658-d336-4643-9db4-ee4d3643a7ad" containerName="mariadb-database-create" Oct 08 08:16:42 crc kubenswrapper[4958]: I1008 08:16:42.472829 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="383a0658-d336-4643-9db4-ee4d3643a7ad" containerName="mariadb-database-create" Oct 08 08:16:42 crc kubenswrapper[4958]: I1008 08:16:42.474558 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-103b-account-create-7sv7f" Oct 08 08:16:42 crc kubenswrapper[4958]: I1008 08:16:42.477232 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Oct 08 08:16:42 crc kubenswrapper[4958]: I1008 08:16:42.482317 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-103b-account-create-7sv7f"] Oct 08 08:16:42 crc kubenswrapper[4958]: I1008 08:16:42.596099 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px5rd\" (UniqueName: \"kubernetes.io/projected/ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f-kube-api-access-px5rd\") pod \"octavia-103b-account-create-7sv7f\" (UID: \"ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f\") " pod="openstack/octavia-103b-account-create-7sv7f" Oct 08 08:16:42 crc kubenswrapper[4958]: I1008 08:16:42.698297 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px5rd\" (UniqueName: \"kubernetes.io/projected/ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f-kube-api-access-px5rd\") pod \"octavia-103b-account-create-7sv7f\" (UID: \"ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f\") " pod="openstack/octavia-103b-account-create-7sv7f" Oct 08 08:16:42 crc kubenswrapper[4958]: I1008 08:16:42.725533 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px5rd\" (UniqueName: \"kubernetes.io/projected/ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f-kube-api-access-px5rd\") pod \"octavia-103b-account-create-7sv7f\" (UID: \"ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f\") " pod="openstack/octavia-103b-account-create-7sv7f" Oct 08 08:16:42 crc kubenswrapper[4958]: I1008 08:16:42.812778 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-103b-account-create-7sv7f" Oct 08 08:16:43 crc kubenswrapper[4958]: I1008 08:16:43.361676 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-103b-account-create-7sv7f"] Oct 08 08:16:43 crc kubenswrapper[4958]: I1008 08:16:43.390862 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-103b-account-create-7sv7f" event={"ID":"ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f","Type":"ContainerStarted","Data":"c71f12ddfaf64900a3d6574776832316194c69b817e98541bb41d8f2f7348db5"} Oct 08 08:16:44 crc kubenswrapper[4958]: I1008 08:16:44.405766 4958 generic.go:334] "Generic (PLEG): container finished" podID="ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f" containerID="136a6aa5c8849a905ee3a27a0fccb2669ec6b8b243d3cfa61b5a9fc7ca18d260" exitCode=0 Oct 08 08:16:44 crc kubenswrapper[4958]: I1008 08:16:44.406159 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-103b-account-create-7sv7f" event={"ID":"ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f","Type":"ContainerDied","Data":"136a6aa5c8849a905ee3a27a0fccb2669ec6b8b243d3cfa61b5a9fc7ca18d260"} Oct 08 08:16:45 crc kubenswrapper[4958]: I1008 08:16:45.853753 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-103b-account-create-7sv7f" Oct 08 08:16:45 crc kubenswrapper[4958]: I1008 08:16:45.965741 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px5rd\" (UniqueName: \"kubernetes.io/projected/ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f-kube-api-access-px5rd\") pod \"ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f\" (UID: \"ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f\") " Oct 08 08:16:45 crc kubenswrapper[4958]: I1008 08:16:45.972630 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f-kube-api-access-px5rd" (OuterVolumeSpecName: "kube-api-access-px5rd") pod "ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f" (UID: "ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f"). InnerVolumeSpecName "kube-api-access-px5rd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:16:46 crc kubenswrapper[4958]: I1008 08:16:46.069399 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px5rd\" (UniqueName: \"kubernetes.io/projected/ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f-kube-api-access-px5rd\") on node \"crc\" DevicePath \"\"" Oct 08 08:16:46 crc kubenswrapper[4958]: I1008 08:16:46.428009 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-103b-account-create-7sv7f" event={"ID":"ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f","Type":"ContainerDied","Data":"c71f12ddfaf64900a3d6574776832316194c69b817e98541bb41d8f2f7348db5"} Oct 08 08:16:46 crc kubenswrapper[4958]: I1008 08:16:46.428047 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c71f12ddfaf64900a3d6574776832316194c69b817e98541bb41d8f2f7348db5" Oct 08 08:16:46 crc kubenswrapper[4958]: I1008 08:16:46.428662 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-103b-account-create-7sv7f" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.332867 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-6tbx2" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.340821 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.343625 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hrj56" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.466719 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6tbx2-config-4zff5"] Oct 08 08:16:47 crc kubenswrapper[4958]: E1008 08:16:47.467874 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f" containerName="mariadb-account-create" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.467898 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f" containerName="mariadb-account-create" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.468203 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f" containerName="mariadb-account-create" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.469202 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.472233 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.490296 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6tbx2-config-4zff5"] Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.600768 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzvds\" (UniqueName: \"kubernetes.io/projected/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-kube-api-access-jzvds\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.600865 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-scripts\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.600960 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-run\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.601020 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-additional-scripts\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: 
\"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.601072 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-log-ovn\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.601096 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-run-ovn\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.703399 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzvds\" (UniqueName: \"kubernetes.io/projected/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-kube-api-access-jzvds\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.703476 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-scripts\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.703580 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-run\") pod 
\"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.703611 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-additional-scripts\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.703682 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-log-ovn\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.703707 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-run-ovn\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.704070 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-run-ovn\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.707024 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-run\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: 
\"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.707736 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-additional-scripts\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.707775 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-log-ovn\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.708915 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-scripts\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.724770 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzvds\" (UniqueName: \"kubernetes.io/projected/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-kube-api-access-jzvds\") pod \"ovn-controller-6tbx2-config-4zff5\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:47 crc kubenswrapper[4958]: I1008 08:16:47.798600 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.345129 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6tbx2-config-4zff5"] Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.413576 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-6547b87d77-bg7gj"] Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.415541 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.418151 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.418358 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-qk76m" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.418566 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.422339 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-ovndbs" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.423788 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6547b87d77-bg7gj"] Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.466950 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6tbx2-config-4zff5" event={"ID":"06ab0c3b-d7ec-426b-b98d-3105fcc74a35","Type":"ContainerStarted","Data":"d67c963d2e057b5ef10b89057495dba28e93d29ee0594996956f1de8182a8ac1"} Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.517794 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f30cd577-4f62-4f09-a3e0-d46c097b6b59-octavia-run\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.517858 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-scripts\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.517889 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-ovndb-tls-certs\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.517919 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-config-data\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.517948 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-combined-ca-bundle\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.518040 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f30cd577-4f62-4f09-a3e0-d46c097b6b59-config-data-merged\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.620178 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/f30cd577-4f62-4f09-a3e0-d46c097b6b59-octavia-run\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.620386 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-scripts\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.620506 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-ovndb-tls-certs\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.620574 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/f30cd577-4f62-4f09-a3e0-d46c097b6b59-octavia-run\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.620675 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-config-data\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.620804 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-combined-ca-bundle\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.620898 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f30cd577-4f62-4f09-a3e0-d46c097b6b59-config-data-merged\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.621269 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f30cd577-4f62-4f09-a3e0-d46c097b6b59-config-data-merged\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.625534 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-combined-ca-bundle\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.625815 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-config-data\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.626432 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-ovndb-tls-certs\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.628490 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-scripts\") pod \"octavia-api-6547b87d77-bg7gj\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:48 crc kubenswrapper[4958]: I1008 08:16:48.782098 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:16:49 crc kubenswrapper[4958]: I1008 08:16:49.253431 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6547b87d77-bg7gj"] Oct 08 08:16:49 crc kubenswrapper[4958]: W1008 08:16:49.259101 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf30cd577_4f62_4f09_a3e0_d46c097b6b59.slice/crio-8acd4b72a0f65249d6a4c26f3bdb8619be8056b21f40990f03185dbdc3992d19 WatchSource:0}: Error finding container 8acd4b72a0f65249d6a4c26f3bdb8619be8056b21f40990f03185dbdc3992d19: Status 404 returned error can't find the container with id 8acd4b72a0f65249d6a4c26f3bdb8619be8056b21f40990f03185dbdc3992d19 Oct 08 08:16:49 crc kubenswrapper[4958]: I1008 08:16:49.262599 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 08:16:49 crc kubenswrapper[4958]: I1008 08:16:49.475773 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6547b87d77-bg7gj" event={"ID":"f30cd577-4f62-4f09-a3e0-d46c097b6b59","Type":"ContainerStarted","Data":"8acd4b72a0f65249d6a4c26f3bdb8619be8056b21f40990f03185dbdc3992d19"} Oct 08 08:16:49 crc kubenswrapper[4958]: I1008 08:16:49.477519 4958 generic.go:334] "Generic (PLEG): container finished" podID="06ab0c3b-d7ec-426b-b98d-3105fcc74a35" containerID="f12889f5df0988837be76c043df62bd6f119144cea7dc91b99c66f6f2f08835e" exitCode=0 Oct 08 08:16:49 crc kubenswrapper[4958]: I1008 08:16:49.477647 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6tbx2-config-4zff5" event={"ID":"06ab0c3b-d7ec-426b-b98d-3105fcc74a35","Type":"ContainerDied","Data":"f12889f5df0988837be76c043df62bd6f119144cea7dc91b99c66f6f2f08835e"} Oct 08 08:16:49 crc kubenswrapper[4958]: I1008 08:16:49.577360 4958 scope.go:117] "RemoveContainer" 
containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:16:49 crc kubenswrapper[4958]: E1008 08:16:49.578271 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.929766 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.971325 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-scripts\") pod \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.971434 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-log-ovn\") pod \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.971485 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-run-ovn\") pod \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.971522 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "06ab0c3b-d7ec-426b-b98d-3105fcc74a35" (UID: "06ab0c3b-d7ec-426b-b98d-3105fcc74a35"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.971563 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzvds\" (UniqueName: \"kubernetes.io/projected/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-kube-api-access-jzvds\") pod \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.971603 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-additional-scripts\") pod \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.971662 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-run\") pod \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\" (UID: \"06ab0c3b-d7ec-426b-b98d-3105fcc74a35\") " Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.971677 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "06ab0c3b-d7ec-426b-b98d-3105fcc74a35" (UID: "06ab0c3b-d7ec-426b-b98d-3105fcc74a35"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.971820 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-run" (OuterVolumeSpecName: "var-run") pod "06ab0c3b-d7ec-426b-b98d-3105fcc74a35" (UID: "06ab0c3b-d7ec-426b-b98d-3105fcc74a35"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.972263 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "06ab0c3b-d7ec-426b-b98d-3105fcc74a35" (UID: "06ab0c3b-d7ec-426b-b98d-3105fcc74a35"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.973116 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-scripts" (OuterVolumeSpecName: "scripts") pod "06ab0c3b-d7ec-426b-b98d-3105fcc74a35" (UID: "06ab0c3b-d7ec-426b-b98d-3105fcc74a35"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.976039 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.976062 4958 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.976071 4958 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.976080 4958 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.976092 4958 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-var-run\") on node \"crc\" DevicePath \"\"" Oct 08 08:16:50 crc kubenswrapper[4958]: I1008 08:16:50.979185 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-kube-api-access-jzvds" (OuterVolumeSpecName: "kube-api-access-jzvds") pod "06ab0c3b-d7ec-426b-b98d-3105fcc74a35" (UID: "06ab0c3b-d7ec-426b-b98d-3105fcc74a35"). InnerVolumeSpecName "kube-api-access-jzvds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:16:51 crc kubenswrapper[4958]: I1008 08:16:51.077608 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzvds\" (UniqueName: \"kubernetes.io/projected/06ab0c3b-d7ec-426b-b98d-3105fcc74a35-kube-api-access-jzvds\") on node \"crc\" DevicePath \"\"" Oct 08 08:16:51 crc kubenswrapper[4958]: I1008 08:16:51.499600 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6tbx2-config-4zff5" event={"ID":"06ab0c3b-d7ec-426b-b98d-3105fcc74a35","Type":"ContainerDied","Data":"d67c963d2e057b5ef10b89057495dba28e93d29ee0594996956f1de8182a8ac1"} Oct 08 08:16:51 crc kubenswrapper[4958]: I1008 08:16:51.500027 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d67c963d2e057b5ef10b89057495dba28e93d29ee0594996956f1de8182a8ac1" Oct 08 08:16:51 crc kubenswrapper[4958]: I1008 08:16:51.500095 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-4zff5" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.012359 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-6tbx2-config-4zff5"] Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.026057 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-6tbx2-config-4zff5"] Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.054570 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6tbx2-config-5vxvw"] Oct 08 08:16:52 crc kubenswrapper[4958]: E1008 08:16:52.055012 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ab0c3b-d7ec-426b-b98d-3105fcc74a35" containerName="ovn-config" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.055026 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ab0c3b-d7ec-426b-b98d-3105fcc74a35" containerName="ovn-config" Oct 08 08:16:52 crc kubenswrapper[4958]: 
I1008 08:16:52.055220 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ab0c3b-d7ec-426b-b98d-3105fcc74a35" containerName="ovn-config" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.055976 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.057906 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.062740 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6tbx2-config-5vxvw"] Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.107753 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj97k\" (UniqueName: \"kubernetes.io/projected/93f0b292-1456-4d43-8687-1cc4b3d3331a-kube-api-access-mj97k\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.107832 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-run\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.107872 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93f0b292-1456-4d43-8687-1cc4b3d3331a-scripts\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.107930 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-run-ovn\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.107973 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/93f0b292-1456-4d43-8687-1cc4b3d3331a-additional-scripts\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.108098 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-log-ovn\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.209629 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-run\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.209697 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93f0b292-1456-4d43-8687-1cc4b3d3331a-scripts\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 
08:16:52.209766 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-run-ovn\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.209798 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/93f0b292-1456-4d43-8687-1cc4b3d3331a-additional-scripts\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.209847 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-log-ovn\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.209884 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj97k\" (UniqueName: \"kubernetes.io/projected/93f0b292-1456-4d43-8687-1cc4b3d3331a-kube-api-access-mj97k\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.210007 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-run\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.210000 
4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-run-ovn\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.210001 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-log-ovn\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.210596 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/93f0b292-1456-4d43-8687-1cc4b3d3331a-additional-scripts\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.214109 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93f0b292-1456-4d43-8687-1cc4b3d3331a-scripts\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.233086 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj97k\" (UniqueName: \"kubernetes.io/projected/93f0b292-1456-4d43-8687-1cc4b3d3331a-kube-api-access-mj97k\") pod \"ovn-controller-6tbx2-config-5vxvw\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.377568 4958 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:16:52 crc kubenswrapper[4958]: I1008 08:16:52.820399 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6tbx2-config-5vxvw"] Oct 08 08:16:53 crc kubenswrapper[4958]: I1008 08:16:53.590815 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ab0c3b-d7ec-426b-b98d-3105fcc74a35" path="/var/lib/kubelet/pods/06ab0c3b-d7ec-426b-b98d-3105fcc74a35/volumes" Oct 08 08:16:56 crc kubenswrapper[4958]: W1008 08:16:56.892817 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93f0b292_1456_4d43_8687_1cc4b3d3331a.slice/crio-4d45eeb0628f23f9b26ea5e96325a65e8181c0a4bcaa567f752a37491b4a1779 WatchSource:0}: Error finding container 4d45eeb0628f23f9b26ea5e96325a65e8181c0a4bcaa567f752a37491b4a1779: Status 404 returned error can't find the container with id 4d45eeb0628f23f9b26ea5e96325a65e8181c0a4bcaa567f752a37491b4a1779 Oct 08 08:16:57 crc kubenswrapper[4958]: I1008 08:16:57.567594 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6tbx2-config-5vxvw" event={"ID":"93f0b292-1456-4d43-8687-1cc4b3d3331a","Type":"ContainerStarted","Data":"4d45eeb0628f23f9b26ea5e96325a65e8181c0a4bcaa567f752a37491b4a1779"} Oct 08 08:17:00 crc kubenswrapper[4958]: I1008 08:17:00.576060 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:17:00 crc kubenswrapper[4958]: E1008 08:17:00.576855 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" 
podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:17:00 crc kubenswrapper[4958]: I1008 08:17:00.598717 4958 generic.go:334] "Generic (PLEG): container finished" podID="f30cd577-4f62-4f09-a3e0-d46c097b6b59" containerID="5fcd6d611cced66ccf698425de4dbd960a21d9d180c4cb801806ee2703456a31" exitCode=0 Oct 08 08:17:00 crc kubenswrapper[4958]: I1008 08:17:00.598772 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6547b87d77-bg7gj" event={"ID":"f30cd577-4f62-4f09-a3e0-d46c097b6b59","Type":"ContainerDied","Data":"5fcd6d611cced66ccf698425de4dbd960a21d9d180c4cb801806ee2703456a31"} Oct 08 08:17:00 crc kubenswrapper[4958]: I1008 08:17:00.602355 4958 generic.go:334] "Generic (PLEG): container finished" podID="93f0b292-1456-4d43-8687-1cc4b3d3331a" containerID="1028747730d313a0217705c668187fbe2c9d27b09e9aab5adebf5664ef9a4704" exitCode=0 Oct 08 08:17:00 crc kubenswrapper[4958]: I1008 08:17:00.602401 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6tbx2-config-5vxvw" event={"ID":"93f0b292-1456-4d43-8687-1cc4b3d3331a","Type":"ContainerDied","Data":"1028747730d313a0217705c668187fbe2c9d27b09e9aab5adebf5664ef9a4704"} Oct 08 08:17:01 crc kubenswrapper[4958]: I1008 08:17:01.617286 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6547b87d77-bg7gj" event={"ID":"f30cd577-4f62-4f09-a3e0-d46c097b6b59","Type":"ContainerStarted","Data":"1df7135646bfc157a27e7356c3ff4decdd5c0f9a503eea8f3f8dd3d726695fda"} Oct 08 08:17:01 crc kubenswrapper[4958]: I1008 08:17:01.617754 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:17:01 crc kubenswrapper[4958]: I1008 08:17:01.617780 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6547b87d77-bg7gj" event={"ID":"f30cd577-4f62-4f09-a3e0-d46c097b6b59","Type":"ContainerStarted","Data":"fc06ccae025db8a190d5e111f5f046f487b0a7ef34dba346ac1d1eca4e02edf4"} Oct 
08 08:17:01 crc kubenswrapper[4958]: I1008 08:17:01.617804 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:17:01 crc kubenswrapper[4958]: I1008 08:17:01.654005 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-6547b87d77-bg7gj" podStartSLOduration=3.234268743 podStartE2EDuration="13.653917332s" podCreationTimestamp="2025-10-08 08:16:48 +0000 UTC" firstStartedPulling="2025-10-08 08:16:49.262379219 +0000 UTC m=+6152.392071820" lastFinishedPulling="2025-10-08 08:16:59.682027808 +0000 UTC m=+6162.811720409" observedRunningTime="2025-10-08 08:17:01.648384022 +0000 UTC m=+6164.778076673" watchObservedRunningTime="2025-10-08 08:17:01.653917332 +0000 UTC m=+6164.783609963" Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.133152 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.314556 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj97k\" (UniqueName: \"kubernetes.io/projected/93f0b292-1456-4d43-8687-1cc4b3d3331a-kube-api-access-mj97k\") pod \"93f0b292-1456-4d43-8687-1cc4b3d3331a\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.314785 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/93f0b292-1456-4d43-8687-1cc4b3d3331a-additional-scripts\") pod \"93f0b292-1456-4d43-8687-1cc4b3d3331a\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.314809 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-log-ovn\") pod 
\"93f0b292-1456-4d43-8687-1cc4b3d3331a\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.315028 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93f0b292-1456-4d43-8687-1cc4b3d3331a-scripts\") pod \"93f0b292-1456-4d43-8687-1cc4b3d3331a\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.315067 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-run-ovn\") pod \"93f0b292-1456-4d43-8687-1cc4b3d3331a\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.315121 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-run\") pod \"93f0b292-1456-4d43-8687-1cc4b3d3331a\" (UID: \"93f0b292-1456-4d43-8687-1cc4b3d3331a\") " Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.315634 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-run" (OuterVolumeSpecName: "var-run") pod "93f0b292-1456-4d43-8687-1cc4b3d3331a" (UID: "93f0b292-1456-4d43-8687-1cc4b3d3331a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.316258 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "93f0b292-1456-4d43-8687-1cc4b3d3331a" (UID: "93f0b292-1456-4d43-8687-1cc4b3d3331a"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.316316 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "93f0b292-1456-4d43-8687-1cc4b3d3331a" (UID: "93f0b292-1456-4d43-8687-1cc4b3d3331a"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.316262 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f0b292-1456-4d43-8687-1cc4b3d3331a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "93f0b292-1456-4d43-8687-1cc4b3d3331a" (UID: "93f0b292-1456-4d43-8687-1cc4b3d3331a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.316819 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f0b292-1456-4d43-8687-1cc4b3d3331a-scripts" (OuterVolumeSpecName: "scripts") pod "93f0b292-1456-4d43-8687-1cc4b3d3331a" (UID: "93f0b292-1456-4d43-8687-1cc4b3d3331a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.321550 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f0b292-1456-4d43-8687-1cc4b3d3331a-kube-api-access-mj97k" (OuterVolumeSpecName: "kube-api-access-mj97k") pod "93f0b292-1456-4d43-8687-1cc4b3d3331a" (UID: "93f0b292-1456-4d43-8687-1cc4b3d3331a"). InnerVolumeSpecName "kube-api-access-mj97k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.417540 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj97k\" (UniqueName: \"kubernetes.io/projected/93f0b292-1456-4d43-8687-1cc4b3d3331a-kube-api-access-mj97k\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.417578 4958 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/93f0b292-1456-4d43-8687-1cc4b3d3331a-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.417590 4958 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.417599 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/93f0b292-1456-4d43-8687-1cc4b3d3331a-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.417609 4958 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.417618 4958 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/93f0b292-1456-4d43-8687-1cc4b3d3331a-var-run\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.627194 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-5vxvw" Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.627384 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6tbx2-config-5vxvw" event={"ID":"93f0b292-1456-4d43-8687-1cc4b3d3331a","Type":"ContainerDied","Data":"4d45eeb0628f23f9b26ea5e96325a65e8181c0a4bcaa567f752a37491b4a1779"} Oct 08 08:17:02 crc kubenswrapper[4958]: I1008 08:17:02.627845 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d45eeb0628f23f9b26ea5e96325a65e8181c0a4bcaa567f752a37491b4a1779" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.223680 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-6tbx2-config-5vxvw"] Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.233997 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-6tbx2-config-5vxvw"] Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.300339 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6tbx2-config-gh6j4"] Oct 08 08:17:03 crc kubenswrapper[4958]: E1008 08:17:03.300840 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f0b292-1456-4d43-8687-1cc4b3d3331a" containerName="ovn-config" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.300862 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f0b292-1456-4d43-8687-1cc4b3d3331a" containerName="ovn-config" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.301155 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f0b292-1456-4d43-8687-1cc4b3d3331a" containerName="ovn-config" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.301925 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.304858 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.315205 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6tbx2-config-gh6j4"] Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.447052 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-scripts\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.447142 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-additional-scripts\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.447197 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-run-ovn\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.447216 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-run\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: 
\"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.447239 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-log-ovn\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.447278 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlzpl\" (UniqueName: \"kubernetes.io/projected/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-kube-api-access-hlzpl\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.549612 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-scripts\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.549709 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-additional-scripts\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.549769 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-run-ovn\") pod 
\"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.549804 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-run\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.549856 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-log-ovn\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.549945 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlzpl\" (UniqueName: \"kubernetes.io/projected/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-kube-api-access-hlzpl\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.550640 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-run-ovn\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.550659 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-run\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: 
\"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.550723 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-log-ovn\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.551653 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-additional-scripts\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.554051 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-scripts\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.588841 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlzpl\" (UniqueName: \"kubernetes.io/projected/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-kube-api-access-hlzpl\") pod \"ovn-controller-6tbx2-config-gh6j4\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.592887 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f0b292-1456-4d43-8687-1cc4b3d3331a" path="/var/lib/kubelet/pods/93f0b292-1456-4d43-8687-1cc4b3d3331a/volumes" Oct 08 08:17:03 crc kubenswrapper[4958]: I1008 08:17:03.648048 4958 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:04 crc kubenswrapper[4958]: W1008 08:17:04.198539 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b90a3d9_5fa5_4344_8288_06d6c81fc3b3.slice/crio-b9bbadc92e5fa25e2c77dc8daa22ad56dc67e57a545e91daacb10cd3acaa3703 WatchSource:0}: Error finding container b9bbadc92e5fa25e2c77dc8daa22ad56dc67e57a545e91daacb10cd3acaa3703: Status 404 returned error can't find the container with id b9bbadc92e5fa25e2c77dc8daa22ad56dc67e57a545e91daacb10cd3acaa3703 Oct 08 08:17:04 crc kubenswrapper[4958]: I1008 08:17:04.200059 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6tbx2-config-gh6j4"] Oct 08 08:17:04 crc kubenswrapper[4958]: I1008 08:17:04.650938 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6tbx2-config-gh6j4" event={"ID":"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3","Type":"ContainerStarted","Data":"c4c90acd6dbf9ed47b117cc40c7a8b4ad80683a3a8b3cc39405d50a3cb0e64ac"} Oct 08 08:17:04 crc kubenswrapper[4958]: I1008 08:17:04.651209 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6tbx2-config-gh6j4" event={"ID":"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3","Type":"ContainerStarted","Data":"b9bbadc92e5fa25e2c77dc8daa22ad56dc67e57a545e91daacb10cd3acaa3703"} Oct 08 08:17:04 crc kubenswrapper[4958]: I1008 08:17:04.667749 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6tbx2-config-gh6j4" podStartSLOduration=1.667731559 podStartE2EDuration="1.667731559s" podCreationTimestamp="2025-10-08 08:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:17:04.666044954 +0000 UTC m=+6167.795737575" watchObservedRunningTime="2025-10-08 
08:17:04.667731559 +0000 UTC m=+6167.797424160" Oct 08 08:17:05 crc kubenswrapper[4958]: I1008 08:17:05.659187 4958 generic.go:334] "Generic (PLEG): container finished" podID="5b90a3d9-5fa5-4344-8288-06d6c81fc3b3" containerID="c4c90acd6dbf9ed47b117cc40c7a8b4ad80683a3a8b3cc39405d50a3cb0e64ac" exitCode=0 Oct 08 08:17:05 crc kubenswrapper[4958]: I1008 08:17:05.659235 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6tbx2-config-gh6j4" event={"ID":"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3","Type":"ContainerDied","Data":"c4c90acd6dbf9ed47b117cc40c7a8b4ad80683a3a8b3cc39405d50a3cb0e64ac"} Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.167569 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.284816 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-6tbx2-config-gh6j4"] Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.292747 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-6tbx2-config-gh6j4"] Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.340340 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-run-ovn\") pod \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.340711 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-additional-scripts\") pod \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.340461 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5b90a3d9-5fa5-4344-8288-06d6c81fc3b3" (UID: "5b90a3d9-5fa5-4344-8288-06d6c81fc3b3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.340758 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-log-ovn\") pod \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.340791 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-scripts\") pod \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.340820 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5b90a3d9-5fa5-4344-8288-06d6c81fc3b3" (UID: "5b90a3d9-5fa5-4344-8288-06d6c81fc3b3"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.340963 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlzpl\" (UniqueName: \"kubernetes.io/projected/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-kube-api-access-hlzpl\") pod \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.340991 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-run\") pod \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\" (UID: \"5b90a3d9-5fa5-4344-8288-06d6c81fc3b3\") " Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.341206 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-run" (OuterVolumeSpecName: "var-run") pod "5b90a3d9-5fa5-4344-8288-06d6c81fc3b3" (UID: "5b90a3d9-5fa5-4344-8288-06d6c81fc3b3"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.341497 4958 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.341515 4958 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.341524 4958 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-var-run\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.342206 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "5b90a3d9-5fa5-4344-8288-06d6c81fc3b3" (UID: "5b90a3d9-5fa5-4344-8288-06d6c81fc3b3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.342502 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-scripts" (OuterVolumeSpecName: "scripts") pod "5b90a3d9-5fa5-4344-8288-06d6c81fc3b3" (UID: "5b90a3d9-5fa5-4344-8288-06d6c81fc3b3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.347121 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-kube-api-access-hlzpl" (OuterVolumeSpecName: "kube-api-access-hlzpl") pod "5b90a3d9-5fa5-4344-8288-06d6c81fc3b3" (UID: "5b90a3d9-5fa5-4344-8288-06d6c81fc3b3"). InnerVolumeSpecName "kube-api-access-hlzpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.396821 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6tbx2-config-62bt2"] Oct 08 08:17:07 crc kubenswrapper[4958]: E1008 08:17:07.397197 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b90a3d9-5fa5-4344-8288-06d6c81fc3b3" containerName="ovn-config" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.397212 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b90a3d9-5fa5-4344-8288-06d6c81fc3b3" containerName="ovn-config" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.397398 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b90a3d9-5fa5-4344-8288-06d6c81fc3b3" containerName="ovn-config" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.397996 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.420847 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6tbx2-config-62bt2"] Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.449538 4958 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.449586 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.449599 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlzpl\" (UniqueName: \"kubernetes.io/projected/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3-kube-api-access-hlzpl\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.551004 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-run-ovn\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.551064 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a24cbc-e89a-41a2-8286-a89182fe84ef-scripts\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.551285 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97whh\" (UniqueName: \"kubernetes.io/projected/c9a24cbc-e89a-41a2-8286-a89182fe84ef-kube-api-access-97whh\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.551427 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-log-ovn\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.551523 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-run\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.551701 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a24cbc-e89a-41a2-8286-a89182fe84ef-additional-scripts\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.589138 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b90a3d9-5fa5-4344-8288-06d6c81fc3b3" path="/var/lib/kubelet/pods/5b90a3d9-5fa5-4344-8288-06d6c81fc3b3/volumes" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.653096 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/c9a24cbc-e89a-41a2-8286-a89182fe84ef-additional-scripts\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.653214 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-run-ovn\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.653247 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a24cbc-e89a-41a2-8286-a89182fe84ef-scripts\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.653299 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97whh\" (UniqueName: \"kubernetes.io/projected/c9a24cbc-e89a-41a2-8286-a89182fe84ef-kube-api-access-97whh\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.653340 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-log-ovn\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.653391 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-run\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.653597 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-run-ovn\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.653624 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-run\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.653689 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-log-ovn\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.653855 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a24cbc-e89a-41a2-8286-a89182fe84ef-additional-scripts\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.656305 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a24cbc-e89a-41a2-8286-a89182fe84ef-scripts\") 
pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.675548 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97whh\" (UniqueName: \"kubernetes.io/projected/c9a24cbc-e89a-41a2-8286-a89182fe84ef-kube-api-access-97whh\") pod \"ovn-controller-6tbx2-config-62bt2\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.682500 4958 scope.go:117] "RemoveContainer" containerID="c4c90acd6dbf9ed47b117cc40c7a8b4ad80683a3a8b3cc39405d50a3cb0e64ac" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.682622 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-gh6j4" Oct 08 08:17:07 crc kubenswrapper[4958]: I1008 08:17:07.738904 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:08 crc kubenswrapper[4958]: I1008 08:17:08.231229 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6tbx2-config-62bt2"] Oct 08 08:17:08 crc kubenswrapper[4958]: W1008 08:17:08.247191 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9a24cbc_e89a_41a2_8286_a89182fe84ef.slice/crio-e03be27192e0129ab5d40fda62eed66df4ddd91d144f5006106cd75f889eadf3 WatchSource:0}: Error finding container e03be27192e0129ab5d40fda62eed66df4ddd91d144f5006106cd75f889eadf3: Status 404 returned error can't find the container with id e03be27192e0129ab5d40fda62eed66df4ddd91d144f5006106cd75f889eadf3 Oct 08 08:17:08 crc kubenswrapper[4958]: I1008 08:17:08.696588 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6tbx2-config-62bt2" event={"ID":"c9a24cbc-e89a-41a2-8286-a89182fe84ef","Type":"ContainerStarted","Data":"5cea7c922cd00bbcbfa43e066f825c3df9ab9a837cf788971356b6e4395a6039"} Oct 08 08:17:08 crc kubenswrapper[4958]: I1008 08:17:08.697024 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6tbx2-config-62bt2" event={"ID":"c9a24cbc-e89a-41a2-8286-a89182fe84ef","Type":"ContainerStarted","Data":"e03be27192e0129ab5d40fda62eed66df4ddd91d144f5006106cd75f889eadf3"} Oct 08 08:17:08 crc kubenswrapper[4958]: I1008 08:17:08.720272 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6tbx2-config-62bt2" podStartSLOduration=1.720239833 podStartE2EDuration="1.720239833s" podCreationTimestamp="2025-10-08 08:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:17:08.714402175 +0000 UTC m=+6171.844094836" watchObservedRunningTime="2025-10-08 08:17:08.720239833 +0000 UTC m=+6171.849932464" 
Oct 08 08:17:09 crc kubenswrapper[4958]: I1008 08:17:09.714271 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9a24cbc-e89a-41a2-8286-a89182fe84ef" containerID="5cea7c922cd00bbcbfa43e066f825c3df9ab9a837cf788971356b6e4395a6039" exitCode=0 Oct 08 08:17:09 crc kubenswrapper[4958]: I1008 08:17:09.714348 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6tbx2-config-62bt2" event={"ID":"c9a24cbc-e89a-41a2-8286-a89182fe84ef","Type":"ContainerDied","Data":"5cea7c922cd00bbcbfa43e066f825c3df9ab9a837cf788971356b6e4395a6039"} Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.130599 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.242725 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-log-ovn\") pod \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.242780 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-run\") pod \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.242801 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-run-ovn\") pod \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.242836 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c9a24cbc-e89a-41a2-8286-a89182fe84ef" (UID: "c9a24cbc-e89a-41a2-8286-a89182fe84ef"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.242866 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-run" (OuterVolumeSpecName: "var-run") pod "c9a24cbc-e89a-41a2-8286-a89182fe84ef" (UID: "c9a24cbc-e89a-41a2-8286-a89182fe84ef"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.242893 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a24cbc-e89a-41a2-8286-a89182fe84ef-scripts\") pod \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.242915 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97whh\" (UniqueName: \"kubernetes.io/projected/c9a24cbc-e89a-41a2-8286-a89182fe84ef-kube-api-access-97whh\") pod \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.242963 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a24cbc-e89a-41a2-8286-a89182fe84ef-additional-scripts\") pod \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\" (UID: \"c9a24cbc-e89a-41a2-8286-a89182fe84ef\") " Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.243029 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-run-ovn" 
(OuterVolumeSpecName: "var-run-ovn") pod "c9a24cbc-e89a-41a2-8286-a89182fe84ef" (UID: "c9a24cbc-e89a-41a2-8286-a89182fe84ef"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.243570 4958 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.243599 4958 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-run\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.243611 4958 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a24cbc-e89a-41a2-8286-a89182fe84ef-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.243708 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a24cbc-e89a-41a2-8286-a89182fe84ef-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c9a24cbc-e89a-41a2-8286-a89182fe84ef" (UID: "c9a24cbc-e89a-41a2-8286-a89182fe84ef"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.243995 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a24cbc-e89a-41a2-8286-a89182fe84ef-scripts" (OuterVolumeSpecName: "scripts") pod "c9a24cbc-e89a-41a2-8286-a89182fe84ef" (UID: "c9a24cbc-e89a-41a2-8286-a89182fe84ef"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.248597 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9a24cbc-e89a-41a2-8286-a89182fe84ef-kube-api-access-97whh" (OuterVolumeSpecName: "kube-api-access-97whh") pod "c9a24cbc-e89a-41a2-8286-a89182fe84ef" (UID: "c9a24cbc-e89a-41a2-8286-a89182fe84ef"). InnerVolumeSpecName "kube-api-access-97whh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.315455 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-6tbx2-config-62bt2"] Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.325139 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-6tbx2-config-62bt2"] Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.345936 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a24cbc-e89a-41a2-8286-a89182fe84ef-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.346011 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97whh\" (UniqueName: \"kubernetes.io/projected/c9a24cbc-e89a-41a2-8286-a89182fe84ef-kube-api-access-97whh\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.346036 4958 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a24cbc-e89a-41a2-8286-a89182fe84ef-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.577449 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:17:11 crc kubenswrapper[4958]: E1008 08:17:11.578354 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.597007 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9a24cbc-e89a-41a2-8286-a89182fe84ef" path="/var/lib/kubelet/pods/c9a24cbc-e89a-41a2-8286-a89182fe84ef/volumes" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.737580 4958 scope.go:117] "RemoveContainer" containerID="5cea7c922cd00bbcbfa43e066f825c3df9ab9a837cf788971356b6e4395a6039" Oct 08 08:17:11 crc kubenswrapper[4958]: I1008 08:17:11.737602 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6tbx2-config-62bt2" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.563818 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-9nlmv"] Oct 08 08:17:14 crc kubenswrapper[4958]: E1008 08:17:14.564848 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a24cbc-e89a-41a2-8286-a89182fe84ef" containerName="ovn-config" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.564875 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a24cbc-e89a-41a2-8286-a89182fe84ef" containerName="ovn-config" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.565258 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9a24cbc-e89a-41a2-8286-a89182fe84ef" containerName="ovn-config" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.567484 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.573825 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.574269 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.575269 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.582605 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-9nlmv"] Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.718555 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/0f1e2c22-078a-46e8-806f-db4f89afee77-hm-ports\") pod \"octavia-rsyslog-9nlmv\" (UID: \"0f1e2c22-078a-46e8-806f-db4f89afee77\") " pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.718718 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0f1e2c22-078a-46e8-806f-db4f89afee77-config-data-merged\") pod \"octavia-rsyslog-9nlmv\" (UID: \"0f1e2c22-078a-46e8-806f-db4f89afee77\") " pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.718789 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1e2c22-078a-46e8-806f-db4f89afee77-config-data\") pod \"octavia-rsyslog-9nlmv\" (UID: \"0f1e2c22-078a-46e8-806f-db4f89afee77\") " pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.720429 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f1e2c22-078a-46e8-806f-db4f89afee77-scripts\") pod \"octavia-rsyslog-9nlmv\" (UID: \"0f1e2c22-078a-46e8-806f-db4f89afee77\") " pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.822391 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f1e2c22-078a-46e8-806f-db4f89afee77-scripts\") pod \"octavia-rsyslog-9nlmv\" (UID: \"0f1e2c22-078a-46e8-806f-db4f89afee77\") " pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.822503 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/0f1e2c22-078a-46e8-806f-db4f89afee77-hm-ports\") pod \"octavia-rsyslog-9nlmv\" (UID: \"0f1e2c22-078a-46e8-806f-db4f89afee77\") " pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.822590 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0f1e2c22-078a-46e8-806f-db4f89afee77-config-data-merged\") pod \"octavia-rsyslog-9nlmv\" (UID: \"0f1e2c22-078a-46e8-806f-db4f89afee77\") " pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.822628 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1e2c22-078a-46e8-806f-db4f89afee77-config-data\") pod \"octavia-rsyslog-9nlmv\" (UID: \"0f1e2c22-078a-46e8-806f-db4f89afee77\") " pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.824519 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/0f1e2c22-078a-46e8-806f-db4f89afee77-config-data-merged\") pod \"octavia-rsyslog-9nlmv\" (UID: \"0f1e2c22-078a-46e8-806f-db4f89afee77\") " pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.825448 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/0f1e2c22-078a-46e8-806f-db4f89afee77-hm-ports\") pod \"octavia-rsyslog-9nlmv\" (UID: \"0f1e2c22-078a-46e8-806f-db4f89afee77\") " pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.830783 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f1e2c22-078a-46e8-806f-db4f89afee77-scripts\") pod \"octavia-rsyslog-9nlmv\" (UID: \"0f1e2c22-078a-46e8-806f-db4f89afee77\") " pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.832383 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1e2c22-078a-46e8-806f-db4f89afee77-config-data\") pod \"octavia-rsyslog-9nlmv\" (UID: \"0f1e2c22-078a-46e8-806f-db4f89afee77\") " pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:14 crc kubenswrapper[4958]: I1008 08:17:14.903859 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:15 crc kubenswrapper[4958]: I1008 08:17:15.254502 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-678599687f-wt76n"] Oct 08 08:17:15 crc kubenswrapper[4958]: I1008 08:17:15.258761 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-678599687f-wt76n" Oct 08 08:17:15 crc kubenswrapper[4958]: I1008 08:17:15.261216 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Oct 08 08:17:15 crc kubenswrapper[4958]: I1008 08:17:15.268936 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-678599687f-wt76n"] Oct 08 08:17:15 crc kubenswrapper[4958]: I1008 08:17:15.336372 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ff98a8f1-d9db-49a8-a16c-e6536f70f86c-httpd-config\") pod \"octavia-image-upload-678599687f-wt76n\" (UID: \"ff98a8f1-d9db-49a8-a16c-e6536f70f86c\") " pod="openstack/octavia-image-upload-678599687f-wt76n" Oct 08 08:17:15 crc kubenswrapper[4958]: I1008 08:17:15.336502 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/ff98a8f1-d9db-49a8-a16c-e6536f70f86c-amphora-image\") pod \"octavia-image-upload-678599687f-wt76n\" (UID: \"ff98a8f1-d9db-49a8-a16c-e6536f70f86c\") " pod="openstack/octavia-image-upload-678599687f-wt76n" Oct 08 08:17:15 crc kubenswrapper[4958]: I1008 08:17:15.436189 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-9nlmv"] Oct 08 08:17:15 crc kubenswrapper[4958]: I1008 08:17:15.438502 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ff98a8f1-d9db-49a8-a16c-e6536f70f86c-httpd-config\") pod \"octavia-image-upload-678599687f-wt76n\" (UID: \"ff98a8f1-d9db-49a8-a16c-e6536f70f86c\") " pod="openstack/octavia-image-upload-678599687f-wt76n" Oct 08 08:17:15 crc kubenswrapper[4958]: I1008 08:17:15.438595 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: 
\"kubernetes.io/empty-dir/ff98a8f1-d9db-49a8-a16c-e6536f70f86c-amphora-image\") pod \"octavia-image-upload-678599687f-wt76n\" (UID: \"ff98a8f1-d9db-49a8-a16c-e6536f70f86c\") " pod="openstack/octavia-image-upload-678599687f-wt76n" Oct 08 08:17:15 crc kubenswrapper[4958]: I1008 08:17:15.439126 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/ff98a8f1-d9db-49a8-a16c-e6536f70f86c-amphora-image\") pod \"octavia-image-upload-678599687f-wt76n\" (UID: \"ff98a8f1-d9db-49a8-a16c-e6536f70f86c\") " pod="openstack/octavia-image-upload-678599687f-wt76n" Oct 08 08:17:15 crc kubenswrapper[4958]: I1008 08:17:15.457348 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ff98a8f1-d9db-49a8-a16c-e6536f70f86c-httpd-config\") pod \"octavia-image-upload-678599687f-wt76n\" (UID: \"ff98a8f1-d9db-49a8-a16c-e6536f70f86c\") " pod="openstack/octavia-image-upload-678599687f-wt76n" Oct 08 08:17:15 crc kubenswrapper[4958]: I1008 08:17:15.581530 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-678599687f-wt76n" Oct 08 08:17:15 crc kubenswrapper[4958]: I1008 08:17:15.786220 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-9nlmv" event={"ID":"0f1e2c22-078a-46e8-806f-db4f89afee77","Type":"ContainerStarted","Data":"8d1a7f15d96af4dda0f57ce6ac7a0f8439fbd46e2ed780aaff33997b5526b3e2"} Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.047596 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-678599687f-wt76n"] Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.757869 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-6c7c5889cc-cf95j"] Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.760247 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.762513 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-public-svc" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.762683 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-internal-svc" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.782341 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6c7c5889cc-cf95j"] Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.813968 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-wt76n" event={"ID":"ff98a8f1-d9db-49a8-a16c-e6536f70f86c","Type":"ContainerStarted","Data":"871f4ac25ef52055bc1ac6e3ad40df62ec347e858ccd1e0e77a11dbaf7e513a5"} Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.881055 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-internal-tls-certs\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.881121 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-public-tls-certs\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.881168 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-config-data\") pod 
\"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.881198 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-scripts\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.881239 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-combined-ca-bundle\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.881291 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-octavia-run\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.881316 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-config-data-merged\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.881336 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-ovndb-tls-certs\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.983091 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-public-tls-certs\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.983396 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-config-data\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.983434 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-scripts\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.983476 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-combined-ca-bundle\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.983529 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-octavia-run\") 
pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.983556 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-config-data-merged\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.983577 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-ovndb-tls-certs\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.983602 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-internal-tls-certs\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.984613 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-config-data-merged\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.984918 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-octavia-run\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: 
\"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.988980 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-internal-tls-certs\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.989127 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-public-tls-certs\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.989639 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-config-data\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.989878 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-scripts\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc kubenswrapper[4958]: I1008 08:17:16.989974 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-ovndb-tls-certs\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:16 crc 
kubenswrapper[4958]: I1008 08:17:16.991591 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ce43a6-91ba-4dd4-8d78-f8f70ff49236-combined-ca-bundle\") pod \"octavia-api-6c7c5889cc-cf95j\" (UID: \"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236\") " pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:17 crc kubenswrapper[4958]: I1008 08:17:17.088463 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:17 crc kubenswrapper[4958]: I1008 08:17:17.532241 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6c7c5889cc-cf95j"] Oct 08 08:17:17 crc kubenswrapper[4958]: W1008 08:17:17.791148 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0ce43a6_91ba_4dd4_8d78_f8f70ff49236.slice/crio-f6ec0f5132919f073eb828be45324f165a21e16b648bb4672f40548f466f97d0 WatchSource:0}: Error finding container f6ec0f5132919f073eb828be45324f165a21e16b648bb4672f40548f466f97d0: Status 404 returned error can't find the container with id f6ec0f5132919f073eb828be45324f165a21e16b648bb4672f40548f466f97d0 Oct 08 08:17:17 crc kubenswrapper[4958]: I1008 08:17:17.831612 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c7c5889cc-cf95j" event={"ID":"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236","Type":"ContainerStarted","Data":"f6ec0f5132919f073eb828be45324f165a21e16b648bb4672f40548f466f97d0"} Oct 08 08:17:17 crc kubenswrapper[4958]: I1008 08:17:17.833277 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-9nlmv" event={"ID":"0f1e2c22-078a-46e8-806f-db4f89afee77","Type":"ContainerStarted","Data":"3b9da1c1a0ae84c3e0e85595f4b8da02ba07239c5b7ad95403703a0e99f503bc"} Oct 08 08:17:18 crc kubenswrapper[4958]: I1008 08:17:18.847204 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="b0ce43a6-91ba-4dd4-8d78-f8f70ff49236" containerID="7483744884cbe9b61418cf43a5d4233d096986af79c957dafae33378c15fa6b0" exitCode=0 Oct 08 08:17:18 crc kubenswrapper[4958]: I1008 08:17:18.847291 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c7c5889cc-cf95j" event={"ID":"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236","Type":"ContainerDied","Data":"7483744884cbe9b61418cf43a5d4233d096986af79c957dafae33378c15fa6b0"} Oct 08 08:17:19 crc kubenswrapper[4958]: I1008 08:17:19.865542 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c7c5889cc-cf95j" event={"ID":"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236","Type":"ContainerStarted","Data":"5055460ced4b88e23b9814859ee3563d54b6d00accef83e64db37f2b8ed94c21"} Oct 08 08:17:19 crc kubenswrapper[4958]: I1008 08:17:19.866015 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c7c5889cc-cf95j" event={"ID":"b0ce43a6-91ba-4dd4-8d78-f8f70ff49236","Type":"ContainerStarted","Data":"1b8f99b27d73b95bc67165592612033792183c4949932a397a58e2a135800019"} Oct 08 08:17:19 crc kubenswrapper[4958]: I1008 08:17:19.866035 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:19 crc kubenswrapper[4958]: I1008 08:17:19.867708 4958 generic.go:334] "Generic (PLEG): container finished" podID="0f1e2c22-078a-46e8-806f-db4f89afee77" containerID="3b9da1c1a0ae84c3e0e85595f4b8da02ba07239c5b7ad95403703a0e99f503bc" exitCode=0 Oct 08 08:17:19 crc kubenswrapper[4958]: I1008 08:17:19.867753 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-9nlmv" event={"ID":"0f1e2c22-078a-46e8-806f-db4f89afee77","Type":"ContainerDied","Data":"3b9da1c1a0ae84c3e0e85595f4b8da02ba07239c5b7ad95403703a0e99f503bc"} Oct 08 08:17:19 crc kubenswrapper[4958]: I1008 08:17:19.890717 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-6c7c5889cc-cf95j" 
podStartSLOduration=3.890698956 podStartE2EDuration="3.890698956s" podCreationTimestamp="2025-10-08 08:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:17:19.883976224 +0000 UTC m=+6183.013668825" watchObservedRunningTime="2025-10-08 08:17:19.890698956 +0000 UTC m=+6183.020391557" Oct 08 08:17:20 crc kubenswrapper[4958]: I1008 08:17:20.876038 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:22 crc kubenswrapper[4958]: I1008 08:17:22.654883 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:17:22 crc kubenswrapper[4958]: I1008 08:17:22.765925 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:17:22 crc kubenswrapper[4958]: I1008 08:17:22.899024 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-9nlmv" event={"ID":"0f1e2c22-078a-46e8-806f-db4f89afee77","Type":"ContainerStarted","Data":"50f2c01d77404cf3b025544f69aa64404b8c4a7db977410dbb9fa743fcf9804b"} Oct 08 08:17:22 crc kubenswrapper[4958]: I1008 08:17:22.899770 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:22 crc kubenswrapper[4958]: I1008 08:17:22.921316 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-9nlmv" podStartSLOduration=2.693190321 podStartE2EDuration="8.921298668s" podCreationTimestamp="2025-10-08 08:17:14 +0000 UTC" firstStartedPulling="2025-10-08 08:17:15.44693688 +0000 UTC m=+6178.576629501" lastFinishedPulling="2025-10-08 08:17:21.675045247 +0000 UTC m=+6184.804737848" observedRunningTime="2025-10-08 08:17:22.914605617 +0000 UTC m=+6186.044298238" watchObservedRunningTime="2025-10-08 08:17:22.921298668 
+0000 UTC m=+6186.050991269" Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.030667 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-wl4s2"] Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.042720 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-wl4s2"] Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.686864 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-fgmk8"] Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.690266 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.692539 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.715730 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-fgmk8"] Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.776031 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-config-data\") pod \"octavia-db-sync-fgmk8\" (UID: \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.776089 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-combined-ca-bundle\") pod \"octavia-db-sync-fgmk8\" (UID: \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.776118 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-scripts\") pod \"octavia-db-sync-fgmk8\" (UID: \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.776155 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-config-data-merged\") pod \"octavia-db-sync-fgmk8\" (UID: \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.877825 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-config-data-merged\") pod \"octavia-db-sync-fgmk8\" (UID: \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.878063 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-config-data\") pod \"octavia-db-sync-fgmk8\" (UID: \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.878084 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-combined-ca-bundle\") pod \"octavia-db-sync-fgmk8\" (UID: \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.878122 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-scripts\") pod \"octavia-db-sync-fgmk8\" (UID: 
\"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.879403 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-config-data-merged\") pod \"octavia-db-sync-fgmk8\" (UID: \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.887788 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-combined-ca-bundle\") pod \"octavia-db-sync-fgmk8\" (UID: \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.888519 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-config-data\") pod \"octavia-db-sync-fgmk8\" (UID: \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:24 crc kubenswrapper[4958]: I1008 08:17:24.890055 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-scripts\") pod \"octavia-db-sync-fgmk8\" (UID: \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:25 crc kubenswrapper[4958]: I1008 08:17:25.019577 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:25 crc kubenswrapper[4958]: I1008 08:17:25.586991 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c378af2-e833-4ade-a1c8-571b544a4ef2" path="/var/lib/kubelet/pods/2c378af2-e833-4ade-a1c8-571b544a4ef2/volumes" Oct 08 08:17:26 crc kubenswrapper[4958]: I1008 08:17:26.576713 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:17:26 crc kubenswrapper[4958]: E1008 08:17:26.577200 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:17:28 crc kubenswrapper[4958]: I1008 08:17:28.966131 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-wt76n" event={"ID":"ff98a8f1-d9db-49a8-a16c-e6536f70f86c","Type":"ContainerStarted","Data":"df97fb9209ee3d1b3ee22825bd6988de71b3155727a07584415ab831d23ec510"} Oct 08 08:17:29 crc kubenswrapper[4958]: I1008 08:17:29.032732 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-fgmk8"] Oct 08 08:17:29 crc kubenswrapper[4958]: I1008 08:17:29.935656 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-9nlmv" Oct 08 08:17:29 crc kubenswrapper[4958]: I1008 08:17:29.981712 4958 generic.go:334] "Generic (PLEG): container finished" podID="a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a" containerID="6e00baa3972dcae121923021a972c4bb324f7738dede3a511a018bb232300b57" exitCode=0 Oct 08 08:17:29 crc kubenswrapper[4958]: I1008 08:17:29.981812 4958 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/octavia-db-sync-fgmk8" event={"ID":"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a","Type":"ContainerDied","Data":"6e00baa3972dcae121923021a972c4bb324f7738dede3a511a018bb232300b57"} Oct 08 08:17:29 crc kubenswrapper[4958]: I1008 08:17:29.981860 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-fgmk8" event={"ID":"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a","Type":"ContainerStarted","Data":"6443a090b5e874d4f333e4b801930240fd10d4aeef2852abffe4ad072f3debc8"} Oct 08 08:17:31 crc kubenswrapper[4958]: I1008 08:17:31.053750 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-fgmk8" event={"ID":"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a","Type":"ContainerStarted","Data":"de545f2c479786b3f9ba809f4d1b22dd187dfae5a251fbdfa2cd26baca57991f"} Oct 08 08:17:31 crc kubenswrapper[4958]: I1008 08:17:31.084811 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-fgmk8" podStartSLOduration=7.084796132 podStartE2EDuration="7.084796132s" podCreationTimestamp="2025-10-08 08:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:17:31.077312559 +0000 UTC m=+6194.207005160" watchObservedRunningTime="2025-10-08 08:17:31.084796132 +0000 UTC m=+6194.214488733" Oct 08 08:17:32 crc kubenswrapper[4958]: I1008 08:17:32.064286 4958 generic.go:334] "Generic (PLEG): container finished" podID="ff98a8f1-d9db-49a8-a16c-e6536f70f86c" containerID="df97fb9209ee3d1b3ee22825bd6988de71b3155727a07584415ab831d23ec510" exitCode=0 Oct 08 08:17:32 crc kubenswrapper[4958]: I1008 08:17:32.064366 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-wt76n" event={"ID":"ff98a8f1-d9db-49a8-a16c-e6536f70f86c","Type":"ContainerDied","Data":"df97fb9209ee3d1b3ee22825bd6988de71b3155727a07584415ab831d23ec510"} Oct 08 08:17:33 crc kubenswrapper[4958]: 
I1008 08:17:33.022331 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-52fd-account-create-h7bcj"] Oct 08 08:17:33 crc kubenswrapper[4958]: I1008 08:17:33.035017 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-52fd-account-create-h7bcj"] Oct 08 08:17:33 crc kubenswrapper[4958]: I1008 08:17:33.085071 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-wt76n" event={"ID":"ff98a8f1-d9db-49a8-a16c-e6536f70f86c","Type":"ContainerStarted","Data":"0d430cf8777e66393e7b96a4be4ff756122e463647dff7b79c9ac5c4eec6b270"} Oct 08 08:17:33 crc kubenswrapper[4958]: I1008 08:17:33.104831 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-678599687f-wt76n" podStartSLOduration=5.526575817 podStartE2EDuration="18.104814909s" podCreationTimestamp="2025-10-08 08:17:15 +0000 UTC" firstStartedPulling="2025-10-08 08:17:16.067136066 +0000 UTC m=+6179.196828677" lastFinishedPulling="2025-10-08 08:17:28.645375168 +0000 UTC m=+6191.775067769" observedRunningTime="2025-10-08 08:17:33.102842066 +0000 UTC m=+6196.232534707" watchObservedRunningTime="2025-10-08 08:17:33.104814909 +0000 UTC m=+6196.234507510" Oct 08 08:17:33 crc kubenswrapper[4958]: I1008 08:17:33.587832 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="486d7734-8fa0-4291-9ceb-c70f95467f5e" path="/var/lib/kubelet/pods/486d7734-8fa0-4291-9ceb-c70f95467f5e/volumes" Oct 08 08:17:34 crc kubenswrapper[4958]: I1008 08:17:34.096600 4958 generic.go:334] "Generic (PLEG): container finished" podID="a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a" containerID="de545f2c479786b3f9ba809f4d1b22dd187dfae5a251fbdfa2cd26baca57991f" exitCode=0 Oct 08 08:17:34 crc kubenswrapper[4958]: I1008 08:17:34.096676 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-fgmk8" 
event={"ID":"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a","Type":"ContainerDied","Data":"de545f2c479786b3f9ba809f4d1b22dd187dfae5a251fbdfa2cd26baca57991f"} Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.542002 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.620327 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-config-data-merged\") pod \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\" (UID: \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.620447 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-config-data\") pod \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\" (UID: \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.620483 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-combined-ca-bundle\") pod \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\" (UID: \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.620698 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-scripts\") pod \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\" (UID: \"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a\") " Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.640551 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-scripts" (OuterVolumeSpecName: "scripts") pod 
"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a" (UID: "a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.640653 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-config-data" (OuterVolumeSpecName: "config-data") pod "a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a" (UID: "a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.648444 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a" (UID: "a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.663878 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a" (UID: "a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.707063 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4qjw6"] Oct 08 08:17:35 crc kubenswrapper[4958]: E1008 08:17:35.707441 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a" containerName="init" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.707457 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a" containerName="init" Oct 08 08:17:35 crc kubenswrapper[4958]: E1008 08:17:35.707470 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a" containerName="octavia-db-sync" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.707476 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a" containerName="octavia-db-sync" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.707655 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a" containerName="octavia-db-sync" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.708956 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.719824 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4qjw6"] Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.722626 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.722652 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-config-data-merged\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.722661 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.722670 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.824063 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4b0f060-a5e3-4634-a9a8-a81635d0579f-utilities\") pod \"community-operators-4qjw6\" (UID: \"e4b0f060-a5e3-4634-a9a8-a81635d0579f\") " pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.824478 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57nlk\" (UniqueName: 
\"kubernetes.io/projected/e4b0f060-a5e3-4634-a9a8-a81635d0579f-kube-api-access-57nlk\") pod \"community-operators-4qjw6\" (UID: \"e4b0f060-a5e3-4634-a9a8-a81635d0579f\") " pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.824649 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4b0f060-a5e3-4634-a9a8-a81635d0579f-catalog-content\") pod \"community-operators-4qjw6\" (UID: \"e4b0f060-a5e3-4634-a9a8-a81635d0579f\") " pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.926872 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57nlk\" (UniqueName: \"kubernetes.io/projected/e4b0f060-a5e3-4634-a9a8-a81635d0579f-kube-api-access-57nlk\") pod \"community-operators-4qjw6\" (UID: \"e4b0f060-a5e3-4634-a9a8-a81635d0579f\") " pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.926978 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4b0f060-a5e3-4634-a9a8-a81635d0579f-catalog-content\") pod \"community-operators-4qjw6\" (UID: \"e4b0f060-a5e3-4634-a9a8-a81635d0579f\") " pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.927064 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4b0f060-a5e3-4634-a9a8-a81635d0579f-utilities\") pod \"community-operators-4qjw6\" (UID: \"e4b0f060-a5e3-4634-a9a8-a81635d0579f\") " pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.927610 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e4b0f060-a5e3-4634-a9a8-a81635d0579f-utilities\") pod \"community-operators-4qjw6\" (UID: \"e4b0f060-a5e3-4634-a9a8-a81635d0579f\") " pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.928124 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4b0f060-a5e3-4634-a9a8-a81635d0579f-catalog-content\") pod \"community-operators-4qjw6\" (UID: \"e4b0f060-a5e3-4634-a9a8-a81635d0579f\") " pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:35 crc kubenswrapper[4958]: I1008 08:17:35.944492 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57nlk\" (UniqueName: \"kubernetes.io/projected/e4b0f060-a5e3-4634-a9a8-a81635d0579f-kube-api-access-57nlk\") pod \"community-operators-4qjw6\" (UID: \"e4b0f060-a5e3-4634-a9a8-a81635d0579f\") " pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:36 crc kubenswrapper[4958]: I1008 08:17:36.089364 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:36 crc kubenswrapper[4958]: I1008 08:17:36.140676 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-fgmk8" event={"ID":"a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a","Type":"ContainerDied","Data":"6443a090b5e874d4f333e4b801930240fd10d4aeef2852abffe4ad072f3debc8"} Oct 08 08:17:36 crc kubenswrapper[4958]: I1008 08:17:36.140717 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-fgmk8" Oct 08 08:17:36 crc kubenswrapper[4958]: I1008 08:17:36.140723 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6443a090b5e874d4f333e4b801930240fd10d4aeef2852abffe4ad072f3debc8" Oct 08 08:17:36 crc kubenswrapper[4958]: I1008 08:17:36.499777 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:36 crc kubenswrapper[4958]: I1008 08:17:36.532264 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6c7c5889cc-cf95j" Oct 08 08:17:36 crc kubenswrapper[4958]: I1008 08:17:36.613374 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4qjw6"] Oct 08 08:17:36 crc kubenswrapper[4958]: I1008 08:17:36.625173 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-6547b87d77-bg7gj"] Oct 08 08:17:36 crc kubenswrapper[4958]: I1008 08:17:36.625418 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-6547b87d77-bg7gj" podUID="f30cd577-4f62-4f09-a3e0-d46c097b6b59" containerName="octavia-api" containerID="cri-o://fc06ccae025db8a190d5e111f5f046f487b0a7ef34dba346ac1d1eca4e02edf4" gracePeriod=30 Oct 08 08:17:36 crc kubenswrapper[4958]: I1008 08:17:36.625619 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-6547b87d77-bg7gj" podUID="f30cd577-4f62-4f09-a3e0-d46c097b6b59" containerName="octavia-api-provider-agent" containerID="cri-o://1df7135646bfc157a27e7356c3ff4decdd5c0f9a503eea8f3f8dd3d726695fda" gracePeriod=30 Oct 08 08:17:37 crc kubenswrapper[4958]: I1008 08:17:37.155447 4958 generic.go:334] "Generic (PLEG): container finished" podID="e4b0f060-a5e3-4634-a9a8-a81635d0579f" containerID="07fce00142d6436cd35f343676a8cd512c5f88425ec9966eb55c51a8e8f57244" exitCode=0 Oct 08 08:17:37 crc 
kubenswrapper[4958]: I1008 08:17:37.158058 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qjw6" event={"ID":"e4b0f060-a5e3-4634-a9a8-a81635d0579f","Type":"ContainerDied","Data":"07fce00142d6436cd35f343676a8cd512c5f88425ec9966eb55c51a8e8f57244"} Oct 08 08:17:37 crc kubenswrapper[4958]: I1008 08:17:37.158103 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qjw6" event={"ID":"e4b0f060-a5e3-4634-a9a8-a81635d0579f","Type":"ContainerStarted","Data":"5c2f79e056165b7975fac3234e7a117736f8c12f38c54e6e00eb3f081ed693d2"} Oct 08 08:17:37 crc kubenswrapper[4958]: I1008 08:17:37.583563 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:17:37 crc kubenswrapper[4958]: E1008 08:17:37.583826 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:17:38 crc kubenswrapper[4958]: I1008 08:17:38.168465 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qjw6" event={"ID":"e4b0f060-a5e3-4634-a9a8-a81635d0579f","Type":"ContainerStarted","Data":"a239515479bdd67b939f1e0ec899a8a57aaa9cd27b9f4d35fb253b1c41ef88a2"} Oct 08 08:17:38 crc kubenswrapper[4958]: I1008 08:17:38.171840 4958 generic.go:334] "Generic (PLEG): container finished" podID="f30cd577-4f62-4f09-a3e0-d46c097b6b59" containerID="1df7135646bfc157a27e7356c3ff4decdd5c0f9a503eea8f3f8dd3d726695fda" exitCode=0 Oct 08 08:17:38 crc kubenswrapper[4958]: I1008 08:17:38.171881 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/octavia-api-6547b87d77-bg7gj" event={"ID":"f30cd577-4f62-4f09-a3e0-d46c097b6b59","Type":"ContainerDied","Data":"1df7135646bfc157a27e7356c3ff4decdd5c0f9a503eea8f3f8dd3d726695fda"} Oct 08 08:17:39 crc kubenswrapper[4958]: I1008 08:17:39.181940 4958 generic.go:334] "Generic (PLEG): container finished" podID="e4b0f060-a5e3-4634-a9a8-a81635d0579f" containerID="a239515479bdd67b939f1e0ec899a8a57aaa9cd27b9f4d35fb253b1c41ef88a2" exitCode=0 Oct 08 08:17:39 crc kubenswrapper[4958]: I1008 08:17:39.182161 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qjw6" event={"ID":"e4b0f060-a5e3-4634-a9a8-a81635d0579f","Type":"ContainerDied","Data":"a239515479bdd67b939f1e0ec899a8a57aaa9cd27b9f4d35fb253b1c41ef88a2"} Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.051846 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-g6jkj"] Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.067411 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-g6jkj"] Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.198315 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qjw6" event={"ID":"e4b0f060-a5e3-4634-a9a8-a81635d0579f","Type":"ContainerStarted","Data":"ce51087ec1596d3c34854e85180eec2d0e461a9f024f5ee0467e1849d9015c68"} Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.202257 4958 generic.go:334] "Generic (PLEG): container finished" podID="f30cd577-4f62-4f09-a3e0-d46c097b6b59" containerID="fc06ccae025db8a190d5e111f5f046f487b0a7ef34dba346ac1d1eca4e02edf4" exitCode=0 Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.202297 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6547b87d77-bg7gj" event={"ID":"f30cd577-4f62-4f09-a3e0-d46c097b6b59","Type":"ContainerDied","Data":"fc06ccae025db8a190d5e111f5f046f487b0a7ef34dba346ac1d1eca4e02edf4"} Oct 08 08:17:40 crc 
kubenswrapper[4958]: I1008 08:17:40.352702 4958 scope.go:117] "RemoveContainer" containerID="d7ee794a1cf80d22f4614ea9d3dab28733e1c80299641c3ed549f02039691c15" Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.387162 4958 scope.go:117] "RemoveContainer" containerID="1b902b7874c3e6f551ba898c38c294c446cc7bd5e04191a5fc50d9182e183e74" Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.476992 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.507683 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4qjw6" podStartSLOduration=2.8864187599999998 podStartE2EDuration="5.507661579s" podCreationTimestamp="2025-10-08 08:17:35 +0000 UTC" firstStartedPulling="2025-10-08 08:17:37.159339678 +0000 UTC m=+6200.289032319" lastFinishedPulling="2025-10-08 08:17:39.780582527 +0000 UTC m=+6202.910275138" observedRunningTime="2025-10-08 08:17:40.221017422 +0000 UTC m=+6203.350710033" watchObservedRunningTime="2025-10-08 08:17:40.507661579 +0000 UTC m=+6203.637354180" Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.518591 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-ovndb-tls-certs\") pod \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.518667 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f30cd577-4f62-4f09-a3e0-d46c097b6b59-config-data-merged\") pod \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.518725 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-config-data\") pod \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.518905 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-combined-ca-bundle\") pod \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.518958 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/f30cd577-4f62-4f09-a3e0-d46c097b6b59-octavia-run\") pod \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.519025 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-scripts\") pod \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\" (UID: \"f30cd577-4f62-4f09-a3e0-d46c097b6b59\") " Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.529063 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-scripts" (OuterVolumeSpecName: "scripts") pod "f30cd577-4f62-4f09-a3e0-d46c097b6b59" (UID: "f30cd577-4f62-4f09-a3e0-d46c097b6b59"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.529085 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-config-data" (OuterVolumeSpecName: "config-data") pod "f30cd577-4f62-4f09-a3e0-d46c097b6b59" (UID: "f30cd577-4f62-4f09-a3e0-d46c097b6b59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.538998 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f30cd577-4f62-4f09-a3e0-d46c097b6b59-octavia-run" (OuterVolumeSpecName: "octavia-run") pod "f30cd577-4f62-4f09-a3e0-d46c097b6b59" (UID: "f30cd577-4f62-4f09-a3e0-d46c097b6b59"). InnerVolumeSpecName "octavia-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.572582 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f30cd577-4f62-4f09-a3e0-d46c097b6b59-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "f30cd577-4f62-4f09-a3e0-d46c097b6b59" (UID: "f30cd577-4f62-4f09-a3e0-d46c097b6b59"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.584431 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f30cd577-4f62-4f09-a3e0-d46c097b6b59" (UID: "f30cd577-4f62-4f09-a3e0-d46c097b6b59"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.621201 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.621229 4958 reconciler_common.go:293] "Volume detached for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/f30cd577-4f62-4f09-a3e0-d46c097b6b59-octavia-run\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.621238 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.621248 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f30cd577-4f62-4f09-a3e0-d46c097b6b59-config-data-merged\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.621255 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.678194 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f30cd577-4f62-4f09-a3e0-d46c097b6b59" (UID: "f30cd577-4f62-4f09-a3e0-d46c097b6b59"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:17:40 crc kubenswrapper[4958]: I1008 08:17:40.723149 4958 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30cd577-4f62-4f09-a3e0-d46c097b6b59-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:41 crc kubenswrapper[4958]: I1008 08:17:41.212177 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6547b87d77-bg7gj" event={"ID":"f30cd577-4f62-4f09-a3e0-d46c097b6b59","Type":"ContainerDied","Data":"8acd4b72a0f65249d6a4c26f3bdb8619be8056b21f40990f03185dbdc3992d19"} Oct 08 08:17:41 crc kubenswrapper[4958]: I1008 08:17:41.212239 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-6547b87d77-bg7gj" Oct 08 08:17:41 crc kubenswrapper[4958]: I1008 08:17:41.213556 4958 scope.go:117] "RemoveContainer" containerID="1df7135646bfc157a27e7356c3ff4decdd5c0f9a503eea8f3f8dd3d726695fda" Oct 08 08:17:41 crc kubenswrapper[4958]: I1008 08:17:41.236723 4958 scope.go:117] "RemoveContainer" containerID="fc06ccae025db8a190d5e111f5f046f487b0a7ef34dba346ac1d1eca4e02edf4" Oct 08 08:17:41 crc kubenswrapper[4958]: I1008 08:17:41.254989 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-6547b87d77-bg7gj"] Oct 08 08:17:41 crc kubenswrapper[4958]: I1008 08:17:41.262467 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-api-6547b87d77-bg7gj"] Oct 08 08:17:41 crc kubenswrapper[4958]: I1008 08:17:41.266984 4958 scope.go:117] "RemoveContainer" containerID="5fcd6d611cced66ccf698425de4dbd960a21d9d180c4cb801806ee2703456a31" Oct 08 08:17:41 crc kubenswrapper[4958]: I1008 08:17:41.594071 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="024fc63d-9b6c-4340-bdff-a77564c3e311" path="/var/lib/kubelet/pods/024fc63d-9b6c-4340-bdff-a77564c3e311/volumes" Oct 08 08:17:41 crc kubenswrapper[4958]: I1008 08:17:41.595144 
4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f30cd577-4f62-4f09-a3e0-d46c097b6b59" path="/var/lib/kubelet/pods/f30cd577-4f62-4f09-a3e0-d46c097b6b59/volumes" Oct 08 08:17:46 crc kubenswrapper[4958]: I1008 08:17:46.090492 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:46 crc kubenswrapper[4958]: I1008 08:17:46.091075 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:46 crc kubenswrapper[4958]: I1008 08:17:46.146997 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:46 crc kubenswrapper[4958]: I1008 08:17:46.339903 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:48 crc kubenswrapper[4958]: I1008 08:17:48.509182 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4qjw6"] Oct 08 08:17:48 crc kubenswrapper[4958]: I1008 08:17:48.509774 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4qjw6" podUID="e4b0f060-a5e3-4634-a9a8-a81635d0579f" containerName="registry-server" containerID="cri-o://ce51087ec1596d3c34854e85180eec2d0e461a9f024f5ee0467e1849d9015c68" gracePeriod=2 Oct 08 08:17:49 crc kubenswrapper[4958]: I1008 08:17:49.302211 4958 generic.go:334] "Generic (PLEG): container finished" podID="e4b0f060-a5e3-4634-a9a8-a81635d0579f" containerID="ce51087ec1596d3c34854e85180eec2d0e461a9f024f5ee0467e1849d9015c68" exitCode=0 Oct 08 08:17:49 crc kubenswrapper[4958]: I1008 08:17:49.302276 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qjw6" 
event={"ID":"e4b0f060-a5e3-4634-a9a8-a81635d0579f","Type":"ContainerDied","Data":"ce51087ec1596d3c34854e85180eec2d0e461a9f024f5ee0467e1849d9015c68"} Oct 08 08:17:49 crc kubenswrapper[4958]: I1008 08:17:49.644362 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:49 crc kubenswrapper[4958]: I1008 08:17:49.697523 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4b0f060-a5e3-4634-a9a8-a81635d0579f-utilities\") pod \"e4b0f060-a5e3-4634-a9a8-a81635d0579f\" (UID: \"e4b0f060-a5e3-4634-a9a8-a81635d0579f\") " Oct 08 08:17:49 crc kubenswrapper[4958]: I1008 08:17:49.697596 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57nlk\" (UniqueName: \"kubernetes.io/projected/e4b0f060-a5e3-4634-a9a8-a81635d0579f-kube-api-access-57nlk\") pod \"e4b0f060-a5e3-4634-a9a8-a81635d0579f\" (UID: \"e4b0f060-a5e3-4634-a9a8-a81635d0579f\") " Oct 08 08:17:49 crc kubenswrapper[4958]: I1008 08:17:49.697684 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4b0f060-a5e3-4634-a9a8-a81635d0579f-catalog-content\") pod \"e4b0f060-a5e3-4634-a9a8-a81635d0579f\" (UID: \"e4b0f060-a5e3-4634-a9a8-a81635d0579f\") " Oct 08 08:17:49 crc kubenswrapper[4958]: I1008 08:17:49.698823 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4b0f060-a5e3-4634-a9a8-a81635d0579f-utilities" (OuterVolumeSpecName: "utilities") pod "e4b0f060-a5e3-4634-a9a8-a81635d0579f" (UID: "e4b0f060-a5e3-4634-a9a8-a81635d0579f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:17:49 crc kubenswrapper[4958]: I1008 08:17:49.712529 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4b0f060-a5e3-4634-a9a8-a81635d0579f-kube-api-access-57nlk" (OuterVolumeSpecName: "kube-api-access-57nlk") pod "e4b0f060-a5e3-4634-a9a8-a81635d0579f" (UID: "e4b0f060-a5e3-4634-a9a8-a81635d0579f"). InnerVolumeSpecName "kube-api-access-57nlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:17:49 crc kubenswrapper[4958]: I1008 08:17:49.747130 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4b0f060-a5e3-4634-a9a8-a81635d0579f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4b0f060-a5e3-4634-a9a8-a81635d0579f" (UID: "e4b0f060-a5e3-4634-a9a8-a81635d0579f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:17:49 crc kubenswrapper[4958]: I1008 08:17:49.803871 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4b0f060-a5e3-4634-a9a8-a81635d0579f-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:49 crc kubenswrapper[4958]: I1008 08:17:49.803979 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57nlk\" (UniqueName: \"kubernetes.io/projected/e4b0f060-a5e3-4634-a9a8-a81635d0579f-kube-api-access-57nlk\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:49 crc kubenswrapper[4958]: I1008 08:17:49.804009 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4b0f060-a5e3-4634-a9a8-a81635d0579f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:17:50 crc kubenswrapper[4958]: I1008 08:17:50.341402 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qjw6" 
event={"ID":"e4b0f060-a5e3-4634-a9a8-a81635d0579f","Type":"ContainerDied","Data":"5c2f79e056165b7975fac3234e7a117736f8c12f38c54e6e00eb3f081ed693d2"} Oct 08 08:17:50 crc kubenswrapper[4958]: I1008 08:17:50.341486 4958 scope.go:117] "RemoveContainer" containerID="ce51087ec1596d3c34854e85180eec2d0e461a9f024f5ee0467e1849d9015c68" Oct 08 08:17:50 crc kubenswrapper[4958]: I1008 08:17:50.341669 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4qjw6" Oct 08 08:17:50 crc kubenswrapper[4958]: I1008 08:17:50.375392 4958 scope.go:117] "RemoveContainer" containerID="a239515479bdd67b939f1e0ec899a8a57aaa9cd27b9f4d35fb253b1c41ef88a2" Oct 08 08:17:50 crc kubenswrapper[4958]: I1008 08:17:50.407899 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4qjw6"] Oct 08 08:17:50 crc kubenswrapper[4958]: I1008 08:17:50.422255 4958 scope.go:117] "RemoveContainer" containerID="07fce00142d6436cd35f343676a8cd512c5f88425ec9966eb55c51a8e8f57244" Oct 08 08:17:50 crc kubenswrapper[4958]: I1008 08:17:50.426541 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4qjw6"] Oct 08 08:17:51 crc kubenswrapper[4958]: I1008 08:17:51.577335 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:17:51 crc kubenswrapper[4958]: E1008 08:17:51.578052 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:17:51 crc kubenswrapper[4958]: I1008 08:17:51.595243 4958 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="e4b0f060-a5e3-4634-a9a8-a81635d0579f" path="/var/lib/kubelet/pods/e4b0f060-a5e3-4634-a9a8-a81635d0579f/volumes"
Oct 08 08:18:02 crc kubenswrapper[4958]: I1008 08:18:02.577828 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f"
Oct 08 08:18:02 crc kubenswrapper[4958]: E1008 08:18:02.579223 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 08:18:04 crc kubenswrapper[4958]: I1008 08:18:04.667918 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-678599687f-wt76n"]
Oct 08 08:18:04 crc kubenswrapper[4958]: I1008 08:18:04.668527 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-678599687f-wt76n" podUID="ff98a8f1-d9db-49a8-a16c-e6536f70f86c" containerName="octavia-amphora-httpd" containerID="cri-o://0d430cf8777e66393e7b96a4be4ff756122e463647dff7b79c9ac5c4eec6b270" gracePeriod=30
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.467545 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-678599687f-wt76n"
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.549631 4958 generic.go:334] "Generic (PLEG): container finished" podID="ff98a8f1-d9db-49a8-a16c-e6536f70f86c" containerID="0d430cf8777e66393e7b96a4be4ff756122e463647dff7b79c9ac5c4eec6b270" exitCode=0
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.549855 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-wt76n" event={"ID":"ff98a8f1-d9db-49a8-a16c-e6536f70f86c","Type":"ContainerDied","Data":"0d430cf8777e66393e7b96a4be4ff756122e463647dff7b79c9ac5c4eec6b270"}
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.549882 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-wt76n" event={"ID":"ff98a8f1-d9db-49a8-a16c-e6536f70f86c","Type":"ContainerDied","Data":"871f4ac25ef52055bc1ac6e3ad40df62ec347e858ccd1e0e77a11dbaf7e513a5"}
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.549900 4958 scope.go:117] "RemoveContainer" containerID="0d430cf8777e66393e7b96a4be4ff756122e463647dff7b79c9ac5c4eec6b270"
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.549895 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-678599687f-wt76n"
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.577892 4958 scope.go:117] "RemoveContainer" containerID="df97fb9209ee3d1b3ee22825bd6988de71b3155727a07584415ab831d23ec510"
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.593572 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/ff98a8f1-d9db-49a8-a16c-e6536f70f86c-amphora-image\") pod \"ff98a8f1-d9db-49a8-a16c-e6536f70f86c\" (UID: \"ff98a8f1-d9db-49a8-a16c-e6536f70f86c\") "
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.593662 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ff98a8f1-d9db-49a8-a16c-e6536f70f86c-httpd-config\") pod \"ff98a8f1-d9db-49a8-a16c-e6536f70f86c\" (UID: \"ff98a8f1-d9db-49a8-a16c-e6536f70f86c\") "
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.615669 4958 scope.go:117] "RemoveContainer" containerID="0d430cf8777e66393e7b96a4be4ff756122e463647dff7b79c9ac5c4eec6b270"
Oct 08 08:18:05 crc kubenswrapper[4958]: E1008 08:18:05.617290 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d430cf8777e66393e7b96a4be4ff756122e463647dff7b79c9ac5c4eec6b270\": container with ID starting with 0d430cf8777e66393e7b96a4be4ff756122e463647dff7b79c9ac5c4eec6b270 not found: ID does not exist" containerID="0d430cf8777e66393e7b96a4be4ff756122e463647dff7b79c9ac5c4eec6b270"
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.617336 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d430cf8777e66393e7b96a4be4ff756122e463647dff7b79c9ac5c4eec6b270"} err="failed to get container status \"0d430cf8777e66393e7b96a4be4ff756122e463647dff7b79c9ac5c4eec6b270\": rpc error: code = NotFound desc = could not find container \"0d430cf8777e66393e7b96a4be4ff756122e463647dff7b79c9ac5c4eec6b270\": container with ID starting with 0d430cf8777e66393e7b96a4be4ff756122e463647dff7b79c9ac5c4eec6b270 not found: ID does not exist"
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.617360 4958 scope.go:117] "RemoveContainer" containerID="df97fb9209ee3d1b3ee22825bd6988de71b3155727a07584415ab831d23ec510"
Oct 08 08:18:05 crc kubenswrapper[4958]: E1008 08:18:05.618204 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df97fb9209ee3d1b3ee22825bd6988de71b3155727a07584415ab831d23ec510\": container with ID starting with df97fb9209ee3d1b3ee22825bd6988de71b3155727a07584415ab831d23ec510 not found: ID does not exist" containerID="df97fb9209ee3d1b3ee22825bd6988de71b3155727a07584415ab831d23ec510"
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.618352 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df97fb9209ee3d1b3ee22825bd6988de71b3155727a07584415ab831d23ec510"} err="failed to get container status \"df97fb9209ee3d1b3ee22825bd6988de71b3155727a07584415ab831d23ec510\": rpc error: code = NotFound desc = could not find container \"df97fb9209ee3d1b3ee22825bd6988de71b3155727a07584415ab831d23ec510\": container with ID starting with df97fb9209ee3d1b3ee22825bd6988de71b3155727a07584415ab831d23ec510 not found: ID does not exist"
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.624773 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff98a8f1-d9db-49a8-a16c-e6536f70f86c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ff98a8f1-d9db-49a8-a16c-e6536f70f86c" (UID: "ff98a8f1-d9db-49a8-a16c-e6536f70f86c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.687445 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff98a8f1-d9db-49a8-a16c-e6536f70f86c-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "ff98a8f1-d9db-49a8-a16c-e6536f70f86c" (UID: "ff98a8f1-d9db-49a8-a16c-e6536f70f86c"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.696562 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ff98a8f1-d9db-49a8-a16c-e6536f70f86c-httpd-config\") on node \"crc\" DevicePath \"\""
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.696597 4958 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/ff98a8f1-d9db-49a8-a16c-e6536f70f86c-amphora-image\") on node \"crc\" DevicePath \"\""
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.891732 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-678599687f-wt76n"]
Oct 08 08:18:05 crc kubenswrapper[4958]: I1008 08:18:05.906199 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-678599687f-wt76n"]
Oct 08 08:18:07 crc kubenswrapper[4958]: I1008 08:18:07.604234 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff98a8f1-d9db-49a8-a16c-e6536f70f86c" path="/var/lib/kubelet/pods/ff98a8f1-d9db-49a8-a16c-e6536f70f86c/volumes"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.043261 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-hpm46"]
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.054559 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-hpm46"]
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.309456 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-678599687f-7w6sz"]
Oct 08 08:18:11 crc kubenswrapper[4958]: E1008 08:18:11.310588 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff98a8f1-d9db-49a8-a16c-e6536f70f86c" containerName="octavia-amphora-httpd"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.310685 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff98a8f1-d9db-49a8-a16c-e6536f70f86c" containerName="octavia-amphora-httpd"
Oct 08 08:18:11 crc kubenswrapper[4958]: E1008 08:18:11.310755 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b0f060-a5e3-4634-a9a8-a81635d0579f" containerName="registry-server"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.310811 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b0f060-a5e3-4634-a9a8-a81635d0579f" containerName="registry-server"
Oct 08 08:18:11 crc kubenswrapper[4958]: E1008 08:18:11.310873 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff98a8f1-d9db-49a8-a16c-e6536f70f86c" containerName="init"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.310932 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff98a8f1-d9db-49a8-a16c-e6536f70f86c" containerName="init"
Oct 08 08:18:11 crc kubenswrapper[4958]: E1008 08:18:11.311046 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f30cd577-4f62-4f09-a3e0-d46c097b6b59" containerName="octavia-api"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.311116 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f30cd577-4f62-4f09-a3e0-d46c097b6b59" containerName="octavia-api"
Oct 08 08:18:11 crc kubenswrapper[4958]: E1008 08:18:11.311215 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f30cd577-4f62-4f09-a3e0-d46c097b6b59" containerName="octavia-api-provider-agent"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.311286 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f30cd577-4f62-4f09-a3e0-d46c097b6b59" containerName="octavia-api-provider-agent"
Oct 08 08:18:11 crc kubenswrapper[4958]: E1008 08:18:11.311356 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f30cd577-4f62-4f09-a3e0-d46c097b6b59" containerName="init"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.311409 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f30cd577-4f62-4f09-a3e0-d46c097b6b59" containerName="init"
Oct 08 08:18:11 crc kubenswrapper[4958]: E1008 08:18:11.311483 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b0f060-a5e3-4634-a9a8-a81635d0579f" containerName="extract-utilities"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.311535 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b0f060-a5e3-4634-a9a8-a81635d0579f" containerName="extract-utilities"
Oct 08 08:18:11 crc kubenswrapper[4958]: E1008 08:18:11.311608 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b0f060-a5e3-4634-a9a8-a81635d0579f" containerName="extract-content"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.311664 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b0f060-a5e3-4634-a9a8-a81635d0579f" containerName="extract-content"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.311923 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f30cd577-4f62-4f09-a3e0-d46c097b6b59" containerName="octavia-api-provider-agent"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.312097 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4b0f060-a5e3-4634-a9a8-a81635d0579f" containerName="registry-server"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.312179 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f30cd577-4f62-4f09-a3e0-d46c097b6b59" containerName="octavia-api"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.312254 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff98a8f1-d9db-49a8-a16c-e6536f70f86c" containerName="octavia-amphora-httpd"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.313380 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-678599687f-7w6sz"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.318800 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.322716 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/efb7eef5-76f6-43d9-946f-d0e6091ed0da-amphora-image\") pod \"octavia-image-upload-678599687f-7w6sz\" (UID: \"efb7eef5-76f6-43d9-946f-d0e6091ed0da\") " pod="openstack/octavia-image-upload-678599687f-7w6sz"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.322803 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/efb7eef5-76f6-43d9-946f-d0e6091ed0da-httpd-config\") pod \"octavia-image-upload-678599687f-7w6sz\" (UID: \"efb7eef5-76f6-43d9-946f-d0e6091ed0da\") " pod="openstack/octavia-image-upload-678599687f-7w6sz"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.328878 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-678599687f-7w6sz"]
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.424113 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/efb7eef5-76f6-43d9-946f-d0e6091ed0da-httpd-config\") pod \"octavia-image-upload-678599687f-7w6sz\" (UID: \"efb7eef5-76f6-43d9-946f-d0e6091ed0da\") " pod="openstack/octavia-image-upload-678599687f-7w6sz"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.424309 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/efb7eef5-76f6-43d9-946f-d0e6091ed0da-amphora-image\") pod \"octavia-image-upload-678599687f-7w6sz\" (UID: \"efb7eef5-76f6-43d9-946f-d0e6091ed0da\") " pod="openstack/octavia-image-upload-678599687f-7w6sz"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.424941 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/efb7eef5-76f6-43d9-946f-d0e6091ed0da-amphora-image\") pod \"octavia-image-upload-678599687f-7w6sz\" (UID: \"efb7eef5-76f6-43d9-946f-d0e6091ed0da\") " pod="openstack/octavia-image-upload-678599687f-7w6sz"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.441304 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/efb7eef5-76f6-43d9-946f-d0e6091ed0da-httpd-config\") pod \"octavia-image-upload-678599687f-7w6sz\" (UID: \"efb7eef5-76f6-43d9-946f-d0e6091ed0da\") " pod="openstack/octavia-image-upload-678599687f-7w6sz"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.600454 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cf7b6cf-6c29-48f9-b221-572a5dfd3411" path="/var/lib/kubelet/pods/8cf7b6cf-6c29-48f9-b221-572a5dfd3411/volumes"
Oct 08 08:18:11 crc kubenswrapper[4958]: I1008 08:18:11.651823 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-678599687f-7w6sz"
Oct 08 08:18:12 crc kubenswrapper[4958]: I1008 08:18:12.184586 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-678599687f-7w6sz"]
Oct 08 08:18:12 crc kubenswrapper[4958]: W1008 08:18:12.193129 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefb7eef5_76f6_43d9_946f_d0e6091ed0da.slice/crio-d6202004970f1f86b2096e37c46f1e687dbf34e810576372aa4ee0d84261016d WatchSource:0}: Error finding container d6202004970f1f86b2096e37c46f1e687dbf34e810576372aa4ee0d84261016d: Status 404 returned error can't find the container with id d6202004970f1f86b2096e37c46f1e687dbf34e810576372aa4ee0d84261016d
Oct 08 08:18:12 crc kubenswrapper[4958]: I1008 08:18:12.639694 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-7w6sz" event={"ID":"efb7eef5-76f6-43d9-946f-d0e6091ed0da","Type":"ContainerStarted","Data":"d6202004970f1f86b2096e37c46f1e687dbf34e810576372aa4ee0d84261016d"}
Oct 08 08:18:13 crc kubenswrapper[4958]: I1008 08:18:13.654862 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-7w6sz" event={"ID":"efb7eef5-76f6-43d9-946f-d0e6091ed0da","Type":"ContainerStarted","Data":"e59c05f41e9d77be845876022c276f7dfc1eaf21f671de7625154cca6e9983d4"}
Oct 08 08:18:14 crc kubenswrapper[4958]: I1008 08:18:14.671523 4958 generic.go:334] "Generic (PLEG): container finished" podID="efb7eef5-76f6-43d9-946f-d0e6091ed0da" containerID="e59c05f41e9d77be845876022c276f7dfc1eaf21f671de7625154cca6e9983d4" exitCode=0
Oct 08 08:18:14 crc kubenswrapper[4958]: I1008 08:18:14.671628 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-7w6sz" event={"ID":"efb7eef5-76f6-43d9-946f-d0e6091ed0da","Type":"ContainerDied","Data":"e59c05f41e9d77be845876022c276f7dfc1eaf21f671de7625154cca6e9983d4"}
Oct 08 08:18:15 crc kubenswrapper[4958]: I1008 08:18:15.693931 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-678599687f-7w6sz" event={"ID":"efb7eef5-76f6-43d9-946f-d0e6091ed0da","Type":"ContainerStarted","Data":"d3f82c7606af8ecc24072bcf29142b9a529ef08f528307560acdec176a523b63"}
Oct 08 08:18:15 crc kubenswrapper[4958]: I1008 08:18:15.723061 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-678599687f-7w6sz" podStartSLOduration=4.294146211 podStartE2EDuration="4.723031873s" podCreationTimestamp="2025-10-08 08:18:11 +0000 UTC" firstStartedPulling="2025-10-08 08:18:12.197596962 +0000 UTC m=+6235.327289563" lastFinishedPulling="2025-10-08 08:18:12.626482624 +0000 UTC m=+6235.756175225" observedRunningTime="2025-10-08 08:18:15.719093906 +0000 UTC m=+6238.848786567" watchObservedRunningTime="2025-10-08 08:18:15.723031873 +0000 UTC m=+6238.852724494"
Oct 08 08:18:16 crc kubenswrapper[4958]: I1008 08:18:16.577204 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f"
Oct 08 08:18:16 crc kubenswrapper[4958]: E1008 08:18:16.577993 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.317652 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-jdcv4"]
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.323514 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.326788 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.327407 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.327518 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.342004 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-jdcv4"]
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.427610 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9500a2f-96a7-47bb-a498-d0b695ae541f-config-data\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.427682 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9500a2f-96a7-47bb-a498-d0b695ae541f-scripts\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.427731 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e9500a2f-96a7-47bb-a498-d0b695ae541f-amphora-certs\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.427780 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e9500a2f-96a7-47bb-a498-d0b695ae541f-hm-ports\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.428041 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e9500a2f-96a7-47bb-a498-d0b695ae541f-config-data-merged\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.428141 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9500a2f-96a7-47bb-a498-d0b695ae541f-combined-ca-bundle\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.530636 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9500a2f-96a7-47bb-a498-d0b695ae541f-scripts\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.530733 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e9500a2f-96a7-47bb-a498-d0b695ae541f-amphora-certs\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.530816 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e9500a2f-96a7-47bb-a498-d0b695ae541f-hm-ports\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.530914 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e9500a2f-96a7-47bb-a498-d0b695ae541f-config-data-merged\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.531005 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9500a2f-96a7-47bb-a498-d0b695ae541f-combined-ca-bundle\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.531150 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9500a2f-96a7-47bb-a498-d0b695ae541f-config-data\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.531839 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e9500a2f-96a7-47bb-a498-d0b695ae541f-config-data-merged\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.532770 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e9500a2f-96a7-47bb-a498-d0b695ae541f-hm-ports\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.536895 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9500a2f-96a7-47bb-a498-d0b695ae541f-scripts\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.537589 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e9500a2f-96a7-47bb-a498-d0b695ae541f-amphora-certs\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.538032 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9500a2f-96a7-47bb-a498-d0b695ae541f-combined-ca-bundle\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.538395 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9500a2f-96a7-47bb-a498-d0b695ae541f-config-data\") pod \"octavia-healthmanager-jdcv4\" (UID: \"e9500a2f-96a7-47bb-a498-d0b695ae541f\") " pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:19 crc kubenswrapper[4958]: I1008 08:18:19.643593 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-jdcv4"
Oct 08 08:18:20 crc kubenswrapper[4958]: I1008 08:18:20.332667 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-jdcv4"]
Oct 08 08:18:20 crc kubenswrapper[4958]: W1008 08:18:20.338977 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9500a2f_96a7_47bb_a498_d0b695ae541f.slice/crio-3ff1461d54e4103ada92021f0b3a5d9ebccd752f2cdac6849308a139b99b72dd WatchSource:0}: Error finding container 3ff1461d54e4103ada92021f0b3a5d9ebccd752f2cdac6849308a139b99b72dd: Status 404 returned error can't find the container with id 3ff1461d54e4103ada92021f0b3a5d9ebccd752f2cdac6849308a139b99b72dd
Oct 08 08:18:20 crc kubenswrapper[4958]: I1008 08:18:20.772015 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-jdcv4" event={"ID":"e9500a2f-96a7-47bb-a498-d0b695ae541f","Type":"ContainerStarted","Data":"3ff1461d54e4103ada92021f0b3a5d9ebccd752f2cdac6849308a139b99b72dd"}
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.031170 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-180c-account-create-tpz8x"]
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.047206 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-180c-account-create-tpz8x"]
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.536396 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-qcctv"]
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.538260 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.540176 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.540659 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.550622 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-qcctv"]
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.585621 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e8bf5d-380f-471d-8a08-85b41730a075" path="/var/lib/kubelet/pods/f8e8bf5d-380f-471d-8a08-85b41730a075/volumes"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.669930 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a4902a5-b947-44c9-a2f0-c97c277e6899-config-data\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.670173 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2a4902a5-b947-44c9-a2f0-c97c277e6899-config-data-merged\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.670211 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4902a5-b947-44c9-a2f0-c97c277e6899-combined-ca-bundle\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.670256 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2a4902a5-b947-44c9-a2f0-c97c277e6899-hm-ports\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.670518 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a4902a5-b947-44c9-a2f0-c97c277e6899-scripts\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.670547 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2a4902a5-b947-44c9-a2f0-c97c277e6899-amphora-certs\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.772646 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a4902a5-b947-44c9-a2f0-c97c277e6899-config-data\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.772787 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2a4902a5-b947-44c9-a2f0-c97c277e6899-config-data-merged\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.772804 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4902a5-b947-44c9-a2f0-c97c277e6899-combined-ca-bundle\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.772830 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2a4902a5-b947-44c9-a2f0-c97c277e6899-hm-ports\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.772873 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a4902a5-b947-44c9-a2f0-c97c277e6899-scripts\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.772892 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2a4902a5-b947-44c9-a2f0-c97c277e6899-amphora-certs\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.775056 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2a4902a5-b947-44c9-a2f0-c97c277e6899-hm-ports\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.780785 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a4902a5-b947-44c9-a2f0-c97c277e6899-config-data\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.781177 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2a4902a5-b947-44c9-a2f0-c97c277e6899-config-data-merged\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.781979 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2a4902a5-b947-44c9-a2f0-c97c277e6899-amphora-certs\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.784495 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a4902a5-b947-44c9-a2f0-c97c277e6899-scripts\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.792835 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-jdcv4" event={"ID":"e9500a2f-96a7-47bb-a498-d0b695ae541f","Type":"ContainerStarted","Data":"4e7c0d1158b803775bfb2f0b313a3e6ec1ee229e7107f1f66b6044c1fea830e9"}
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.795973 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4902a5-b947-44c9-a2f0-c97c277e6899-combined-ca-bundle\") pod \"octavia-housekeeping-qcctv\" (UID: \"2a4902a5-b947-44c9-a2f0-c97c277e6899\") " pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:21 crc kubenswrapper[4958]: I1008 08:18:21.857167 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-qcctv"
Oct 08 08:18:22 crc kubenswrapper[4958]: I1008 08:18:22.447223 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-qcctv"]
Oct 08 08:18:22 crc kubenswrapper[4958]: W1008 08:18:22.450133 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a4902a5_b947_44c9_a2f0_c97c277e6899.slice/crio-58155ae3679a0afbb3e45a3fa26e05f7179cf9c1840128f027e80f3b5131e9f9 WatchSource:0}: Error finding container 58155ae3679a0afbb3e45a3fa26e05f7179cf9c1840128f027e80f3b5131e9f9: Status 404 returned error can't find the container with id 58155ae3679a0afbb3e45a3fa26e05f7179cf9c1840128f027e80f3b5131e9f9
Oct 08 08:18:22 crc kubenswrapper[4958]: I1008 08:18:22.802331 4958 generic.go:334] "Generic (PLEG): container finished" podID="e9500a2f-96a7-47bb-a498-d0b695ae541f" containerID="4e7c0d1158b803775bfb2f0b313a3e6ec1ee229e7107f1f66b6044c1fea830e9" exitCode=0
Oct 08 08:18:22 crc kubenswrapper[4958]: I1008 08:18:22.802394 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-jdcv4" event={"ID":"e9500a2f-96a7-47bb-a498-d0b695ae541f","Type":"ContainerDied","Data":"4e7c0d1158b803775bfb2f0b313a3e6ec1ee229e7107f1f66b6044c1fea830e9"}
Oct 08 08:18:22 crc kubenswrapper[4958]: I1008 08:18:22.803802 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-qcctv" event={"ID":"2a4902a5-b947-44c9-a2f0-c97c277e6899","Type":"ContainerStarted","Data":"58155ae3679a0afbb3e45a3fa26e05f7179cf9c1840128f027e80f3b5131e9f9"}
Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.107752 4958 kubelet.go:2421] "SyncLoop ADD"
source="api" pods=["openstack/octavia-worker-dsclk"] Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.109715 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.114044 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.119258 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.119725 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-dsclk"] Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.201202 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe515cb3-71bb-4ce8-affe-db0501826bce-config-data\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.201298 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe515cb3-71bb-4ce8-affe-db0501826bce-scripts\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.201370 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe515cb3-71bb-4ce8-affe-db0501826bce-combined-ca-bundle\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.201591 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fe515cb3-71bb-4ce8-affe-db0501826bce-config-data-merged\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.201692 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/fe515cb3-71bb-4ce8-affe-db0501826bce-hm-ports\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.201768 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/fe515cb3-71bb-4ce8-affe-db0501826bce-amphora-certs\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.311667 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fe515cb3-71bb-4ce8-affe-db0501826bce-config-data-merged\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.311727 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/fe515cb3-71bb-4ce8-affe-db0501826bce-hm-ports\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.311757 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: 
\"kubernetes.io/secret/fe515cb3-71bb-4ce8-affe-db0501826bce-amphora-certs\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.311878 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe515cb3-71bb-4ce8-affe-db0501826bce-config-data\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.311937 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe515cb3-71bb-4ce8-affe-db0501826bce-scripts\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.312024 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe515cb3-71bb-4ce8-affe-db0501826bce-combined-ca-bundle\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.312236 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fe515cb3-71bb-4ce8-affe-db0501826bce-config-data-merged\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.313047 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/fe515cb3-71bb-4ce8-affe-db0501826bce-hm-ports\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " 
pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.318175 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe515cb3-71bb-4ce8-affe-db0501826bce-combined-ca-bundle\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.318319 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe515cb3-71bb-4ce8-affe-db0501826bce-config-data\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.318480 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe515cb3-71bb-4ce8-affe-db0501826bce-scripts\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.318539 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/fe515cb3-71bb-4ce8-affe-db0501826bce-amphora-certs\") pod \"octavia-worker-dsclk\" (UID: \"fe515cb3-71bb-4ce8-affe-db0501826bce\") " pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.439584 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-dsclk" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.819529 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-jdcv4" event={"ID":"e9500a2f-96a7-47bb-a498-d0b695ae541f","Type":"ContainerStarted","Data":"3ddc0382133e7462ee084fbce66423c0012d3e0c270874de75220a2cb1ff3409"} Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.819893 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-jdcv4" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.844649 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-jdcv4" podStartSLOduration=4.84463108 podStartE2EDuration="4.84463108s" podCreationTimestamp="2025-10-08 08:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:18:23.83505396 +0000 UTC m=+6246.964746581" watchObservedRunningTime="2025-10-08 08:18:23.84463108 +0000 UTC m=+6246.974323681" Oct 08 08:18:23 crc kubenswrapper[4958]: I1008 08:18:23.989405 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-dsclk"] Oct 08 08:18:24 crc kubenswrapper[4958]: I1008 08:18:24.837389 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-dsclk" event={"ID":"fe515cb3-71bb-4ce8-affe-db0501826bce","Type":"ContainerStarted","Data":"4f49cab5a17ae0f3c25521490f71d1db0f2a74ffcbd48ce0ab70fce4e79cda28"} Oct 08 08:18:25 crc kubenswrapper[4958]: I1008 08:18:25.854859 4958 generic.go:334] "Generic (PLEG): container finished" podID="2a4902a5-b947-44c9-a2f0-c97c277e6899" containerID="0fa49b02df6f35cb9f235ab7b8831de02a607c70e0378df709469352152c0c5a" exitCode=0 Oct 08 08:18:25 crc kubenswrapper[4958]: I1008 08:18:25.854972 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-qcctv" 
event={"ID":"2a4902a5-b947-44c9-a2f0-c97c277e6899","Type":"ContainerDied","Data":"0fa49b02df6f35cb9f235ab7b8831de02a607c70e0378df709469352152c0c5a"} Oct 08 08:18:26 crc kubenswrapper[4958]: I1008 08:18:26.865452 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-dsclk" event={"ID":"fe515cb3-71bb-4ce8-affe-db0501826bce","Type":"ContainerStarted","Data":"40b66d3290008b63d8cf787f2592df42af914e65d949e6d3dafb73357dfce882"} Oct 08 08:18:26 crc kubenswrapper[4958]: I1008 08:18:26.868275 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-qcctv" event={"ID":"2a4902a5-b947-44c9-a2f0-c97c277e6899","Type":"ContainerStarted","Data":"a7b63584c5e09be75c22cf6d6ef82f3101e89ca56a0e98acb6137a753d06bd54"} Oct 08 08:18:26 crc kubenswrapper[4958]: I1008 08:18:26.868406 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-qcctv" Oct 08 08:18:26 crc kubenswrapper[4958]: I1008 08:18:26.906827 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-qcctv" podStartSLOduration=4.183016546 podStartE2EDuration="5.906808567s" podCreationTimestamp="2025-10-08 08:18:21 +0000 UTC" firstStartedPulling="2025-10-08 08:18:22.455891768 +0000 UTC m=+6245.585584369" lastFinishedPulling="2025-10-08 08:18:24.179683789 +0000 UTC m=+6247.309376390" observedRunningTime="2025-10-08 08:18:26.903879538 +0000 UTC m=+6250.033572139" watchObservedRunningTime="2025-10-08 08:18:26.906808567 +0000 UTC m=+6250.036501188" Oct 08 08:18:27 crc kubenswrapper[4958]: I1008 08:18:27.586239 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:18:27 crc kubenswrapper[4958]: E1008 08:18:27.587103 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:18:27 crc kubenswrapper[4958]: I1008 08:18:27.878937 4958 generic.go:334] "Generic (PLEG): container finished" podID="fe515cb3-71bb-4ce8-affe-db0501826bce" containerID="40b66d3290008b63d8cf787f2592df42af914e65d949e6d3dafb73357dfce882" exitCode=0 Oct 08 08:18:27 crc kubenswrapper[4958]: I1008 08:18:27.880794 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-dsclk" event={"ID":"fe515cb3-71bb-4ce8-affe-db0501826bce","Type":"ContainerDied","Data":"40b66d3290008b63d8cf787f2592df42af914e65d949e6d3dafb73357dfce882"} Oct 08 08:18:28 crc kubenswrapper[4958]: I1008 08:18:28.899023 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-dsclk" event={"ID":"fe515cb3-71bb-4ce8-affe-db0501826bce","Type":"ContainerStarted","Data":"edc2e6ca10a3916b58bea6671d9def8511745b1ef4754a7d512508f7c3541304"} Oct 08 08:18:28 crc kubenswrapper[4958]: I1008 08:18:28.900027 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-dsclk" Oct 08 08:18:28 crc kubenswrapper[4958]: I1008 08:18:28.929815 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-dsclk" podStartSLOduration=4.463424471 podStartE2EDuration="5.929794325s" podCreationTimestamp="2025-10-08 08:18:23 +0000 UTC" firstStartedPulling="2025-10-08 08:18:24.141519735 +0000 UTC m=+6247.271212336" lastFinishedPulling="2025-10-08 08:18:25.607889589 +0000 UTC m=+6248.737582190" observedRunningTime="2025-10-08 08:18:28.926752003 +0000 UTC m=+6252.056444644" watchObservedRunningTime="2025-10-08 08:18:28.929794325 +0000 UTC m=+6252.059486936" Oct 08 08:18:29 crc kubenswrapper[4958]: I1008 08:18:29.034085 4958 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-57cqr"] Oct 08 08:18:29 crc kubenswrapper[4958]: I1008 08:18:29.045714 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-57cqr"] Oct 08 08:18:29 crc kubenswrapper[4958]: I1008 08:18:29.608505 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ed71f6e-5973-44b5-ab2b-34be01d43eef" path="/var/lib/kubelet/pods/3ed71f6e-5973-44b5-ab2b-34be01d43eef/volumes" Oct 08 08:18:34 crc kubenswrapper[4958]: I1008 08:18:34.687668 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-jdcv4" Oct 08 08:18:36 crc kubenswrapper[4958]: I1008 08:18:36.885844 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-qcctv" Oct 08 08:18:38 crc kubenswrapper[4958]: I1008 08:18:38.476495 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-dsclk" Oct 08 08:18:40 crc kubenswrapper[4958]: I1008 08:18:40.614640 4958 scope.go:117] "RemoveContainer" containerID="36c0053045ab46b874c5d29275df331798910d25638fe62a43a11775961e571a" Oct 08 08:18:40 crc kubenswrapper[4958]: I1008 08:18:40.658401 4958 scope.go:117] "RemoveContainer" containerID="e9294a77a0841f8fb76d29ba84f0382c66a834df8da5b48c39e6c842a9a2eba8" Oct 08 08:18:40 crc kubenswrapper[4958]: I1008 08:18:40.718932 4958 scope.go:117] "RemoveContainer" containerID="cdadd2ccafa80ae4a5c4e3e827f7319432b72cc9b9eaa25a4eaa51377a6b14c1" Oct 08 08:18:40 crc kubenswrapper[4958]: I1008 08:18:40.775450 4958 scope.go:117] "RemoveContainer" containerID="39e7e9af11fbb8bf5c57151bd4afd0b942fb3d2b3a0282fea792099932cdfe7b" Oct 08 08:18:42 crc kubenswrapper[4958]: I1008 08:18:42.576829 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:18:42 crc kubenswrapper[4958]: E1008 08:18:42.577797 4958 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:18:54 crc kubenswrapper[4958]: I1008 08:18:54.576946 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:18:54 crc kubenswrapper[4958]: E1008 08:18:54.578233 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:19:07 crc kubenswrapper[4958]: I1008 08:19:07.593548 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:19:07 crc kubenswrapper[4958]: E1008 08:19:07.596878 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:19:22 crc kubenswrapper[4958]: I1008 08:19:22.577935 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:19:22 crc kubenswrapper[4958]: E1008 08:19:22.578683 4958 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:19:35 crc kubenswrapper[4958]: I1008 08:19:35.576498 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:19:35 crc kubenswrapper[4958]: E1008 08:19:35.577229 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.782819 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-86ddbdcb65-zgzvw"] Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.786531 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.789061 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.789639 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.789749 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.789885 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-77v2k" Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.797631 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86ddbdcb65-zgzvw"] Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.837038 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.837331 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" containerName="glance-log" containerID="cri-o://0e425be0e64a7767b1ca0ca3d1900f9d505c2ddb5672349d3a197e86b77ebc0c" gracePeriod=30 Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.837423 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" containerName="glance-httpd" containerID="cri-o://42ef3a8f9f3dc9d5cef6d0004bde410102251ede81062a659db088a7b73ea401" gracePeriod=30 Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.894265 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c89555865-kchbz"] Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.900559 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-config-data\") pod \"horizon-86ddbdcb65-zgzvw\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.900699 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-scripts\") pod \"horizon-86ddbdcb65-zgzvw\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.900820 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-logs\") pod \"horizon-86ddbdcb65-zgzvw\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.900938 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kmlm\" (UniqueName: \"kubernetes.io/projected/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-kube-api-access-5kmlm\") pod \"horizon-86ddbdcb65-zgzvw\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.901025 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-horizon-secret-key\") pod \"horizon-86ddbdcb65-zgzvw\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.902095 4958 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.923255 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c89555865-kchbz"] Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.943522 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.943815 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b43f2f2a-3d0f-47c5-ae82-f41c92f92940" containerName="glance-log" containerID="cri-o://37e535dfc52bf6b506a278b1310e2acabb4d573da202348a5b666a21c4b1e3fb" gracePeriod=30 Oct 08 08:19:37 crc kubenswrapper[4958]: I1008 08:19:37.944018 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b43f2f2a-3d0f-47c5-ae82-f41c92f92940" containerName="glance-httpd" containerID="cri-o://2a2fa833c7b91b59cdec46ffb28f02c165e7e45d3df20547f34a00275ab8885c" gracePeriod=30 Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.004506 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90684036-aca5-4f05-b226-39182028d509-config-data\") pod \"horizon-5c89555865-kchbz\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.004584 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-logs\") pod \"horizon-86ddbdcb65-zgzvw\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.004660 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5kmlm\" (UniqueName: \"kubernetes.io/projected/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-kube-api-access-5kmlm\") pod \"horizon-86ddbdcb65-zgzvw\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.004690 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90684036-aca5-4f05-b226-39182028d509-logs\") pod \"horizon-5c89555865-kchbz\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.005060 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-horizon-secret-key\") pod \"horizon-86ddbdcb65-zgzvw\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.006063 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90684036-aca5-4f05-b226-39182028d509-horizon-secret-key\") pod \"horizon-5c89555865-kchbz\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.005118 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-logs\") pod \"horizon-86ddbdcb65-zgzvw\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.006151 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/90684036-aca5-4f05-b226-39182028d509-scripts\") pod \"horizon-5c89555865-kchbz\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.006229 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b7fl\" (UniqueName: \"kubernetes.io/projected/90684036-aca5-4f05-b226-39182028d509-kube-api-access-6b7fl\") pod \"horizon-5c89555865-kchbz\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.006313 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-config-data\") pod \"horizon-86ddbdcb65-zgzvw\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.006392 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-scripts\") pod \"horizon-86ddbdcb65-zgzvw\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.011341 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-horizon-secret-key\") pod \"horizon-86ddbdcb65-zgzvw\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.012257 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-scripts\") pod 
\"horizon-86ddbdcb65-zgzvw\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.012693 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-config-data\") pod \"horizon-86ddbdcb65-zgzvw\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.021249 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kmlm\" (UniqueName: \"kubernetes.io/projected/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-kube-api-access-5kmlm\") pod \"horizon-86ddbdcb65-zgzvw\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.106001 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.109465 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90684036-aca5-4f05-b226-39182028d509-config-data\") pod \"horizon-5c89555865-kchbz\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.109533 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90684036-aca5-4f05-b226-39182028d509-config-data\") pod \"horizon-5c89555865-kchbz\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.109634 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/90684036-aca5-4f05-b226-39182028d509-logs\") pod \"horizon-5c89555865-kchbz\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.109991 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90684036-aca5-4f05-b226-39182028d509-logs\") pod \"horizon-5c89555865-kchbz\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.110076 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90684036-aca5-4f05-b226-39182028d509-horizon-secret-key\") pod \"horizon-5c89555865-kchbz\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.110162 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90684036-aca5-4f05-b226-39182028d509-scripts\") pod \"horizon-5c89555865-kchbz\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.110222 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b7fl\" (UniqueName: \"kubernetes.io/projected/90684036-aca5-4f05-b226-39182028d509-kube-api-access-6b7fl\") pod \"horizon-5c89555865-kchbz\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.110788 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90684036-aca5-4f05-b226-39182028d509-scripts\") pod \"horizon-5c89555865-kchbz\" (UID: 
\"90684036-aca5-4f05-b226-39182028d509\") " pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.115452 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90684036-aca5-4f05-b226-39182028d509-horizon-secret-key\") pod \"horizon-5c89555865-kchbz\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.161215 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b7fl\" (UniqueName: \"kubernetes.io/projected/90684036-aca5-4f05-b226-39182028d509-kube-api-access-6b7fl\") pod \"horizon-5c89555865-kchbz\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.227504 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.717309 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86ddbdcb65-zgzvw"] Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.754731 4958 generic.go:334] "Generic (PLEG): container finished" podID="4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" containerID="0e425be0e64a7767b1ca0ca3d1900f9d505c2ddb5672349d3a197e86b77ebc0c" exitCode=143 Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.754829 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a","Type":"ContainerDied","Data":"0e425be0e64a7767b1ca0ca3d1900f9d505c2ddb5672349d3a197e86b77ebc0c"} Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.755939 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86ddbdcb65-zgzvw" 
event={"ID":"1ea96372-3600-4d56-8c12-3cb7aeca1fe4","Type":"ContainerStarted","Data":"95273b18713118d638330c0f0fbe39cfd475e14813907cb903e0b1a82bc9eb97"} Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.758162 4958 generic.go:334] "Generic (PLEG): container finished" podID="b43f2f2a-3d0f-47c5-ae82-f41c92f92940" containerID="37e535dfc52bf6b506a278b1310e2acabb4d573da202348a5b666a21c4b1e3fb" exitCode=143 Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.758215 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b43f2f2a-3d0f-47c5-ae82-f41c92f92940","Type":"ContainerDied","Data":"37e535dfc52bf6b506a278b1310e2acabb4d573da202348a5b666a21c4b1e3fb"} Oct 08 08:19:38 crc kubenswrapper[4958]: I1008 08:19:38.869898 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c89555865-kchbz"] Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.702360 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c89555865-kchbz"] Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.727471 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-676fb56878-mj2b7"] Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.728985 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.731573 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.769835 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-676fb56878-mj2b7"] Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.800115 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c89555865-kchbz" event={"ID":"90684036-aca5-4f05-b226-39182028d509","Type":"ContainerStarted","Data":"eecf1372d605c34b02a10bee2248d37b86007f7593ca5c8ed015457b375cd914"} Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.837153 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86ddbdcb65-zgzvw"] Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.858851 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-horizon-secret-key\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.859218 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a461c706-cb38-4f48-bd86-1ebcbda38d60-scripts\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.859258 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-horizon-tls-certs\") pod \"horizon-676fb56878-mj2b7\" (UID: 
\"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.859313 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a461c706-cb38-4f48-bd86-1ebcbda38d60-config-data\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.859354 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a461c706-cb38-4f48-bd86-1ebcbda38d60-logs\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.859481 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-combined-ca-bundle\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.859534 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7fhr\" (UniqueName: \"kubernetes.io/projected/a461c706-cb38-4f48-bd86-1ebcbda38d60-kube-api-access-v7fhr\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.867050 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b6f87784b-7wwr6"] Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.875660 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.894984 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b6f87784b-7wwr6"] Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.962254 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7w9j\" (UniqueName: \"kubernetes.io/projected/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-kube-api-access-v7w9j\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.962358 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-logs\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.962388 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-scripts\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.962421 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a461c706-cb38-4f48-bd86-1ebcbda38d60-scripts\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.962441 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-horizon-tls-certs\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.962466 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a461c706-cb38-4f48-bd86-1ebcbda38d60-config-data\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.962489 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-config-data\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.962508 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a461c706-cb38-4f48-bd86-1ebcbda38d60-logs\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.962533 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-combined-ca-bundle\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.962555 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7fhr\" (UniqueName: \"kubernetes.io/projected/a461c706-cb38-4f48-bd86-1ebcbda38d60-kube-api-access-v7fhr\") pod 
\"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.962587 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-horizon-secret-key\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.962609 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-combined-ca-bundle\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.962632 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-horizon-secret-key\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.962654 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-horizon-tls-certs\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.963200 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a461c706-cb38-4f48-bd86-1ebcbda38d60-scripts\") pod \"horizon-676fb56878-mj2b7\" (UID: 
\"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.963668 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a461c706-cb38-4f48-bd86-1ebcbda38d60-logs\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.964066 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a461c706-cb38-4f48-bd86-1ebcbda38d60-config-data\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.968989 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-combined-ca-bundle\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.970230 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-horizon-tls-certs\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.972921 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-horizon-secret-key\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:39 crc kubenswrapper[4958]: I1008 08:19:39.977590 
4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7fhr\" (UniqueName: \"kubernetes.io/projected/a461c706-cb38-4f48-bd86-1ebcbda38d60-kube-api-access-v7fhr\") pod \"horizon-676fb56878-mj2b7\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.064512 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-horizon-secret-key\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.064581 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-combined-ca-bundle\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.064613 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-horizon-tls-certs\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.064652 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7w9j\" (UniqueName: \"kubernetes.io/projected/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-kube-api-access-v7w9j\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.064756 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-logs\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.064841 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-scripts\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.064923 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-config-data\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.066168 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-logs\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.066755 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-scripts\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.067247 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-config-data\") pod \"horizon-7b6f87784b-7wwr6\" (UID: 
\"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.069456 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.070846 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-horizon-tls-certs\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.076389 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-combined-ca-bundle\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.077553 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-horizon-secret-key\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.082290 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7w9j\" (UniqueName: \"kubernetes.io/projected/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-kube-api-access-v7w9j\") pod \"horizon-7b6f87784b-7wwr6\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.247783 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.538372 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-676fb56878-mj2b7"] Oct 08 08:19:40 crc kubenswrapper[4958]: W1008 08:19:40.544290 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda461c706_cb38_4f48_bd86_1ebcbda38d60.slice/crio-581fb2a9a9a2395d5cc7ef2f4aff5d45f3fce47d09949854e0865bc012c3c7b5 WatchSource:0}: Error finding container 581fb2a9a9a2395d5cc7ef2f4aff5d45f3fce47d09949854e0865bc012c3c7b5: Status 404 returned error can't find the container with id 581fb2a9a9a2395d5cc7ef2f4aff5d45f3fce47d09949854e0865bc012c3c7b5 Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.715009 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b6f87784b-7wwr6"] Oct 08 08:19:40 crc kubenswrapper[4958]: W1008 08:19:40.715359 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13a81014_1f64_436a_b3e0_1f8f79e3a7a0.slice/crio-2233b86fbc6dbc4970d9cb0cf4258e71812e930d6b3b18a3ab12b3c7736d28f3 WatchSource:0}: Error finding container 2233b86fbc6dbc4970d9cb0cf4258e71812e930d6b3b18a3ab12b3c7736d28f3: Status 404 returned error can't find the container with id 2233b86fbc6dbc4970d9cb0cf4258e71812e930d6b3b18a3ab12b3c7736d28f3 Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.814258 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676fb56878-mj2b7" event={"ID":"a461c706-cb38-4f48-bd86-1ebcbda38d60","Type":"ContainerStarted","Data":"581fb2a9a9a2395d5cc7ef2f4aff5d45f3fce47d09949854e0865bc012c3c7b5"} Oct 08 08:19:40 crc kubenswrapper[4958]: I1008 08:19:40.816937 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b6f87784b-7wwr6" 
event={"ID":"13a81014-1f64-436a-b3e0-1f8f79e3a7a0","Type":"ContainerStarted","Data":"2233b86fbc6dbc4970d9cb0cf4258e71812e930d6b3b18a3ab12b3c7736d28f3"} Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.173855 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.1.66:9292/healthcheck\": read tcp 10.217.0.2:33832->10.217.1.66:9292: read: connection reset by peer" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.174282 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.1.66:9292/healthcheck\": read tcp 10.217.0.2:33816->10.217.1.66:9292: read: connection reset by peer" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.220094 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="b43f2f2a-3d0f-47c5-ae82-f41c92f92940" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.1.65:9292/healthcheck\": read tcp 10.217.0.2:45766->10.217.1.65:9292: read: connection reset by peer" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.220114 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="b43f2f2a-3d0f-47c5-ae82-f41c92f92940" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.1.65:9292/healthcheck\": read tcp 10.217.0.2:45768->10.217.1.65:9292: read: connection reset by peer" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.801573 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.847605 4958 generic.go:334] "Generic (PLEG): container finished" podID="b43f2f2a-3d0f-47c5-ae82-f41c92f92940" containerID="2a2fa833c7b91b59cdec46ffb28f02c165e7e45d3df20547f34a00275ab8885c" exitCode=0 Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.847674 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b43f2f2a-3d0f-47c5-ae82-f41c92f92940","Type":"ContainerDied","Data":"2a2fa833c7b91b59cdec46ffb28f02c165e7e45d3df20547f34a00275ab8885c"} Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.853589 4958 generic.go:334] "Generic (PLEG): container finished" podID="4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" containerID="42ef3a8f9f3dc9d5cef6d0004bde410102251ede81062a659db088a7b73ea401" exitCode=0 Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.853631 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a","Type":"ContainerDied","Data":"42ef3a8f9f3dc9d5cef6d0004bde410102251ede81062a659db088a7b73ea401"} Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.853658 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a","Type":"ContainerDied","Data":"361a63ddc5281589d7cda2ad8a00cc936b1059be4d3c09bb730eb4fbbf0e2c9b"} Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.853658 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.853685 4958 scope.go:117] "RemoveContainer" containerID="42ef3a8f9f3dc9d5cef6d0004bde410102251ede81062a659db088a7b73ea401" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.908186 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-logs\") pod \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.908362 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m22p6\" (UniqueName: \"kubernetes.io/projected/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-kube-api-access-m22p6\") pod \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.908408 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-internal-tls-certs\") pod \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.908478 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-httpd-run\") pod \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.908522 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-scripts\") pod \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " Oct 08 
08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.908615 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-config-data\") pod \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.908656 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-combined-ca-bundle\") pod \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\" (UID: \"4388f0e1-bbd7-4dbf-bf83-9fa4d209702a\") " Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.908774 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-logs" (OuterVolumeSpecName: "logs") pod "4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" (UID: "4388f0e1-bbd7-4dbf-bf83-9fa4d209702a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.909123 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" (UID: "4388f0e1-bbd7-4dbf-bf83-9fa4d209702a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.909802 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.909829 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.915218 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-scripts" (OuterVolumeSpecName: "scripts") pod "4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" (UID: "4388f0e1-bbd7-4dbf-bf83-9fa4d209702a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.916175 4958 scope.go:117] "RemoveContainer" containerID="0e425be0e64a7767b1ca0ca3d1900f9d505c2ddb5672349d3a197e86b77ebc0c" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.917176 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-kube-api-access-m22p6" (OuterVolumeSpecName: "kube-api-access-m22p6") pod "4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" (UID: "4388f0e1-bbd7-4dbf-bf83-9fa4d209702a"). InnerVolumeSpecName "kube-api-access-m22p6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.942185 4958 scope.go:117] "RemoveContainer" containerID="42ef3a8f9f3dc9d5cef6d0004bde410102251ede81062a659db088a7b73ea401" Oct 08 08:19:41 crc kubenswrapper[4958]: E1008 08:19:41.943374 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42ef3a8f9f3dc9d5cef6d0004bde410102251ede81062a659db088a7b73ea401\": container with ID starting with 42ef3a8f9f3dc9d5cef6d0004bde410102251ede81062a659db088a7b73ea401 not found: ID does not exist" containerID="42ef3a8f9f3dc9d5cef6d0004bde410102251ede81062a659db088a7b73ea401" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.943408 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42ef3a8f9f3dc9d5cef6d0004bde410102251ede81062a659db088a7b73ea401"} err="failed to get container status \"42ef3a8f9f3dc9d5cef6d0004bde410102251ede81062a659db088a7b73ea401\": rpc error: code = NotFound desc = could not find container \"42ef3a8f9f3dc9d5cef6d0004bde410102251ede81062a659db088a7b73ea401\": container with ID starting with 42ef3a8f9f3dc9d5cef6d0004bde410102251ede81062a659db088a7b73ea401 not found: ID does not exist" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.943431 4958 scope.go:117] "RemoveContainer" containerID="0e425be0e64a7767b1ca0ca3d1900f9d505c2ddb5672349d3a197e86b77ebc0c" Oct 08 08:19:41 crc kubenswrapper[4958]: E1008 08:19:41.943700 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e425be0e64a7767b1ca0ca3d1900f9d505c2ddb5672349d3a197e86b77ebc0c\": container with ID starting with 0e425be0e64a7767b1ca0ca3d1900f9d505c2ddb5672349d3a197e86b77ebc0c not found: ID does not exist" containerID="0e425be0e64a7767b1ca0ca3d1900f9d505c2ddb5672349d3a197e86b77ebc0c" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.943730 
4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e425be0e64a7767b1ca0ca3d1900f9d505c2ddb5672349d3a197e86b77ebc0c"} err="failed to get container status \"0e425be0e64a7767b1ca0ca3d1900f9d505c2ddb5672349d3a197e86b77ebc0c\": rpc error: code = NotFound desc = could not find container \"0e425be0e64a7767b1ca0ca3d1900f9d505c2ddb5672349d3a197e86b77ebc0c\": container with ID starting with 0e425be0e64a7767b1ca0ca3d1900f9d505c2ddb5672349d3a197e86b77ebc0c not found: ID does not exist" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.951038 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" (UID: "4388f0e1-bbd7-4dbf-bf83-9fa4d209702a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.972206 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" (UID: "4388f0e1-bbd7-4dbf-bf83-9fa4d209702a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:19:41 crc kubenswrapper[4958]: I1008 08:19:41.982640 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-config-data" (OuterVolumeSpecName: "config-data") pod "4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" (UID: "4388f0e1-bbd7-4dbf-bf83-9fa4d209702a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.012104 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.012140 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.012149 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.012159 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m22p6\" (UniqueName: \"kubernetes.io/projected/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-kube-api-access-m22p6\") on node \"crc\" DevicePath \"\"" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.012169 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.188066 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.195397 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.214241 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 08:19:42 crc kubenswrapper[4958]: E1008 08:19:42.214670 4958 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" containerName="glance-log" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.214687 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" containerName="glance-log" Oct 08 08:19:42 crc kubenswrapper[4958]: E1008 08:19:42.214736 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" containerName="glance-httpd" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.214745 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" containerName="glance-httpd" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.214986 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" containerName="glance-httpd" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.215011 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" containerName="glance-log" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.216100 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.230313 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.230628 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.250322 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.320074 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qtjw\" (UniqueName: \"kubernetes.io/projected/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-kube-api-access-9qtjw\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.320178 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.320280 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.320299 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-logs\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.320333 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.320381 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.320548 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.335073 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.421506 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-logs\") pod \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.421612 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-combined-ca-bundle\") pod \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.421693 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-public-tls-certs\") pod \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.421734 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-scripts\") pod \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.421805 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86fgd\" (UniqueName: \"kubernetes.io/projected/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-kube-api-access-86fgd\") pod \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.421876 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-config-data\") pod \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.421915 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-httpd-run\") pod \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\" (UID: \"b43f2f2a-3d0f-47c5-ae82-f41c92f92940\") " Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.422113 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-logs" (OuterVolumeSpecName: "logs") pod "b43f2f2a-3d0f-47c5-ae82-f41c92f92940" (UID: "b43f2f2a-3d0f-47c5-ae82-f41c92f92940"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.422197 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.422256 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b43f2f2a-3d0f-47c5-ae82-f41c92f92940" (UID: "b43f2f2a-3d0f-47c5-ae82-f41c92f92940"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.422278 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.422295 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-logs\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.422328 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.422343 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.422385 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc 
kubenswrapper[4958]: I1008 08:19:42.422436 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qtjw\" (UniqueName: \"kubernetes.io/projected/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-kube-api-access-9qtjw\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.422494 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.422507 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.423400 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-logs\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.423456 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.427719 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " 
pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.434855 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.435479 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.438093 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-scripts" (OuterVolumeSpecName: "scripts") pod "b43f2f2a-3d0f-47c5-ae82-f41c92f92940" (UID: "b43f2f2a-3d0f-47c5-ae82-f41c92f92940"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.441093 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qtjw\" (UniqueName: \"kubernetes.io/projected/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-kube-api-access-9qtjw\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.441166 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1\") " pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.453573 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-kube-api-access-86fgd" (OuterVolumeSpecName: "kube-api-access-86fgd") pod "b43f2f2a-3d0f-47c5-ae82-f41c92f92940" (UID: "b43f2f2a-3d0f-47c5-ae82-f41c92f92940"). InnerVolumeSpecName "kube-api-access-86fgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.463309 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b43f2f2a-3d0f-47c5-ae82-f41c92f92940" (UID: "b43f2f2a-3d0f-47c5-ae82-f41c92f92940"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.515367 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b43f2f2a-3d0f-47c5-ae82-f41c92f92940" (UID: "b43f2f2a-3d0f-47c5-ae82-f41c92f92940"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.520326 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-config-data" (OuterVolumeSpecName: "config-data") pod "b43f2f2a-3d0f-47c5-ae82-f41c92f92940" (UID: "b43f2f2a-3d0f-47c5-ae82-f41c92f92940"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.524971 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.525020 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.525030 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.525040 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86fgd\" (UniqueName: \"kubernetes.io/projected/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-kube-api-access-86fgd\") on node \"crc\" DevicePath \"\"" 
Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.525050 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43f2f2a-3d0f-47c5-ae82-f41c92f92940-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.626502 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.871706 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b43f2f2a-3d0f-47c5-ae82-f41c92f92940","Type":"ContainerDied","Data":"49c967394929e2255450b0e84bfcde51c361b79915cf0f9cefbc2b1b75b1be9e"} Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.871757 4958 scope.go:117] "RemoveContainer" containerID="2a2fa833c7b91b59cdec46ffb28f02c165e7e45d3df20547f34a00275ab8885c" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.871805 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.903822 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.936567 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.947056 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 08:19:42 crc kubenswrapper[4958]: E1008 08:19:42.947686 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b43f2f2a-3d0f-47c5-ae82-f41c92f92940" containerName="glance-log" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.947703 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43f2f2a-3d0f-47c5-ae82-f41c92f92940" containerName="glance-log" Oct 08 08:19:42 crc kubenswrapper[4958]: E1008 08:19:42.947741 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b43f2f2a-3d0f-47c5-ae82-f41c92f92940" containerName="glance-httpd" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.947750 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43f2f2a-3d0f-47c5-ae82-f41c92f92940" containerName="glance-httpd" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.948021 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b43f2f2a-3d0f-47c5-ae82-f41c92f92940" containerName="glance-log" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.948053 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b43f2f2a-3d0f-47c5-ae82-f41c92f92940" containerName="glance-httpd" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.949406 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.956892 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.958228 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 08 08:19:42 crc kubenswrapper[4958]: I1008 08:19:42.975940 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.033871 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a60cd39-56e8-4e32-845e-1a931ade509b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.033915 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a60cd39-56e8-4e32-845e-1a931ade509b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.033976 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnw7n\" (UniqueName: \"kubernetes.io/projected/6a60cd39-56e8-4e32-845e-1a931ade509b-kube-api-access-nnw7n\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.034033 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a60cd39-56e8-4e32-845e-1a931ade509b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.034084 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a60cd39-56e8-4e32-845e-1a931ade509b-logs\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.034098 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a60cd39-56e8-4e32-845e-1a931ade509b-scripts\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.034129 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a60cd39-56e8-4e32-845e-1a931ade509b-config-data\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.138593 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a60cd39-56e8-4e32-845e-1a931ade509b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.139297 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6a60cd39-56e8-4e32-845e-1a931ade509b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.139365 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnw7n\" (UniqueName: \"kubernetes.io/projected/6a60cd39-56e8-4e32-845e-1a931ade509b-kube-api-access-nnw7n\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.139477 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a60cd39-56e8-4e32-845e-1a931ade509b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.139872 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a60cd39-56e8-4e32-845e-1a931ade509b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.140012 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a60cd39-56e8-4e32-845e-1a931ade509b-logs\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.140075 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a60cd39-56e8-4e32-845e-1a931ade509b-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.140120 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a60cd39-56e8-4e32-845e-1a931ade509b-config-data\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.140808 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a60cd39-56e8-4e32-845e-1a931ade509b-logs\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.144229 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a60cd39-56e8-4e32-845e-1a931ade509b-scripts\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.144914 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a60cd39-56e8-4e32-845e-1a931ade509b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.145862 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a60cd39-56e8-4e32-845e-1a931ade509b-config-data\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " 
pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.158559 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a60cd39-56e8-4e32-845e-1a931ade509b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.165665 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnw7n\" (UniqueName: \"kubernetes.io/projected/6a60cd39-56e8-4e32-845e-1a931ade509b-kube-api-access-nnw7n\") pod \"glance-default-external-api-0\" (UID: \"6a60cd39-56e8-4e32-845e-1a931ade509b\") " pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.334313 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.588649 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4388f0e1-bbd7-4dbf-bf83-9fa4d209702a" path="/var/lib/kubelet/pods/4388f0e1-bbd7-4dbf-bf83-9fa4d209702a/volumes" Oct 08 08:19:43 crc kubenswrapper[4958]: I1008 08:19:43.590380 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b43f2f2a-3d0f-47c5-ae82-f41c92f92940" path="/var/lib/kubelet/pods/b43f2f2a-3d0f-47c5-ae82-f41c92f92940/volumes" Oct 08 08:19:46 crc kubenswrapper[4958]: I1008 08:19:46.576764 4958 scope.go:117] "RemoveContainer" containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:19:48 crc kubenswrapper[4958]: I1008 08:19:48.130745 4958 scope.go:117] "RemoveContainer" containerID="37e535dfc52bf6b506a278b1310e2acabb4d573da202348a5b666a21c4b1e3fb" Oct 08 08:19:49 crc kubenswrapper[4958]: I1008 08:19:49.028031 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Oct 08 08:19:49 crc kubenswrapper[4958]: I1008 08:19:49.104804 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.003929 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1","Type":"ContainerStarted","Data":"27ce868fea332a2d9038ca413bbb822c01cc717276d39a7fd7945b03124d5095"} Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.004823 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1","Type":"ContainerStarted","Data":"870eb9f5e882113e51804717266b01fbb6c13ae271f6a1fcaf4da68966f9ca29"} Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.010296 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6a60cd39-56e8-4e32-845e-1a931ade509b","Type":"ContainerStarted","Data":"457564c0885f3b87412c2d96fadf5f431e795b12c09cacb4a9941fc90eb4e050"} Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.010360 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6a60cd39-56e8-4e32-845e-1a931ade509b","Type":"ContainerStarted","Data":"19f4960595b00546a85aed98140db9fb16b800491a40403d52754941581e49e3"} Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.013163 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86ddbdcb65-zgzvw" event={"ID":"1ea96372-3600-4d56-8c12-3cb7aeca1fe4","Type":"ContainerStarted","Data":"9e0835122e7537367bdc84405937c706bc9be0ef3e5c3bb4e362d3456b7d2b3c"} Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.013212 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86ddbdcb65-zgzvw" 
event={"ID":"1ea96372-3600-4d56-8c12-3cb7aeca1fe4","Type":"ContainerStarted","Data":"b08eaf50b5db63f5f16b455bbfe85af134aedc1912efa811197c32347267bf99"} Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.013354 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-86ddbdcb65-zgzvw" podUID="1ea96372-3600-4d56-8c12-3cb7aeca1fe4" containerName="horizon-log" containerID="cri-o://b08eaf50b5db63f5f16b455bbfe85af134aedc1912efa811197c32347267bf99" gracePeriod=30 Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.013875 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-86ddbdcb65-zgzvw" podUID="1ea96372-3600-4d56-8c12-3cb7aeca1fe4" containerName="horizon" containerID="cri-o://9e0835122e7537367bdc84405937c706bc9be0ef3e5c3bb4e362d3456b7d2b3c" gracePeriod=30 Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.022480 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676fb56878-mj2b7" event={"ID":"a461c706-cb38-4f48-bd86-1ebcbda38d60","Type":"ContainerStarted","Data":"913670e3283ff1c65c2d593c133964ee959fa992d12f57df209c391786a07d9d"} Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.022530 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676fb56878-mj2b7" event={"ID":"a461c706-cb38-4f48-bd86-1ebcbda38d60","Type":"ContainerStarted","Data":"130cb96bbc6c24007ce44038d93dfcb9ea898002054dba4d777f9898a5588b34"} Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.024392 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c89555865-kchbz" event={"ID":"90684036-aca5-4f05-b226-39182028d509","Type":"ContainerStarted","Data":"32563b6b2b756cec5b4eaec7cfa19db3388f4f2010513dc7174091830b7dc67e"} Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.024438 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c89555865-kchbz" 
event={"ID":"90684036-aca5-4f05-b226-39182028d509","Type":"ContainerStarted","Data":"8abce8e154b61dbcf14d91c8db181eb59361cb6e6b62687e72e094ffbd223186"} Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.024514 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c89555865-kchbz" podUID="90684036-aca5-4f05-b226-39182028d509" containerName="horizon-log" containerID="cri-o://8abce8e154b61dbcf14d91c8db181eb59361cb6e6b62687e72e094ffbd223186" gracePeriod=30 Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.024514 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c89555865-kchbz" podUID="90684036-aca5-4f05-b226-39182028d509" containerName="horizon" containerID="cri-o://32563b6b2b756cec5b4eaec7cfa19db3388f4f2010513dc7174091830b7dc67e" gracePeriod=30 Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.026940 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"871b2ddb043bf24b6fc601ecc25cc84c3454c68175ee8a91d681a6cd3aa935d5"} Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.034480 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b6f87784b-7wwr6" event={"ID":"13a81014-1f64-436a-b3e0-1f8f79e3a7a0","Type":"ContainerStarted","Data":"91150fa63e0db130336ee5a85cd25fa4d00f6b2f64622437b68a811f247d0bbc"} Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.034527 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b6f87784b-7wwr6" event={"ID":"13a81014-1f64-436a-b3e0-1f8f79e3a7a0","Type":"ContainerStarted","Data":"9de6baa2b5ebf8471a4de3d0b5a5dcaf4591cff08c2f4210d28ebb569a32bd36"} Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.057627 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-86ddbdcb65-zgzvw" 
podStartSLOduration=2.992557789 podStartE2EDuration="13.057599218s" podCreationTimestamp="2025-10-08 08:19:37 +0000 UTC" firstStartedPulling="2025-10-08 08:19:38.716328566 +0000 UTC m=+6321.846021167" lastFinishedPulling="2025-10-08 08:19:48.781369975 +0000 UTC m=+6331.911062596" observedRunningTime="2025-10-08 08:19:50.031621434 +0000 UTC m=+6333.161314035" watchObservedRunningTime="2025-10-08 08:19:50.057599218 +0000 UTC m=+6333.187291849" Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.069592 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.069700 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.072040 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-676fb56878-mj2b7" podStartSLOduration=2.837119693 podStartE2EDuration="11.072015109s" podCreationTimestamp="2025-10-08 08:19:39 +0000 UTC" firstStartedPulling="2025-10-08 08:19:40.546488909 +0000 UTC m=+6323.676181500" lastFinishedPulling="2025-10-08 08:19:48.781384305 +0000 UTC m=+6331.911076916" observedRunningTime="2025-10-08 08:19:50.056025895 +0000 UTC m=+6333.185718506" watchObservedRunningTime="2025-10-08 08:19:50.072015109 +0000 UTC m=+6333.201707710" Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.145117 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c89555865-kchbz" podStartSLOduration=3.230136688 podStartE2EDuration="13.145089339s" podCreationTimestamp="2025-10-08 08:19:37 +0000 UTC" firstStartedPulling="2025-10-08 08:19:38.859563638 +0000 UTC m=+6321.989256249" lastFinishedPulling="2025-10-08 08:19:48.774516289 +0000 UTC m=+6331.904208900" observedRunningTime="2025-10-08 08:19:50.103492552 +0000 UTC m=+6333.233185153" watchObservedRunningTime="2025-10-08 
08:19:50.145089339 +0000 UTC m=+6333.274781940" Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.173086 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b6f87784b-7wwr6" podStartSLOduration=3.111563449 podStartE2EDuration="11.173064857s" podCreationTimestamp="2025-10-08 08:19:39 +0000 UTC" firstStartedPulling="2025-10-08 08:19:40.717983106 +0000 UTC m=+6323.847675707" lastFinishedPulling="2025-10-08 08:19:48.779484504 +0000 UTC m=+6331.909177115" observedRunningTime="2025-10-08 08:19:50.124611604 +0000 UTC m=+6333.254304205" watchObservedRunningTime="2025-10-08 08:19:50.173064857 +0000 UTC m=+6333.302757458" Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.250343 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:50 crc kubenswrapper[4958]: I1008 08:19:50.250961 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:19:51 crc kubenswrapper[4958]: I1008 08:19:51.050257 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6a60cd39-56e8-4e32-845e-1a931ade509b","Type":"ContainerStarted","Data":"2e727503b87806ce54c5caf356eadf1300b36c803e2116498eeab08423182138"} Oct 08 08:19:51 crc kubenswrapper[4958]: I1008 08:19:51.053711 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1","Type":"ContainerStarted","Data":"1c09d3506ab7725357e541ccf113e08f7eefb3a6636020c1651b7a7c2745710f"} Oct 08 08:19:51 crc kubenswrapper[4958]: I1008 08:19:51.102224 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.102191963 podStartE2EDuration="9.102191963s" podCreationTimestamp="2025-10-08 08:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:19:51.077858724 +0000 UTC m=+6334.207551315" watchObservedRunningTime="2025-10-08 08:19:51.102191963 +0000 UTC m=+6334.231884564" Oct 08 08:19:51 crc kubenswrapper[4958]: I1008 08:19:51.109444 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.109426349 podStartE2EDuration="9.109426349s" podCreationTimestamp="2025-10-08 08:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:19:51.099026468 +0000 UTC m=+6334.228719069" watchObservedRunningTime="2025-10-08 08:19:51.109426349 +0000 UTC m=+6334.239118950" Oct 08 08:19:52 crc kubenswrapper[4958]: I1008 08:19:52.627836 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 08 08:19:52 crc kubenswrapper[4958]: I1008 08:19:52.629084 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 08 08:19:52 crc kubenswrapper[4958]: I1008 08:19:52.667658 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 08 08:19:52 crc kubenswrapper[4958]: I1008 08:19:52.677382 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 08 08:19:53 crc kubenswrapper[4958]: I1008 08:19:53.071199 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 08 08:19:53 crc kubenswrapper[4958]: I1008 08:19:53.071688 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 08 08:19:53 crc kubenswrapper[4958]: I1008 08:19:53.334817 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 08 08:19:53 crc kubenswrapper[4958]: I1008 08:19:53.334861 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 08 08:19:53 crc kubenswrapper[4958]: I1008 08:19:53.391051 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 08 08:19:53 crc kubenswrapper[4958]: I1008 08:19:53.404711 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 08 08:19:54 crc kubenswrapper[4958]: I1008 08:19:54.086940 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 08 08:19:54 crc kubenswrapper[4958]: I1008 08:19:54.087317 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 08 08:19:56 crc kubenswrapper[4958]: I1008 08:19:56.114800 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 08 08:19:56 crc kubenswrapper[4958]: I1008 08:19:56.143603 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 08 08:19:56 crc kubenswrapper[4958]: I1008 08:19:56.148960 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 08 08:19:56 crc kubenswrapper[4958]: I1008 08:19:56.222033 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 08 08:19:56 crc kubenswrapper[4958]: I1008 08:19:56.232649 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 08 08:19:58 crc kubenswrapper[4958]: I1008 08:19:58.107470 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:19:58 crc kubenswrapper[4958]: I1008 08:19:58.228041 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:20:00 crc kubenswrapper[4958]: I1008 08:20:00.071504 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-676fb56878-mj2b7" podUID="a461c706-cb38-4f48-bd86-1ebcbda38d60" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.124:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.124:8443: connect: connection refused" Oct 08 08:20:00 crc kubenswrapper[4958]: I1008 08:20:00.250167 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7b6f87784b-7wwr6" podUID="13a81014-1f64-436a-b3e0-1f8f79e3a7a0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.125:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.125:8443: connect: connection refused" Oct 08 08:20:01 crc kubenswrapper[4958]: I1008 08:20:01.614727 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cnfhd"] Oct 08 08:20:01 crc kubenswrapper[4958]: I1008 08:20:01.631777 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cnfhd"] Oct 08 08:20:01 crc kubenswrapper[4958]: I1008 08:20:01.631901 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:01 crc kubenswrapper[4958]: I1008 08:20:01.705540 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-utilities\") pod \"redhat-operators-cnfhd\" (UID: \"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3\") " pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:01 crc kubenswrapper[4958]: I1008 08:20:01.705865 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-catalog-content\") pod \"redhat-operators-cnfhd\" (UID: \"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3\") " pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:01 crc kubenswrapper[4958]: I1008 08:20:01.705925 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzlct\" (UniqueName: \"kubernetes.io/projected/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-kube-api-access-kzlct\") pod \"redhat-operators-cnfhd\" (UID: \"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3\") " pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:01 crc kubenswrapper[4958]: I1008 08:20:01.807748 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-utilities\") pod \"redhat-operators-cnfhd\" (UID: \"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3\") " pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:01 crc kubenswrapper[4958]: I1008 08:20:01.807866 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-catalog-content\") pod \"redhat-operators-cnfhd\" (UID: 
\"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3\") " pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:01 crc kubenswrapper[4958]: I1008 08:20:01.807937 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzlct\" (UniqueName: \"kubernetes.io/projected/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-kube-api-access-kzlct\") pod \"redhat-operators-cnfhd\" (UID: \"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3\") " pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:01 crc kubenswrapper[4958]: I1008 08:20:01.808244 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-utilities\") pod \"redhat-operators-cnfhd\" (UID: \"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3\") " pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:01 crc kubenswrapper[4958]: I1008 08:20:01.808478 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-catalog-content\") pod \"redhat-operators-cnfhd\" (UID: \"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3\") " pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:01 crc kubenswrapper[4958]: I1008 08:20:01.831628 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzlct\" (UniqueName: \"kubernetes.io/projected/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-kube-api-access-kzlct\") pod \"redhat-operators-cnfhd\" (UID: \"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3\") " pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:01 crc kubenswrapper[4958]: I1008 08:20:01.961007 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:02 crc kubenswrapper[4958]: I1008 08:20:02.475748 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cnfhd"] Oct 08 08:20:03 crc kubenswrapper[4958]: I1008 08:20:03.190578 4958 generic.go:334] "Generic (PLEG): container finished" podID="e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" containerID="adf2542d2315cdcfa6c81dd90d20ac3384ba0823058d5654a578f384afcc5143" exitCode=0 Oct 08 08:20:03 crc kubenswrapper[4958]: I1008 08:20:03.190779 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnfhd" event={"ID":"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3","Type":"ContainerDied","Data":"adf2542d2315cdcfa6c81dd90d20ac3384ba0823058d5654a578f384afcc5143"} Oct 08 08:20:03 crc kubenswrapper[4958]: I1008 08:20:03.190847 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnfhd" event={"ID":"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3","Type":"ContainerStarted","Data":"6ee5da5c0b47dddad9f64dce5fd1cb41d459ac530574f4eadf8780ff4517e2cb"} Oct 08 08:20:05 crc kubenswrapper[4958]: I1008 08:20:05.247660 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnfhd" event={"ID":"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3","Type":"ContainerStarted","Data":"d2a883a8e5ac7f1c87d90f3e4ff4d3b6212715c255037c2c718e7c2707434a78"} Oct 08 08:20:06 crc kubenswrapper[4958]: I1008 08:20:06.261838 4958 generic.go:334] "Generic (PLEG): container finished" podID="e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" containerID="d2a883a8e5ac7f1c87d90f3e4ff4d3b6212715c255037c2c718e7c2707434a78" exitCode=0 Oct 08 08:20:06 crc kubenswrapper[4958]: I1008 08:20:06.261889 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnfhd" 
event={"ID":"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3","Type":"ContainerDied","Data":"d2a883a8e5ac7f1c87d90f3e4ff4d3b6212715c255037c2c718e7c2707434a78"} Oct 08 08:20:07 crc kubenswrapper[4958]: I1008 08:20:07.289242 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnfhd" event={"ID":"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3","Type":"ContainerStarted","Data":"f81842d6f59e71aebedcf8e7b0c143ca598ee441d414849255f53e2d8d287d2e"} Oct 08 08:20:07 crc kubenswrapper[4958]: I1008 08:20:07.316688 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cnfhd" podStartSLOduration=2.65736092 podStartE2EDuration="6.316656839s" podCreationTimestamp="2025-10-08 08:20:01 +0000 UTC" firstStartedPulling="2025-10-08 08:20:03.1923824 +0000 UTC m=+6346.322075001" lastFinishedPulling="2025-10-08 08:20:06.851678319 +0000 UTC m=+6349.981370920" observedRunningTime="2025-10-08 08:20:07.307589173 +0000 UTC m=+6350.437281774" watchObservedRunningTime="2025-10-08 08:20:07.316656839 +0000 UTC m=+6350.446349440" Oct 08 08:20:11 crc kubenswrapper[4958]: I1008 08:20:11.783163 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:20:11 crc kubenswrapper[4958]: I1008 08:20:11.962063 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:11 crc kubenswrapper[4958]: I1008 08:20:11.962130 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:12 crc kubenswrapper[4958]: I1008 08:20:12.045725 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:20:13 crc kubenswrapper[4958]: I1008 08:20:13.039523 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cnfhd" 
podUID="e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" containerName="registry-server" probeResult="failure" output=< Oct 08 08:20:13 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 08:20:13 crc kubenswrapper[4958]: > Oct 08 08:20:13 crc kubenswrapper[4958]: I1008 08:20:13.573121 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:20:13 crc kubenswrapper[4958]: I1008 08:20:13.658401 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:20:13 crc kubenswrapper[4958]: I1008 08:20:13.672858 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-676fb56878-mj2b7"] Oct 08 08:20:14 crc kubenswrapper[4958]: I1008 08:20:14.377619 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-676fb56878-mj2b7" podUID="a461c706-cb38-4f48-bd86-1ebcbda38d60" containerName="horizon-log" containerID="cri-o://130cb96bbc6c24007ce44038d93dfcb9ea898002054dba4d777f9898a5588b34" gracePeriod=30 Oct 08 08:20:14 crc kubenswrapper[4958]: I1008 08:20:14.378014 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-676fb56878-mj2b7" podUID="a461c706-cb38-4f48-bd86-1ebcbda38d60" containerName="horizon" containerID="cri-o://913670e3283ff1c65c2d593c133964ee959fa992d12f57df209c391786a07d9d" gracePeriod=30 Oct 08 08:20:17 crc kubenswrapper[4958]: E1008 08:20:17.756208 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda461c706_cb38_4f48_bd86_1ebcbda38d60.slice/crio-913670e3283ff1c65c2d593c133964ee959fa992d12f57df209c391786a07d9d.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda461c706_cb38_4f48_bd86_1ebcbda38d60.slice/crio-conmon-913670e3283ff1c65c2d593c133964ee959fa992d12f57df209c391786a07d9d.scope\": RecentStats: unable to find data in memory cache]" Oct 08 08:20:18 crc kubenswrapper[4958]: I1008 08:20:18.425256 4958 generic.go:334] "Generic (PLEG): container finished" podID="a461c706-cb38-4f48-bd86-1ebcbda38d60" containerID="913670e3283ff1c65c2d593c133964ee959fa992d12f57df209c391786a07d9d" exitCode=0 Oct 08 08:20:18 crc kubenswrapper[4958]: I1008 08:20:18.425315 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676fb56878-mj2b7" event={"ID":"a461c706-cb38-4f48-bd86-1ebcbda38d60","Type":"ContainerDied","Data":"913670e3283ff1c65c2d593c133964ee959fa992d12f57df209c391786a07d9d"} Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.060466 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-2tvl7"] Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.071363 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-676fb56878-mj2b7" podUID="a461c706-cb38-4f48-bd86-1ebcbda38d60" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.124:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.124:8443: connect: connection refused" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.085494 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-2tvl7"] Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.455136 4958 generic.go:334] "Generic (PLEG): container finished" podID="1ea96372-3600-4d56-8c12-3cb7aeca1fe4" containerID="9e0835122e7537367bdc84405937c706bc9be0ef3e5c3bb4e362d3456b7d2b3c" exitCode=137 Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.455164 4958 generic.go:334] "Generic (PLEG): container finished" podID="1ea96372-3600-4d56-8c12-3cb7aeca1fe4" 
containerID="b08eaf50b5db63f5f16b455bbfe85af134aedc1912efa811197c32347267bf99" exitCode=137 Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.455205 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86ddbdcb65-zgzvw" event={"ID":"1ea96372-3600-4d56-8c12-3cb7aeca1fe4","Type":"ContainerDied","Data":"9e0835122e7537367bdc84405937c706bc9be0ef3e5c3bb4e362d3456b7d2b3c"} Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.455230 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86ddbdcb65-zgzvw" event={"ID":"1ea96372-3600-4d56-8c12-3cb7aeca1fe4","Type":"ContainerDied","Data":"b08eaf50b5db63f5f16b455bbfe85af134aedc1912efa811197c32347267bf99"} Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.458407 4958 generic.go:334] "Generic (PLEG): container finished" podID="90684036-aca5-4f05-b226-39182028d509" containerID="32563b6b2b756cec5b4eaec7cfa19db3388f4f2010513dc7174091830b7dc67e" exitCode=137 Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.458700 4958 generic.go:334] "Generic (PLEG): container finished" podID="90684036-aca5-4f05-b226-39182028d509" containerID="8abce8e154b61dbcf14d91c8db181eb59361cb6e6b62687e72e094ffbd223186" exitCode=137 Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.458473 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c89555865-kchbz" event={"ID":"90684036-aca5-4f05-b226-39182028d509","Type":"ContainerDied","Data":"32563b6b2b756cec5b4eaec7cfa19db3388f4f2010513dc7174091830b7dc67e"} Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.458740 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c89555865-kchbz" event={"ID":"90684036-aca5-4f05-b226-39182028d509","Type":"ContainerDied","Data":"8abce8e154b61dbcf14d91c8db181eb59361cb6e6b62687e72e094ffbd223186"} Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.491245 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.558480 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.590026 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-logs\") pod \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.590214 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90684036-aca5-4f05-b226-39182028d509-config-data\") pod \"90684036-aca5-4f05-b226-39182028d509\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.590343 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-horizon-secret-key\") pod \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.590451 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b7fl\" (UniqueName: \"kubernetes.io/projected/90684036-aca5-4f05-b226-39182028d509-kube-api-access-6b7fl\") pod \"90684036-aca5-4f05-b226-39182028d509\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.590561 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90684036-aca5-4f05-b226-39182028d509-logs\") pod \"90684036-aca5-4f05-b226-39182028d509\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " Oct 
08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.590598 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-scripts\") pod \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.590637 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-logs" (OuterVolumeSpecName: "logs") pod "1ea96372-3600-4d56-8c12-3cb7aeca1fe4" (UID: "1ea96372-3600-4d56-8c12-3cb7aeca1fe4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.590683 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90684036-aca5-4f05-b226-39182028d509-horizon-secret-key\") pod \"90684036-aca5-4f05-b226-39182028d509\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.590753 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90684036-aca5-4f05-b226-39182028d509-scripts\") pod \"90684036-aca5-4f05-b226-39182028d509\" (UID: \"90684036-aca5-4f05-b226-39182028d509\") " Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.590839 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kmlm\" (UniqueName: \"kubernetes.io/projected/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-kube-api-access-5kmlm\") pod \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.590876 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-config-data\") pod \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\" (UID: \"1ea96372-3600-4d56-8c12-3cb7aeca1fe4\") " Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.591078 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90684036-aca5-4f05-b226-39182028d509-logs" (OuterVolumeSpecName: "logs") pod "90684036-aca5-4f05-b226-39182028d509" (UID: "90684036-aca5-4f05-b226-39182028d509"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.593410 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.593444 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90684036-aca5-4f05-b226-39182028d509-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.597317 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1ea96372-3600-4d56-8c12-3cb7aeca1fe4" (UID: "1ea96372-3600-4d56-8c12-3cb7aeca1fe4"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.597422 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90684036-aca5-4f05-b226-39182028d509-kube-api-access-6b7fl" (OuterVolumeSpecName: "kube-api-access-6b7fl") pod "90684036-aca5-4f05-b226-39182028d509" (UID: "90684036-aca5-4f05-b226-39182028d509"). InnerVolumeSpecName "kube-api-access-6b7fl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.597820 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-kube-api-access-5kmlm" (OuterVolumeSpecName: "kube-api-access-5kmlm") pod "1ea96372-3600-4d56-8c12-3cb7aeca1fe4" (UID: "1ea96372-3600-4d56-8c12-3cb7aeca1fe4"). InnerVolumeSpecName "kube-api-access-5kmlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.597875 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90684036-aca5-4f05-b226-39182028d509-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "90684036-aca5-4f05-b226-39182028d509" (UID: "90684036-aca5-4f05-b226-39182028d509"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.618605 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-scripts" (OuterVolumeSpecName: "scripts") pod "1ea96372-3600-4d56-8c12-3cb7aeca1fe4" (UID: "1ea96372-3600-4d56-8c12-3cb7aeca1fe4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.622037 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90684036-aca5-4f05-b226-39182028d509-config-data" (OuterVolumeSpecName: "config-data") pod "90684036-aca5-4f05-b226-39182028d509" (UID: "90684036-aca5-4f05-b226-39182028d509"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.632847 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90684036-aca5-4f05-b226-39182028d509-scripts" (OuterVolumeSpecName: "scripts") pod "90684036-aca5-4f05-b226-39182028d509" (UID: "90684036-aca5-4f05-b226-39182028d509"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.639927 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-config-data" (OuterVolumeSpecName: "config-data") pod "1ea96372-3600-4d56-8c12-3cb7aeca1fe4" (UID: "1ea96372-3600-4d56-8c12-3cb7aeca1fe4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.695635 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.695678 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90684036-aca5-4f05-b226-39182028d509-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.695695 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90684036-aca5-4f05-b226-39182028d509-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.695708 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kmlm\" (UniqueName: \"kubernetes.io/projected/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-kube-api-access-5kmlm\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:20 crc 
kubenswrapper[4958]: I1008 08:20:20.695720 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.695732 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90684036-aca5-4f05-b226-39182028d509-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.695743 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ea96372-3600-4d56-8c12-3cb7aeca1fe4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:20 crc kubenswrapper[4958]: I1008 08:20:20.695755 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b7fl\" (UniqueName: \"kubernetes.io/projected/90684036-aca5-4f05-b226-39182028d509-kube-api-access-6b7fl\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:21 crc kubenswrapper[4958]: I1008 08:20:21.469160 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c89555865-kchbz" Oct 08 08:20:21 crc kubenswrapper[4958]: I1008 08:20:21.469335 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c89555865-kchbz" event={"ID":"90684036-aca5-4f05-b226-39182028d509","Type":"ContainerDied","Data":"eecf1372d605c34b02a10bee2248d37b86007f7593ca5c8ed015457b375cd914"} Oct 08 08:20:21 crc kubenswrapper[4958]: I1008 08:20:21.469635 4958 scope.go:117] "RemoveContainer" containerID="32563b6b2b756cec5b4eaec7cfa19db3388f4f2010513dc7174091830b7dc67e" Oct 08 08:20:21 crc kubenswrapper[4958]: I1008 08:20:21.472258 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86ddbdcb65-zgzvw" event={"ID":"1ea96372-3600-4d56-8c12-3cb7aeca1fe4","Type":"ContainerDied","Data":"95273b18713118d638330c0f0fbe39cfd475e14813907cb903e0b1a82bc9eb97"} Oct 08 08:20:21 crc kubenswrapper[4958]: I1008 08:20:21.472346 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86ddbdcb65-zgzvw" Oct 08 08:20:21 crc kubenswrapper[4958]: I1008 08:20:21.505625 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c89555865-kchbz"] Oct 08 08:20:21 crc kubenswrapper[4958]: I1008 08:20:21.518026 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c89555865-kchbz"] Oct 08 08:20:21 crc kubenswrapper[4958]: I1008 08:20:21.526262 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86ddbdcb65-zgzvw"] Oct 08 08:20:21 crc kubenswrapper[4958]: I1008 08:20:21.532389 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-86ddbdcb65-zgzvw"] Oct 08 08:20:21 crc kubenswrapper[4958]: I1008 08:20:21.593316 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="021f0ece-2f50-46bc-8380-de8e5d1e1c03" path="/var/lib/kubelet/pods/021f0ece-2f50-46bc-8380-de8e5d1e1c03/volumes" Oct 08 08:20:21 crc kubenswrapper[4958]: I1008 08:20:21.607088 
4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea96372-3600-4d56-8c12-3cb7aeca1fe4" path="/var/lib/kubelet/pods/1ea96372-3600-4d56-8c12-3cb7aeca1fe4/volumes" Oct 08 08:20:21 crc kubenswrapper[4958]: I1008 08:20:21.608366 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90684036-aca5-4f05-b226-39182028d509" path="/var/lib/kubelet/pods/90684036-aca5-4f05-b226-39182028d509/volumes" Oct 08 08:20:21 crc kubenswrapper[4958]: I1008 08:20:21.665280 4958 scope.go:117] "RemoveContainer" containerID="8abce8e154b61dbcf14d91c8db181eb59361cb6e6b62687e72e094ffbd223186" Oct 08 08:20:21 crc kubenswrapper[4958]: I1008 08:20:21.695080 4958 scope.go:117] "RemoveContainer" containerID="9e0835122e7537367bdc84405937c706bc9be0ef3e5c3bb4e362d3456b7d2b3c" Oct 08 08:20:21 crc kubenswrapper[4958]: I1008 08:20:21.886964 4958 scope.go:117] "RemoveContainer" containerID="b08eaf50b5db63f5f16b455bbfe85af134aedc1912efa811197c32347267bf99" Oct 08 08:20:22 crc kubenswrapper[4958]: I1008 08:20:22.032452 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:22 crc kubenswrapper[4958]: I1008 08:20:22.088922 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:22 crc kubenswrapper[4958]: I1008 08:20:22.284868 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cnfhd"] Oct 08 08:20:23 crc kubenswrapper[4958]: I1008 08:20:23.502427 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cnfhd" podUID="e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" containerName="registry-server" containerID="cri-o://f81842d6f59e71aebedcf8e7b0c143ca598ee441d414849255f53e2d8d287d2e" gracePeriod=2 Oct 08 08:20:24 crc kubenswrapper[4958]: I1008 08:20:24.515279 4958 generic.go:334] "Generic (PLEG): container 
finished" podID="e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" containerID="f81842d6f59e71aebedcf8e7b0c143ca598ee441d414849255f53e2d8d287d2e" exitCode=0 Oct 08 08:20:24 crc kubenswrapper[4958]: I1008 08:20:24.515358 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnfhd" event={"ID":"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3","Type":"ContainerDied","Data":"f81842d6f59e71aebedcf8e7b0c143ca598ee441d414849255f53e2d8d287d2e"} Oct 08 08:20:24 crc kubenswrapper[4958]: I1008 08:20:24.515566 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnfhd" event={"ID":"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3","Type":"ContainerDied","Data":"6ee5da5c0b47dddad9f64dce5fd1cb41d459ac530574f4eadf8780ff4517e2cb"} Oct 08 08:20:24 crc kubenswrapper[4958]: I1008 08:20:24.515579 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ee5da5c0b47dddad9f64dce5fd1cb41d459ac530574f4eadf8780ff4517e2cb" Oct 08 08:20:24 crc kubenswrapper[4958]: I1008 08:20:24.572697 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:24 crc kubenswrapper[4958]: I1008 08:20:24.579676 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-utilities\") pod \"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3\" (UID: \"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3\") " Oct 08 08:20:24 crc kubenswrapper[4958]: I1008 08:20:24.579751 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-catalog-content\") pod \"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3\" (UID: \"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3\") " Oct 08 08:20:24 crc kubenswrapper[4958]: I1008 08:20:24.579933 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzlct\" (UniqueName: \"kubernetes.io/projected/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-kube-api-access-kzlct\") pod \"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3\" (UID: \"e49df3a6-16d0-4a36-b0e2-5db7e388b3a3\") " Oct 08 08:20:24 crc kubenswrapper[4958]: I1008 08:20:24.580886 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-utilities" (OuterVolumeSpecName: "utilities") pod "e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" (UID: "e49df3a6-16d0-4a36-b0e2-5db7e388b3a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:20:24 crc kubenswrapper[4958]: I1008 08:20:24.587176 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-kube-api-access-kzlct" (OuterVolumeSpecName: "kube-api-access-kzlct") pod "e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" (UID: "e49df3a6-16d0-4a36-b0e2-5db7e388b3a3"). InnerVolumeSpecName "kube-api-access-kzlct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:20:24 crc kubenswrapper[4958]: I1008 08:20:24.671722 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" (UID: "e49df3a6-16d0-4a36-b0e2-5db7e388b3a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:20:24 crc kubenswrapper[4958]: I1008 08:20:24.682169 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzlct\" (UniqueName: \"kubernetes.io/projected/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-kube-api-access-kzlct\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:24 crc kubenswrapper[4958]: I1008 08:20:24.682207 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:24 crc kubenswrapper[4958]: I1008 08:20:24.682221 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:25 crc kubenswrapper[4958]: I1008 08:20:25.530008 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cnfhd" Oct 08 08:20:25 crc kubenswrapper[4958]: I1008 08:20:25.603070 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cnfhd"] Oct 08 08:20:25 crc kubenswrapper[4958]: I1008 08:20:25.603215 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cnfhd"] Oct 08 08:20:27 crc kubenswrapper[4958]: I1008 08:20:27.593550 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" path="/var/lib/kubelet/pods/e49df3a6-16d0-4a36-b0e2-5db7e388b3a3/volumes" Oct 08 08:20:30 crc kubenswrapper[4958]: I1008 08:20:30.034135 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ec34-account-create-dkrmc"] Oct 08 08:20:30 crc kubenswrapper[4958]: I1008 08:20:30.046505 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ec34-account-create-dkrmc"] Oct 08 08:20:30 crc kubenswrapper[4958]: I1008 08:20:30.070753 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-676fb56878-mj2b7" podUID="a461c706-cb38-4f48-bd86-1ebcbda38d60" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.124:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.124:8443: connect: connection refused" Oct 08 08:20:31 crc kubenswrapper[4958]: I1008 08:20:31.603687 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea17bac7-d2bd-4285-a9d2-184c1eeec4ad" path="/var/lib/kubelet/pods/ea17bac7-d2bd-4285-a9d2-184c1eeec4ad/volumes" Oct 08 08:20:38 crc kubenswrapper[4958]: I1008 08:20:38.059213 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-9x5c7"] Oct 08 08:20:38 crc kubenswrapper[4958]: I1008 08:20:38.075918 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-9x5c7"] Oct 08 08:20:39 crc kubenswrapper[4958]: 
I1008 08:20:39.600336 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61952772-bb5b-4065-b0d6-ad86d8d246d5" path="/var/lib/kubelet/pods/61952772-bb5b-4065-b0d6-ad86d8d246d5/volumes" Oct 08 08:20:40 crc kubenswrapper[4958]: I1008 08:20:40.072686 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-676fb56878-mj2b7" podUID="a461c706-cb38-4f48-bd86-1ebcbda38d60" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.124:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.124:8443: connect: connection refused" Oct 08 08:20:40 crc kubenswrapper[4958]: I1008 08:20:40.072935 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:20:41 crc kubenswrapper[4958]: I1008 08:20:41.026136 4958 scope.go:117] "RemoveContainer" containerID="e7f2418f0d6d62c4fe1ffbd940f669ee2247c3ef374845dde4c44fd4ed60cdc0" Oct 08 08:20:41 crc kubenswrapper[4958]: I1008 08:20:41.098478 4958 scope.go:117] "RemoveContainer" containerID="1b92dbe901bdf178227d7d957751eca42ca7d34f7d41b372b825d297fdc4f642" Oct 08 08:20:41 crc kubenswrapper[4958]: I1008 08:20:41.155877 4958 scope.go:117] "RemoveContainer" containerID="a0299c96d4d3d7747c379addefd3be950e3dec7630b729995d1db3f1c1377e63" Oct 08 08:20:41 crc kubenswrapper[4958]: I1008 08:20:41.195375 4958 scope.go:117] "RemoveContainer" containerID="599b21c9fdedab1878ccfb52e2ac32b63753d639c292c044288c7d5e3da7950d" Oct 08 08:20:41 crc kubenswrapper[4958]: I1008 08:20:41.241315 4958 scope.go:117] "RemoveContainer" containerID="f014ab82cf1c4c39feb07937d5167a4ef7aad791e15e6eb25b77ac02dd35a182" Oct 08 08:20:41 crc kubenswrapper[4958]: I1008 08:20:41.290485 4958 scope.go:117] "RemoveContainer" containerID="65de011391af5deab285e20798bbe94d8c8f3cdb670d3735f1b866a47fe92b54" Oct 08 08:20:41 crc kubenswrapper[4958]: I1008 08:20:41.316273 4958 scope.go:117] "RemoveContainer" 
containerID="42c225a5c21d49e6294da657e4fa3af7063ab3aa7c6b88a018c8e5b46271eb6a" Oct 08 08:20:44 crc kubenswrapper[4958]: I1008 08:20:44.815896 4958 generic.go:334] "Generic (PLEG): container finished" podID="a461c706-cb38-4f48-bd86-1ebcbda38d60" containerID="130cb96bbc6c24007ce44038d93dfcb9ea898002054dba4d777f9898a5588b34" exitCode=137 Oct 08 08:20:44 crc kubenswrapper[4958]: I1008 08:20:44.815996 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676fb56878-mj2b7" event={"ID":"a461c706-cb38-4f48-bd86-1ebcbda38d60","Type":"ContainerDied","Data":"130cb96bbc6c24007ce44038d93dfcb9ea898002054dba4d777f9898a5588b34"} Oct 08 08:20:44 crc kubenswrapper[4958]: I1008 08:20:44.816589 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676fb56878-mj2b7" event={"ID":"a461c706-cb38-4f48-bd86-1ebcbda38d60","Type":"ContainerDied","Data":"581fb2a9a9a2395d5cc7ef2f4aff5d45f3fce47d09949854e0865bc012c3c7b5"} Oct 08 08:20:44 crc kubenswrapper[4958]: I1008 08:20:44.816611 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="581fb2a9a9a2395d5cc7ef2f4aff5d45f3fce47d09949854e0865bc012c3c7b5" Oct 08 08:20:44 crc kubenswrapper[4958]: I1008 08:20:44.851974 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.038462 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-horizon-tls-certs\") pod \"a461c706-cb38-4f48-bd86-1ebcbda38d60\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.038529 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-combined-ca-bundle\") pod \"a461c706-cb38-4f48-bd86-1ebcbda38d60\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.038566 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-horizon-secret-key\") pod \"a461c706-cb38-4f48-bd86-1ebcbda38d60\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.038609 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a461c706-cb38-4f48-bd86-1ebcbda38d60-config-data\") pod \"a461c706-cb38-4f48-bd86-1ebcbda38d60\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.038650 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a461c706-cb38-4f48-bd86-1ebcbda38d60-logs\") pod \"a461c706-cb38-4f48-bd86-1ebcbda38d60\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.038713 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a461c706-cb38-4f48-bd86-1ebcbda38d60-scripts\") pod \"a461c706-cb38-4f48-bd86-1ebcbda38d60\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.038802 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7fhr\" (UniqueName: \"kubernetes.io/projected/a461c706-cb38-4f48-bd86-1ebcbda38d60-kube-api-access-v7fhr\") pod \"a461c706-cb38-4f48-bd86-1ebcbda38d60\" (UID: \"a461c706-cb38-4f48-bd86-1ebcbda38d60\") " Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.039223 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a461c706-cb38-4f48-bd86-1ebcbda38d60-logs" (OuterVolumeSpecName: "logs") pod "a461c706-cb38-4f48-bd86-1ebcbda38d60" (UID: "a461c706-cb38-4f48-bd86-1ebcbda38d60"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.049654 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a461c706-cb38-4f48-bd86-1ebcbda38d60-kube-api-access-v7fhr" (OuterVolumeSpecName: "kube-api-access-v7fhr") pod "a461c706-cb38-4f48-bd86-1ebcbda38d60" (UID: "a461c706-cb38-4f48-bd86-1ebcbda38d60"). InnerVolumeSpecName "kube-api-access-v7fhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.060189 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a461c706-cb38-4f48-bd86-1ebcbda38d60" (UID: "a461c706-cb38-4f48-bd86-1ebcbda38d60"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.075456 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a461c706-cb38-4f48-bd86-1ebcbda38d60" (UID: "a461c706-cb38-4f48-bd86-1ebcbda38d60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.077227 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a461c706-cb38-4f48-bd86-1ebcbda38d60-scripts" (OuterVolumeSpecName: "scripts") pod "a461c706-cb38-4f48-bd86-1ebcbda38d60" (UID: "a461c706-cb38-4f48-bd86-1ebcbda38d60"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.101077 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a461c706-cb38-4f48-bd86-1ebcbda38d60-config-data" (OuterVolumeSpecName: "config-data") pod "a461c706-cb38-4f48-bd86-1ebcbda38d60" (UID: "a461c706-cb38-4f48-bd86-1ebcbda38d60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.115850 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "a461c706-cb38-4f48-bd86-1ebcbda38d60" (UID: "a461c706-cb38-4f48-bd86-1ebcbda38d60"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.141384 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.141424 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.141438 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a461c706-cb38-4f48-bd86-1ebcbda38d60-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.141450 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a461c706-cb38-4f48-bd86-1ebcbda38d60-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.141466 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a461c706-cb38-4f48-bd86-1ebcbda38d60-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.141478 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a461c706-cb38-4f48-bd86-1ebcbda38d60-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.141492 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7fhr\" (UniqueName: \"kubernetes.io/projected/a461c706-cb38-4f48-bd86-1ebcbda38d60-kube-api-access-v7fhr\") on node \"crc\" DevicePath \"\"" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.831999 4958 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-676fb56878-mj2b7" Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.870780 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-676fb56878-mj2b7"] Oct 08 08:20:45 crc kubenswrapper[4958]: I1008 08:20:45.878513 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-676fb56878-mj2b7"] Oct 08 08:20:47 crc kubenswrapper[4958]: I1008 08:20:47.601030 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a461c706-cb38-4f48-bd86-1ebcbda38d60" path="/var/lib/kubelet/pods/a461c706-cb38-4f48-bd86-1ebcbda38d60/volumes" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.715626 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5db9c89c9d-78bjw"] Oct 08 08:21:19 crc kubenswrapper[4958]: E1008 08:21:19.716727 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" containerName="registry-server" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.716744 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" containerName="registry-server" Oct 08 08:21:19 crc kubenswrapper[4958]: E1008 08:21:19.716775 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" containerName="extract-utilities" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.716784 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" containerName="extract-utilities" Oct 08 08:21:19 crc kubenswrapper[4958]: E1008 08:21:19.716800 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a461c706-cb38-4f48-bd86-1ebcbda38d60" containerName="horizon-log" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.716808 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a461c706-cb38-4f48-bd86-1ebcbda38d60" 
containerName="horizon-log" Oct 08 08:21:19 crc kubenswrapper[4958]: E1008 08:21:19.716817 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90684036-aca5-4f05-b226-39182028d509" containerName="horizon" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.716825 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="90684036-aca5-4f05-b226-39182028d509" containerName="horizon" Oct 08 08:21:19 crc kubenswrapper[4958]: E1008 08:21:19.716848 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea96372-3600-4d56-8c12-3cb7aeca1fe4" containerName="horizon-log" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.716855 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea96372-3600-4d56-8c12-3cb7aeca1fe4" containerName="horizon-log" Oct 08 08:21:19 crc kubenswrapper[4958]: E1008 08:21:19.716871 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a461c706-cb38-4f48-bd86-1ebcbda38d60" containerName="horizon" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.716877 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a461c706-cb38-4f48-bd86-1ebcbda38d60" containerName="horizon" Oct 08 08:21:19 crc kubenswrapper[4958]: E1008 08:21:19.716897 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea96372-3600-4d56-8c12-3cb7aeca1fe4" containerName="horizon" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.716905 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea96372-3600-4d56-8c12-3cb7aeca1fe4" containerName="horizon" Oct 08 08:21:19 crc kubenswrapper[4958]: E1008 08:21:19.716918 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90684036-aca5-4f05-b226-39182028d509" containerName="horizon-log" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.716924 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="90684036-aca5-4f05-b226-39182028d509" containerName="horizon-log" Oct 08 08:21:19 crc kubenswrapper[4958]: E1008 
08:21:19.716939 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" containerName="extract-content" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.717189 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" containerName="extract-content" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.717410 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a461c706-cb38-4f48-bd86-1ebcbda38d60" containerName="horizon" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.717423 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea96372-3600-4d56-8c12-3cb7aeca1fe4" containerName="horizon-log" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.717441 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="90684036-aca5-4f05-b226-39182028d509" containerName="horizon" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.717452 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e49df3a6-16d0-4a36-b0e2-5db7e388b3a3" containerName="registry-server" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.717472 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea96372-3600-4d56-8c12-3cb7aeca1fe4" containerName="horizon" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.717490 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="90684036-aca5-4f05-b226-39182028d509" containerName="horizon-log" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.717504 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a461c706-cb38-4f48-bd86-1ebcbda38d60" containerName="horizon-log" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.718837 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.731177 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5db9c89c9d-78bjw"] Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.870865 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-scripts\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.870938 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-horizon-secret-key\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.870969 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-logs\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.870995 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrq8m\" (UniqueName: \"kubernetes.io/projected/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-kube-api-access-lrq8m\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.871022 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-horizon-tls-certs\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.871258 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-config-data\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.871426 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-combined-ca-bundle\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.973214 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-combined-ca-bundle\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.973325 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-scripts\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.973395 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-horizon-secret-key\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.973418 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-logs\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.973447 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrq8m\" (UniqueName: \"kubernetes.io/projected/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-kube-api-access-lrq8m\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.973479 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-horizon-tls-certs\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.973548 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-config-data\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.974302 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-scripts\") pod \"horizon-5db9c89c9d-78bjw\" (UID: 
\"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.974378 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-logs\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.975264 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-config-data\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.979472 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-horizon-tls-certs\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.980530 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-horizon-secret-key\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.981825 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-combined-ca-bundle\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:19 crc kubenswrapper[4958]: I1008 08:21:19.990420 
4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrq8m\" (UniqueName: \"kubernetes.io/projected/9be8fb64-fc3c-4059-8e03-7a0b58cb30d4-kube-api-access-lrq8m\") pod \"horizon-5db9c89c9d-78bjw\" (UID: \"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4\") " pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:20 crc kubenswrapper[4958]: I1008 08:21:20.045327 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:20 crc kubenswrapper[4958]: I1008 08:21:20.533142 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5db9c89c9d-78bjw"] Oct 08 08:21:20 crc kubenswrapper[4958]: I1008 08:21:20.891053 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-6g59r"] Oct 08 08:21:20 crc kubenswrapper[4958]: I1008 08:21:20.892470 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-6g59r" Oct 08 08:21:20 crc kubenswrapper[4958]: I1008 08:21:20.909807 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-6g59r"] Oct 08 08:21:20 crc kubenswrapper[4958]: I1008 08:21:20.996198 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvbf2\" (UniqueName: \"kubernetes.io/projected/4b801665-c4dc-4c61-8223-da6d1f82885d-kube-api-access-nvbf2\") pod \"heat-db-create-6g59r\" (UID: \"4b801665-c4dc-4c61-8223-da6d1f82885d\") " pod="openstack/heat-db-create-6g59r" Oct 08 08:21:21 crc kubenswrapper[4958]: I1008 08:21:21.098051 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvbf2\" (UniqueName: \"kubernetes.io/projected/4b801665-c4dc-4c61-8223-da6d1f82885d-kube-api-access-nvbf2\") pod \"heat-db-create-6g59r\" (UID: \"4b801665-c4dc-4c61-8223-da6d1f82885d\") " pod="openstack/heat-db-create-6g59r" Oct 08 08:21:21 crc kubenswrapper[4958]: I1008 
08:21:21.114227 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvbf2\" (UniqueName: \"kubernetes.io/projected/4b801665-c4dc-4c61-8223-da6d1f82885d-kube-api-access-nvbf2\") pod \"heat-db-create-6g59r\" (UID: \"4b801665-c4dc-4c61-8223-da6d1f82885d\") " pod="openstack/heat-db-create-6g59r" Oct 08 08:21:21 crc kubenswrapper[4958]: I1008 08:21:21.213458 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-6g59r" Oct 08 08:21:21 crc kubenswrapper[4958]: I1008 08:21:21.292522 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5db9c89c9d-78bjw" event={"ID":"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4","Type":"ContainerStarted","Data":"25735e1d9b0dfe56fde69fb10de31ca74e860735a0a80bc5c8938448f33364b0"} Oct 08 08:21:21 crc kubenswrapper[4958]: I1008 08:21:21.292856 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5db9c89c9d-78bjw" event={"ID":"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4","Type":"ContainerStarted","Data":"cf80a2f316fd1ffb1b3ccf6d39277de22dcb42cd91ca028f753ed05334c0cf76"} Oct 08 08:21:21 crc kubenswrapper[4958]: I1008 08:21:21.292868 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5db9c89c9d-78bjw" event={"ID":"9be8fb64-fc3c-4059-8e03-7a0b58cb30d4","Type":"ContainerStarted","Data":"5ba16b882117eb6ba7e307206350167ec481584b590e7342176fbef4a54d4189"} Oct 08 08:21:21 crc kubenswrapper[4958]: I1008 08:21:21.316760 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5db9c89c9d-78bjw" podStartSLOduration=2.316742072 podStartE2EDuration="2.316742072s" podCreationTimestamp="2025-10-08 08:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:21:21.309374932 +0000 UTC m=+6424.439067543" watchObservedRunningTime="2025-10-08 08:21:21.316742072 +0000 UTC 
m=+6424.446434673" Oct 08 08:21:21 crc kubenswrapper[4958]: I1008 08:21:21.677989 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-6g59r"] Oct 08 08:21:21 crc kubenswrapper[4958]: W1008 08:21:21.694736 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b801665_c4dc_4c61_8223_da6d1f82885d.slice/crio-c31c986e4408205fe7543ec8e98d3004dcccf8c33fb3870218f8477ad9901165 WatchSource:0}: Error finding container c31c986e4408205fe7543ec8e98d3004dcccf8c33fb3870218f8477ad9901165: Status 404 returned error can't find the container with id c31c986e4408205fe7543ec8e98d3004dcccf8c33fb3870218f8477ad9901165 Oct 08 08:21:22 crc kubenswrapper[4958]: I1008 08:21:22.306793 4958 generic.go:334] "Generic (PLEG): container finished" podID="4b801665-c4dc-4c61-8223-da6d1f82885d" containerID="070c65c827915557ff4cabe953ccaf0aaebd19cc15ed8e920bce37cb6c91105c" exitCode=0 Oct 08 08:21:22 crc kubenswrapper[4958]: I1008 08:21:22.307305 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-6g59r" event={"ID":"4b801665-c4dc-4c61-8223-da6d1f82885d","Type":"ContainerDied","Data":"070c65c827915557ff4cabe953ccaf0aaebd19cc15ed8e920bce37cb6c91105c"} Oct 08 08:21:22 crc kubenswrapper[4958]: I1008 08:21:22.307360 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-6g59r" event={"ID":"4b801665-c4dc-4c61-8223-da6d1f82885d","Type":"ContainerStarted","Data":"c31c986e4408205fe7543ec8e98d3004dcccf8c33fb3870218f8477ad9901165"} Oct 08 08:21:23 crc kubenswrapper[4958]: I1008 08:21:23.724491 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-6g59r" Oct 08 08:21:23 crc kubenswrapper[4958]: I1008 08:21:23.860058 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvbf2\" (UniqueName: \"kubernetes.io/projected/4b801665-c4dc-4c61-8223-da6d1f82885d-kube-api-access-nvbf2\") pod \"4b801665-c4dc-4c61-8223-da6d1f82885d\" (UID: \"4b801665-c4dc-4c61-8223-da6d1f82885d\") " Oct 08 08:21:23 crc kubenswrapper[4958]: I1008 08:21:23.865927 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b801665-c4dc-4c61-8223-da6d1f82885d-kube-api-access-nvbf2" (OuterVolumeSpecName: "kube-api-access-nvbf2") pod "4b801665-c4dc-4c61-8223-da6d1f82885d" (UID: "4b801665-c4dc-4c61-8223-da6d1f82885d"). InnerVolumeSpecName "kube-api-access-nvbf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:21:23 crc kubenswrapper[4958]: I1008 08:21:23.963359 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvbf2\" (UniqueName: \"kubernetes.io/projected/4b801665-c4dc-4c61-8223-da6d1f82885d-kube-api-access-nvbf2\") on node \"crc\" DevicePath \"\"" Oct 08 08:21:24 crc kubenswrapper[4958]: I1008 08:21:24.330092 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-6g59r" event={"ID":"4b801665-c4dc-4c61-8223-da6d1f82885d","Type":"ContainerDied","Data":"c31c986e4408205fe7543ec8e98d3004dcccf8c33fb3870218f8477ad9901165"} Oct 08 08:21:24 crc kubenswrapper[4958]: I1008 08:21:24.330154 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c31c986e4408205fe7543ec8e98d3004dcccf8c33fb3870218f8477ad9901165" Oct 08 08:21:24 crc kubenswrapper[4958]: I1008 08:21:24.330160 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-6g59r" Oct 08 08:21:30 crc kubenswrapper[4958]: I1008 08:21:30.045969 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:30 crc kubenswrapper[4958]: I1008 08:21:30.046650 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5db9c89c9d-78bjw" Oct 08 08:21:30 crc kubenswrapper[4958]: I1008 08:21:30.048912 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5db9c89c9d-78bjw" podUID="9be8fb64-fc3c-4059-8e03-7a0b58cb30d4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.129:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.129:8443: connect: connection refused" Oct 08 08:21:30 crc kubenswrapper[4958]: I1008 08:21:30.938150 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-7e9b-account-create-6mx4r"] Oct 08 08:21:30 crc kubenswrapper[4958]: E1008 08:21:30.940311 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b801665-c4dc-4c61-8223-da6d1f82885d" containerName="mariadb-database-create" Oct 08 08:21:30 crc kubenswrapper[4958]: I1008 08:21:30.940672 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b801665-c4dc-4c61-8223-da6d1f82885d" containerName="mariadb-database-create" Oct 08 08:21:30 crc kubenswrapper[4958]: I1008 08:21:30.941646 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b801665-c4dc-4c61-8223-da6d1f82885d" containerName="mariadb-database-create" Oct 08 08:21:30 crc kubenswrapper[4958]: I1008 08:21:30.945379 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-7e9b-account-create-6mx4r" Oct 08 08:21:30 crc kubenswrapper[4958]: I1008 08:21:30.947380 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Oct 08 08:21:30 crc kubenswrapper[4958]: I1008 08:21:30.965665 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-7e9b-account-create-6mx4r"] Oct 08 08:21:31 crc kubenswrapper[4958]: I1008 08:21:31.044632 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d59pk\" (UniqueName: \"kubernetes.io/projected/3102efb5-c30c-441d-bad2-0822dbedad36-kube-api-access-d59pk\") pod \"heat-7e9b-account-create-6mx4r\" (UID: \"3102efb5-c30c-441d-bad2-0822dbedad36\") " pod="openstack/heat-7e9b-account-create-6mx4r" Oct 08 08:21:31 crc kubenswrapper[4958]: I1008 08:21:31.146968 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d59pk\" (UniqueName: \"kubernetes.io/projected/3102efb5-c30c-441d-bad2-0822dbedad36-kube-api-access-d59pk\") pod \"heat-7e9b-account-create-6mx4r\" (UID: \"3102efb5-c30c-441d-bad2-0822dbedad36\") " pod="openstack/heat-7e9b-account-create-6mx4r" Oct 08 08:21:31 crc kubenswrapper[4958]: I1008 08:21:31.165192 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d59pk\" (UniqueName: \"kubernetes.io/projected/3102efb5-c30c-441d-bad2-0822dbedad36-kube-api-access-d59pk\") pod \"heat-7e9b-account-create-6mx4r\" (UID: \"3102efb5-c30c-441d-bad2-0822dbedad36\") " pod="openstack/heat-7e9b-account-create-6mx4r" Oct 08 08:21:31 crc kubenswrapper[4958]: I1008 08:21:31.286962 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-7e9b-account-create-6mx4r" Oct 08 08:21:31 crc kubenswrapper[4958]: I1008 08:21:31.777717 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-7e9b-account-create-6mx4r"] Oct 08 08:21:32 crc kubenswrapper[4958]: I1008 08:21:32.468105 4958 generic.go:334] "Generic (PLEG): container finished" podID="3102efb5-c30c-441d-bad2-0822dbedad36" containerID="d4f132119f42015ad30417916fa9bcfa4961b4c66e9e312785837c3bfa3fa46f" exitCode=0 Oct 08 08:21:32 crc kubenswrapper[4958]: I1008 08:21:32.468238 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-7e9b-account-create-6mx4r" event={"ID":"3102efb5-c30c-441d-bad2-0822dbedad36","Type":"ContainerDied","Data":"d4f132119f42015ad30417916fa9bcfa4961b4c66e9e312785837c3bfa3fa46f"} Oct 08 08:21:32 crc kubenswrapper[4958]: I1008 08:21:32.468555 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-7e9b-account-create-6mx4r" event={"ID":"3102efb5-c30c-441d-bad2-0822dbedad36","Type":"ContainerStarted","Data":"a965f8b0520da1e08a4fc5502bc00fdd1a8eda4b896d27bbbee18ac8a6be8d68"} Oct 08 08:21:33 crc kubenswrapper[4958]: I1008 08:21:33.895809 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-7e9b-account-create-6mx4r" Oct 08 08:21:34 crc kubenswrapper[4958]: I1008 08:21:34.005301 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d59pk\" (UniqueName: \"kubernetes.io/projected/3102efb5-c30c-441d-bad2-0822dbedad36-kube-api-access-d59pk\") pod \"3102efb5-c30c-441d-bad2-0822dbedad36\" (UID: \"3102efb5-c30c-441d-bad2-0822dbedad36\") " Oct 08 08:21:34 crc kubenswrapper[4958]: I1008 08:21:34.013739 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3102efb5-c30c-441d-bad2-0822dbedad36-kube-api-access-d59pk" (OuterVolumeSpecName: "kube-api-access-d59pk") pod "3102efb5-c30c-441d-bad2-0822dbedad36" (UID: "3102efb5-c30c-441d-bad2-0822dbedad36"). InnerVolumeSpecName "kube-api-access-d59pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:21:34 crc kubenswrapper[4958]: I1008 08:21:34.108203 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d59pk\" (UniqueName: \"kubernetes.io/projected/3102efb5-c30c-441d-bad2-0822dbedad36-kube-api-access-d59pk\") on node \"crc\" DevicePath \"\"" Oct 08 08:21:34 crc kubenswrapper[4958]: I1008 08:21:34.492401 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-7e9b-account-create-6mx4r" event={"ID":"3102efb5-c30c-441d-bad2-0822dbedad36","Type":"ContainerDied","Data":"a965f8b0520da1e08a4fc5502bc00fdd1a8eda4b896d27bbbee18ac8a6be8d68"} Oct 08 08:21:34 crc kubenswrapper[4958]: I1008 08:21:34.492450 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a965f8b0520da1e08a4fc5502bc00fdd1a8eda4b896d27bbbee18ac8a6be8d68" Oct 08 08:21:34 crc kubenswrapper[4958]: I1008 08:21:34.492919 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-7e9b-account-create-6mx4r"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.008091 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-t4nvc"]
Oct 08 08:21:36 crc kubenswrapper[4958]: E1008 08:21:36.009111 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3102efb5-c30c-441d-bad2-0822dbedad36" containerName="mariadb-account-create"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.009149 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3102efb5-c30c-441d-bad2-0822dbedad36" containerName="mariadb-account-create"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.009726 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3102efb5-c30c-441d-bad2-0822dbedad36" containerName="mariadb-account-create"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.010746 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-t4nvc"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.016555 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.016808 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nd76j"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.034179 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-t4nvc"]
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.163520 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-combined-ca-bundle\") pod \"heat-db-sync-t4nvc\" (UID: \"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd\") " pod="openstack/heat-db-sync-t4nvc"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.163597 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-config-data\") pod \"heat-db-sync-t4nvc\" (UID: \"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd\") " pod="openstack/heat-db-sync-t4nvc"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.163740 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lglhg\" (UniqueName: \"kubernetes.io/projected/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-kube-api-access-lglhg\") pod \"heat-db-sync-t4nvc\" (UID: \"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd\") " pod="openstack/heat-db-sync-t4nvc"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.266409 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-config-data\") pod \"heat-db-sync-t4nvc\" (UID: \"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd\") " pod="openstack/heat-db-sync-t4nvc"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.266602 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lglhg\" (UniqueName: \"kubernetes.io/projected/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-kube-api-access-lglhg\") pod \"heat-db-sync-t4nvc\" (UID: \"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd\") " pod="openstack/heat-db-sync-t4nvc"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.266842 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-combined-ca-bundle\") pod \"heat-db-sync-t4nvc\" (UID: \"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd\") " pod="openstack/heat-db-sync-t4nvc"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.272703 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-combined-ca-bundle\") pod \"heat-db-sync-t4nvc\" (UID: \"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd\") " pod="openstack/heat-db-sync-t4nvc"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.274761 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-config-data\") pod \"heat-db-sync-t4nvc\" (UID: \"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd\") " pod="openstack/heat-db-sync-t4nvc"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.300648 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lglhg\" (UniqueName: \"kubernetes.io/projected/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-kube-api-access-lglhg\") pod \"heat-db-sync-t4nvc\" (UID: \"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd\") " pod="openstack/heat-db-sync-t4nvc"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.346615 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-t4nvc"
Oct 08 08:21:36 crc kubenswrapper[4958]: I1008 08:21:36.813610 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-t4nvc"]
Oct 08 08:21:36 crc kubenswrapper[4958]: W1008 08:21:36.814983 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1771a4ba_784c_41a1_b1e3_38ffc2e28ddd.slice/crio-9a46b0909db40376b333c157e86d75c0aad4d4b55c4352260b7a5dd90308ce47 WatchSource:0}: Error finding container 9a46b0909db40376b333c157e86d75c0aad4d4b55c4352260b7a5dd90308ce47: Status 404 returned error can't find the container with id 9a46b0909db40376b333c157e86d75c0aad4d4b55c4352260b7a5dd90308ce47
Oct 08 08:21:37 crc kubenswrapper[4958]: I1008 08:21:37.561490 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-t4nvc" event={"ID":"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd","Type":"ContainerStarted","Data":"9a46b0909db40376b333c157e86d75c0aad4d4b55c4352260b7a5dd90308ce47"}
Oct 08 08:21:41 crc kubenswrapper[4958]: I1008 08:21:41.539330 4958 scope.go:117] "RemoveContainer" containerID="da649a7e3b66deed0c06cb0bc95f237a8b0db6bc2fa3dede7115b0af1b71f0c9"
Oct 08 08:21:41 crc kubenswrapper[4958]: I1008 08:21:41.785424 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5db9c89c9d-78bjw"
Oct 08 08:21:43 crc kubenswrapper[4958]: I1008 08:21:43.524710 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5db9c89c9d-78bjw"
Oct 08 08:21:43 crc kubenswrapper[4958]: I1008 08:21:43.530132 4958 scope.go:117] "RemoveContainer" containerID="e0696f88960c6bd59a8d7f9ec8cdf25860671af3d8f862063f591bc9c21a5fa6"
Oct 08 08:21:43 crc kubenswrapper[4958]: I1008 08:21:43.621004 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b6f87784b-7wwr6"]
Oct 08 08:21:43 crc kubenswrapper[4958]: I1008 08:21:43.621228 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b6f87784b-7wwr6" podUID="13a81014-1f64-436a-b3e0-1f8f79e3a7a0" containerName="horizon-log" containerID="cri-o://9de6baa2b5ebf8471a4de3d0b5a5dcaf4591cff08c2f4210d28ebb569a32bd36" gracePeriod=30
Oct 08 08:21:43 crc kubenswrapper[4958]: I1008 08:21:43.621314 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b6f87784b-7wwr6" podUID="13a81014-1f64-436a-b3e0-1f8f79e3a7a0" containerName="horizon" containerID="cri-o://91150fa63e0db130336ee5a85cd25fa4d00f6b2f64622437b68a811f247d0bbc" gracePeriod=30
Oct 08 08:21:44 crc kubenswrapper[4958]: I1008 08:21:44.044615 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2bw6q"]
Oct 08 08:21:44 crc kubenswrapper[4958]: I1008 08:21:44.052442 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2bw6q"]
Oct 08 08:21:44 crc kubenswrapper[4958]: I1008 08:21:44.683325 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-t4nvc" event={"ID":"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd","Type":"ContainerStarted","Data":"c26276d0fcda0eefc38418a6b22a1872f9163230b3e475a460b82ee0c1e90a9e"}
Oct 08 08:21:44 crc kubenswrapper[4958]: I1008 08:21:44.707772 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-t4nvc" podStartSLOduration=2.222982463 podStartE2EDuration="9.707752253s" podCreationTimestamp="2025-10-08 08:21:35 +0000 UTC" firstStartedPulling="2025-10-08 08:21:36.818573085 +0000 UTC m=+6439.948265686" lastFinishedPulling="2025-10-08 08:21:44.303342875 +0000 UTC m=+6447.433035476" observedRunningTime="2025-10-08 08:21:44.698735339 +0000 UTC m=+6447.828427950" watchObservedRunningTime="2025-10-08 08:21:44.707752253 +0000 UTC m=+6447.837444864"
Oct 08 08:21:45 crc kubenswrapper[4958]: I1008 08:21:45.590261 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f8f08a-b162-4b26-8cc2-3102a81310bf" path="/var/lib/kubelet/pods/67f8f08a-b162-4b26-8cc2-3102a81310bf/volumes"
Oct 08 08:21:46 crc kubenswrapper[4958]: E1008 08:21:46.147591 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1771a4ba_784c_41a1_b1e3_38ffc2e28ddd.slice/crio-c26276d0fcda0eefc38418a6b22a1872f9163230b3e475a460b82ee0c1e90a9e.scope\": RecentStats: unable to find data in memory cache]"
Oct 08 08:21:46 crc kubenswrapper[4958]: I1008 08:21:46.712694 4958 generic.go:334] "Generic (PLEG): container finished" podID="1771a4ba-784c-41a1-b1e3-38ffc2e28ddd" containerID="c26276d0fcda0eefc38418a6b22a1872f9163230b3e475a460b82ee0c1e90a9e" exitCode=0
Oct 08 08:21:46 crc kubenswrapper[4958]: I1008 08:21:46.712759 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-t4nvc" event={"ID":"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd","Type":"ContainerDied","Data":"c26276d0fcda0eefc38418a6b22a1872f9163230b3e475a460b82ee0c1e90a9e"}
Oct 08 08:21:47 crc kubenswrapper[4958]: I1008 08:21:47.729325 4958 generic.go:334] "Generic (PLEG): container finished" podID="13a81014-1f64-436a-b3e0-1f8f79e3a7a0" containerID="91150fa63e0db130336ee5a85cd25fa4d00f6b2f64622437b68a811f247d0bbc" exitCode=0
Oct 08 08:21:47 crc kubenswrapper[4958]: I1008 08:21:47.729406 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b6f87784b-7wwr6" event={"ID":"13a81014-1f64-436a-b3e0-1f8f79e3a7a0","Type":"ContainerDied","Data":"91150fa63e0db130336ee5a85cd25fa4d00f6b2f64622437b68a811f247d0bbc"}
Oct 08 08:21:48 crc kubenswrapper[4958]: I1008 08:21:48.183846 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-t4nvc"
Oct 08 08:21:48 crc kubenswrapper[4958]: I1008 08:21:48.302264 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-config-data\") pod \"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd\" (UID: \"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd\") "
Oct 08 08:21:48 crc kubenswrapper[4958]: I1008 08:21:48.302620 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lglhg\" (UniqueName: \"kubernetes.io/projected/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-kube-api-access-lglhg\") pod \"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd\" (UID: \"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd\") "
Oct 08 08:21:48 crc kubenswrapper[4958]: I1008 08:21:48.302835 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-combined-ca-bundle\") pod \"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd\" (UID: \"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd\") "
Oct 08 08:21:48 crc kubenswrapper[4958]: I1008 08:21:48.325357 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-kube-api-access-lglhg" (OuterVolumeSpecName: "kube-api-access-lglhg") pod "1771a4ba-784c-41a1-b1e3-38ffc2e28ddd" (UID: "1771a4ba-784c-41a1-b1e3-38ffc2e28ddd"). InnerVolumeSpecName "kube-api-access-lglhg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 08:21:48 crc kubenswrapper[4958]: I1008 08:21:48.406734 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lglhg\" (UniqueName: \"kubernetes.io/projected/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-kube-api-access-lglhg\") on node \"crc\" DevicePath \"\""
Oct 08 08:21:48 crc kubenswrapper[4958]: I1008 08:21:48.484308 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1771a4ba-784c-41a1-b1e3-38ffc2e28ddd" (UID: "1771a4ba-784c-41a1-b1e3-38ffc2e28ddd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 08:21:48 crc kubenswrapper[4958]: I1008 08:21:48.508143 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-config-data" (OuterVolumeSpecName: "config-data") pod "1771a4ba-784c-41a1-b1e3-38ffc2e28ddd" (UID: "1771a4ba-784c-41a1-b1e3-38ffc2e28ddd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 08:21:48 crc kubenswrapper[4958]: I1008 08:21:48.509594 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 08:21:48 crc kubenswrapper[4958]: I1008 08:21:48.509627 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 08:21:48 crc kubenswrapper[4958]: I1008 08:21:48.747564 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-t4nvc" event={"ID":"1771a4ba-784c-41a1-b1e3-38ffc2e28ddd","Type":"ContainerDied","Data":"9a46b0909db40376b333c157e86d75c0aad4d4b55c4352260b7a5dd90308ce47"}
Oct 08 08:21:48 crc kubenswrapper[4958]: I1008 08:21:48.747855 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a46b0909db40376b333c157e86d75c0aad4d4b55c4352260b7a5dd90308ce47"
Oct 08 08:21:48 crc kubenswrapper[4958]: I1008 08:21:48.747938 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-t4nvc"
Oct 08 08:21:49 crc kubenswrapper[4958]: I1008 08:21:49.913172 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-57f66797bb-n4q6l"]
Oct 08 08:21:49 crc kubenswrapper[4958]: E1008 08:21:49.913821 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1771a4ba-784c-41a1-b1e3-38ffc2e28ddd" containerName="heat-db-sync"
Oct 08 08:21:49 crc kubenswrapper[4958]: I1008 08:21:49.913835 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1771a4ba-784c-41a1-b1e3-38ffc2e28ddd" containerName="heat-db-sync"
Oct 08 08:21:49 crc kubenswrapper[4958]: I1008 08:21:49.914071 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1771a4ba-784c-41a1-b1e3-38ffc2e28ddd" containerName="heat-db-sync"
Oct 08 08:21:49 crc kubenswrapper[4958]: I1008 08:21:49.914671 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-57f66797bb-n4q6l"
Oct 08 08:21:49 crc kubenswrapper[4958]: I1008 08:21:49.922267 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data"
Oct 08 08:21:49 crc kubenswrapper[4958]: I1008 08:21:49.922543 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Oct 08 08:21:49 crc kubenswrapper[4958]: I1008 08:21:49.922715 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nd76j"
Oct 08 08:21:49 crc kubenswrapper[4958]: I1008 08:21:49.937196 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-57f66797bb-n4q6l"]
Oct 08 08:21:49 crc kubenswrapper[4958]: I1008 08:21:49.996906 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-77b75c75ff-7p4b7"]
Oct 08 08:21:49 crc kubenswrapper[4958]: I1008 08:21:49.999741 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-77b75c75ff-7p4b7"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.007937 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.055623 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-77b75c75ff-7p4b7"]
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.076932 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmn5f\" (UniqueName: \"kubernetes.io/projected/1ed7c579-f8d9-4031-86e9-3335a67165cf-kube-api-access-zmn5f\") pod \"heat-engine-57f66797bb-n4q6l\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " pod="openstack/heat-engine-57f66797bb-n4q6l"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.077062 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-combined-ca-bundle\") pod \"heat-engine-57f66797bb-n4q6l\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " pod="openstack/heat-engine-57f66797bb-n4q6l"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.077092 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-config-data\") pod \"heat-engine-57f66797bb-n4q6l\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " pod="openstack/heat-engine-57f66797bb-n4q6l"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.077110 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-config-data-custom\") pod \"heat-engine-57f66797bb-n4q6l\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " pod="openstack/heat-engine-57f66797bb-n4q6l"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.091477 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5cbb648c89-zrpp4"]
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.092782 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cbb648c89-zrpp4"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.094790 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.103065 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5cbb648c89-zrpp4"]
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.178416 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-combined-ca-bundle\") pod \"heat-cfnapi-77b75c75ff-7p4b7\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " pod="openstack/heat-cfnapi-77b75c75ff-7p4b7"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.178513 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-combined-ca-bundle\") pod \"heat-engine-57f66797bb-n4q6l\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " pod="openstack/heat-engine-57f66797bb-n4q6l"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.178550 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-config-data\") pod \"heat-engine-57f66797bb-n4q6l\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " pod="openstack/heat-engine-57f66797bb-n4q6l"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.178573 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-config-data-custom\") pod \"heat-cfnapi-77b75c75ff-7p4b7\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " pod="openstack/heat-cfnapi-77b75c75ff-7p4b7"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.178597 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-config-data-custom\") pod \"heat-engine-57f66797bb-n4q6l\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " pod="openstack/heat-engine-57f66797bb-n4q6l"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.178624 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-config-data\") pod \"heat-cfnapi-77b75c75ff-7p4b7\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " pod="openstack/heat-cfnapi-77b75c75ff-7p4b7"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.178646 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-288kq\" (UniqueName: \"kubernetes.io/projected/818841a4-5951-4c52-be2f-6f82b3bf170c-kube-api-access-288kq\") pod \"heat-cfnapi-77b75c75ff-7p4b7\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " pod="openstack/heat-cfnapi-77b75c75ff-7p4b7"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.179142 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmn5f\" (UniqueName: \"kubernetes.io/projected/1ed7c579-f8d9-4031-86e9-3335a67165cf-kube-api-access-zmn5f\") pod \"heat-engine-57f66797bb-n4q6l\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " pod="openstack/heat-engine-57f66797bb-n4q6l"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.184329 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-config-data\") pod \"heat-engine-57f66797bb-n4q6l\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " pod="openstack/heat-engine-57f66797bb-n4q6l"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.185390 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-combined-ca-bundle\") pod \"heat-engine-57f66797bb-n4q6l\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " pod="openstack/heat-engine-57f66797bb-n4q6l"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.203134 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-config-data-custom\") pod \"heat-engine-57f66797bb-n4q6l\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " pod="openstack/heat-engine-57f66797bb-n4q6l"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.209375 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmn5f\" (UniqueName: \"kubernetes.io/projected/1ed7c579-f8d9-4031-86e9-3335a67165cf-kube-api-access-zmn5f\") pod \"heat-engine-57f66797bb-n4q6l\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " pod="openstack/heat-engine-57f66797bb-n4q6l"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.244895 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-57f66797bb-n4q6l"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.248756 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b6f87784b-7wwr6" podUID="13a81014-1f64-436a-b3e0-1f8f79e3a7a0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.125:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.125:8443: connect: connection refused"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.281251 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-config-data-custom\") pod \"heat-api-5cbb648c89-zrpp4\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " pod="openstack/heat-api-5cbb648c89-zrpp4"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.281326 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvqln\" (UniqueName: \"kubernetes.io/projected/650ddf8a-8191-45ea-98a3-8f3d4102db84-kube-api-access-mvqln\") pod \"heat-api-5cbb648c89-zrpp4\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " pod="openstack/heat-api-5cbb648c89-zrpp4"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.281405 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-config-data\") pod \"heat-api-5cbb648c89-zrpp4\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " pod="openstack/heat-api-5cbb648c89-zrpp4"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.281489 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-combined-ca-bundle\") pod \"heat-cfnapi-77b75c75ff-7p4b7\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " pod="openstack/heat-cfnapi-77b75c75ff-7p4b7"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.281536 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-combined-ca-bundle\") pod \"heat-api-5cbb648c89-zrpp4\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " pod="openstack/heat-api-5cbb648c89-zrpp4"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.281583 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-config-data-custom\") pod \"heat-cfnapi-77b75c75ff-7p4b7\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " pod="openstack/heat-cfnapi-77b75c75ff-7p4b7"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.281613 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-config-data\") pod \"heat-cfnapi-77b75c75ff-7p4b7\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " pod="openstack/heat-cfnapi-77b75c75ff-7p4b7"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.281635 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-288kq\" (UniqueName: \"kubernetes.io/projected/818841a4-5951-4c52-be2f-6f82b3bf170c-kube-api-access-288kq\") pod \"heat-cfnapi-77b75c75ff-7p4b7\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " pod="openstack/heat-cfnapi-77b75c75ff-7p4b7"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.289115 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-config-data-custom\") pod \"heat-cfnapi-77b75c75ff-7p4b7\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " pod="openstack/heat-cfnapi-77b75c75ff-7p4b7"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.291560 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-combined-ca-bundle\") pod \"heat-cfnapi-77b75c75ff-7p4b7\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " pod="openstack/heat-cfnapi-77b75c75ff-7p4b7"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.293613 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-config-data\") pod \"heat-cfnapi-77b75c75ff-7p4b7\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " pod="openstack/heat-cfnapi-77b75c75ff-7p4b7"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.307316 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-288kq\" (UniqueName: \"kubernetes.io/projected/818841a4-5951-4c52-be2f-6f82b3bf170c-kube-api-access-288kq\") pod \"heat-cfnapi-77b75c75ff-7p4b7\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " pod="openstack/heat-cfnapi-77b75c75ff-7p4b7"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.336132 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-77b75c75ff-7p4b7"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.383576 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-config-data-custom\") pod \"heat-api-5cbb648c89-zrpp4\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " pod="openstack/heat-api-5cbb648c89-zrpp4"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.383638 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvqln\" (UniqueName: \"kubernetes.io/projected/650ddf8a-8191-45ea-98a3-8f3d4102db84-kube-api-access-mvqln\") pod \"heat-api-5cbb648c89-zrpp4\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " pod="openstack/heat-api-5cbb648c89-zrpp4"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.383699 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-config-data\") pod \"heat-api-5cbb648c89-zrpp4\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " pod="openstack/heat-api-5cbb648c89-zrpp4"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.383772 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-combined-ca-bundle\") pod \"heat-api-5cbb648c89-zrpp4\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " pod="openstack/heat-api-5cbb648c89-zrpp4"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.392455 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-config-data\") pod \"heat-api-5cbb648c89-zrpp4\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " pod="openstack/heat-api-5cbb648c89-zrpp4"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.393101 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-config-data-custom\") pod \"heat-api-5cbb648c89-zrpp4\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " pod="openstack/heat-api-5cbb648c89-zrpp4"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.398507 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-combined-ca-bundle\") pod \"heat-api-5cbb648c89-zrpp4\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " pod="openstack/heat-api-5cbb648c89-zrpp4"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.402918 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvqln\" (UniqueName: \"kubernetes.io/projected/650ddf8a-8191-45ea-98a3-8f3d4102db84-kube-api-access-mvqln\") pod \"heat-api-5cbb648c89-zrpp4\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " pod="openstack/heat-api-5cbb648c89-zrpp4"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.416467 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cbb648c89-zrpp4"
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.888974 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-77b75c75ff-7p4b7"]
Oct 08 08:21:50 crc kubenswrapper[4958]: I1008 08:21:50.890577 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 08 08:21:51 crc kubenswrapper[4958]: I1008 08:21:51.005148 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-57f66797bb-n4q6l"]
Oct 08 08:21:51 crc kubenswrapper[4958]: I1008 08:21:51.119618 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5cbb648c89-zrpp4"]
Oct 08 08:21:51 crc kubenswrapper[4958]: W1008 08:21:51.126976 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod650ddf8a_8191_45ea_98a3_8f3d4102db84.slice/crio-a0951db4a480800e3f9481e6d13c0d418cd8b4d1fcddb507270675df1c231f1b WatchSource:0}: Error finding container a0951db4a480800e3f9481e6d13c0d418cd8b4d1fcddb507270675df1c231f1b: Status 404 returned error can't find the container with id a0951db4a480800e3f9481e6d13c0d418cd8b4d1fcddb507270675df1c231f1b
Oct 08 08:21:51 crc kubenswrapper[4958]: I1008 08:21:51.798569 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-57f66797bb-n4q6l" event={"ID":"1ed7c579-f8d9-4031-86e9-3335a67165cf","Type":"ContainerStarted","Data":"2c7a7da73538fba3ddffa07e3d7af934bf9b95e372939faebbd3660ee8f9eac3"}
Oct 08 08:21:51 crc kubenswrapper[4958]: I1008 08:21:51.798963 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-57f66797bb-n4q6l"
Oct 08 08:21:51 crc kubenswrapper[4958]: I1008 08:21:51.798976 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-57f66797bb-n4q6l" event={"ID":"1ed7c579-f8d9-4031-86e9-3335a67165cf","Type":"ContainerStarted","Data":"eccb224e6db92c372f1d7b5c308dc191f33de7c256561de2f773892b88076955"}
Oct 08 08:21:51 crc kubenswrapper[4958]: I1008 08:21:51.800585 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77b75c75ff-7p4b7" event={"ID":"818841a4-5951-4c52-be2f-6f82b3bf170c","Type":"ContainerStarted","Data":"d578325fc0e0c436a772ef4caa8578bcabbaa4b591b37cd565e65d8e250ecbef"}
Oct 08 08:21:51 crc kubenswrapper[4958]: I1008 08:21:51.801844 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cbb648c89-zrpp4" event={"ID":"650ddf8a-8191-45ea-98a3-8f3d4102db84","Type":"ContainerStarted","Data":"a0951db4a480800e3f9481e6d13c0d418cd8b4d1fcddb507270675df1c231f1b"}
Oct 08 08:21:51 crc kubenswrapper[4958]: I1008 08:21:51.825335 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-57f66797bb-n4q6l" podStartSLOduration=2.825318465 podStartE2EDuration="2.825318465s" podCreationTimestamp="2025-10-08 08:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:21:51.819100087 +0000 UTC m=+6454.948792688" watchObservedRunningTime="2025-10-08 08:21:51.825318465 +0000 UTC m=+6454.955011066"
Oct 08 08:21:53 crc kubenswrapper[4958]: I1008 08:21:53.822393 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77b75c75ff-7p4b7" event={"ID":"818841a4-5951-4c52-be2f-6f82b3bf170c","Type":"ContainerStarted","Data":"55642de54c706aa7adb36d4fb1519772a8315e6def08b5310c988091904c3b3b"}
Oct 08 08:21:53 crc kubenswrapper[4958]: I1008 08:21:53.823905 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-77b75c75ff-7p4b7"
Oct 08 08:21:53 crc kubenswrapper[4958]: I1008 08:21:53.824011 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cbb648c89-zrpp4" event={"ID":"650ddf8a-8191-45ea-98a3-8f3d4102db84","Type":"ContainerStarted","Data":"e89a1be22cd48726d867bd36219c26b696d2e0cc858a9437832c0fbb4fd1ae54"}
Oct 08 08:21:53 crc kubenswrapper[4958]: I1008 08:21:53.824243 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5cbb648c89-zrpp4"
Oct 08 08:21:53 crc kubenswrapper[4958]: I1008 08:21:53.847614 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-77b75c75ff-7p4b7" podStartSLOduration=2.732544572 podStartE2EDuration="4.847598434s" podCreationTimestamp="2025-10-08 08:21:49 +0000 UTC" firstStartedPulling="2025-10-08 08:21:50.89035812 +0000 UTC m=+6454.020050721" lastFinishedPulling="2025-10-08 08:21:53.005411992 +0000 UTC m=+6456.135104583" observedRunningTime="2025-10-08 08:21:53.840349457 +0000 UTC m=+6456.970042058" watchObservedRunningTime="2025-10-08 08:21:53.847598434 +0000 UTC m=+6456.977291035"
Oct 08 08:21:53 crc kubenswrapper[4958]: I1008 08:21:53.864139 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5cbb648c89-zrpp4" podStartSLOduration=1.983678797 podStartE2EDuration="3.864122212s" podCreationTimestamp="2025-10-08 08:21:50 +0000 UTC" firstStartedPulling="2025-10-08 08:21:51.129798848 +0000 UTC m=+6454.259491449" lastFinishedPulling="2025-10-08 08:21:53.010242263 +0000 UTC m=+6456.139934864" observedRunningTime="2025-10-08 08:21:53.863031522 +0000 UTC m=+6456.992724123" watchObservedRunningTime="2025-10-08 08:21:53.864122212 +0000 UTC m=+6456.993814803"
Oct 08 08:21:54 crc kubenswrapper[4958]: I1008 08:21:54.031222 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-acfa-account-create-7m5qn"]
Oct 08 08:21:54 crc kubenswrapper[4958]: I1008 08:21:54.045217 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-acfa-account-create-7m5qn"]
Oct 08 08:21:55 crc kubenswrapper[4958]: I1008
08:21:55.589513 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17c14b79-c6c8-4d47-a7ac-cb45766955c0" path="/var/lib/kubelet/pods/17c14b79-c6c8-4d47-a7ac-cb45766955c0/volumes" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.682856 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6d9898f7cb-8bmkv"] Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.684537 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.697023 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6d9898f7cb-8bmkv"] Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.714587 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5b649fd949-xrjcm"] Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.717207 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.737262 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-f95fb56bf-t9ljh"] Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.741614 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.772003 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5b649fd949-xrjcm"] Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.791109 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-f95fb56bf-t9ljh"] Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.844097 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57pz5\" (UniqueName: \"kubernetes.io/projected/305b9d6b-591e-41e2-82a1-4fa6053b4f45-kube-api-access-57pz5\") pod \"heat-engine-6d9898f7cb-8bmkv\" (UID: \"305b9d6b-591e-41e2-82a1-4fa6053b4f45\") " pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.844143 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-config-data\") pod \"heat-api-f95fb56bf-t9ljh\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.844189 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srqn4\" (UniqueName: \"kubernetes.io/projected/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-kube-api-access-srqn4\") pod \"heat-api-f95fb56bf-t9ljh\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.844395 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-combined-ca-bundle\") pod \"heat-api-f95fb56bf-t9ljh\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " 
pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.844535 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305b9d6b-591e-41e2-82a1-4fa6053b4f45-config-data\") pod \"heat-engine-6d9898f7cb-8bmkv\" (UID: \"305b9d6b-591e-41e2-82a1-4fa6053b4f45\") " pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.844600 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5l66\" (UniqueName: \"kubernetes.io/projected/4a15d996-db61-4e83-962f-d3e706e172db-kube-api-access-k5l66\") pod \"heat-cfnapi-5b649fd949-xrjcm\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") " pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.844726 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-config-data-custom\") pod \"heat-api-f95fb56bf-t9ljh\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.844795 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-combined-ca-bundle\") pod \"heat-cfnapi-5b649fd949-xrjcm\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") " pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.844857 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/305b9d6b-591e-41e2-82a1-4fa6053b4f45-config-data-custom\") pod \"heat-engine-6d9898f7cb-8bmkv\" 
(UID: \"305b9d6b-591e-41e2-82a1-4fa6053b4f45\") " pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.845004 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-config-data\") pod \"heat-cfnapi-5b649fd949-xrjcm\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") " pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.845139 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-config-data-custom\") pod \"heat-cfnapi-5b649fd949-xrjcm\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") " pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.845193 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305b9d6b-591e-41e2-82a1-4fa6053b4f45-combined-ca-bundle\") pod \"heat-engine-6d9898f7cb-8bmkv\" (UID: \"305b9d6b-591e-41e2-82a1-4fa6053b4f45\") " pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.946589 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-config-data-custom\") pod \"heat-cfnapi-5b649fd949-xrjcm\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") " pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.946648 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305b9d6b-591e-41e2-82a1-4fa6053b4f45-combined-ca-bundle\") pod 
\"heat-engine-6d9898f7cb-8bmkv\" (UID: \"305b9d6b-591e-41e2-82a1-4fa6053b4f45\") " pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.946679 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-config-data\") pod \"heat-api-f95fb56bf-t9ljh\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.946703 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57pz5\" (UniqueName: \"kubernetes.io/projected/305b9d6b-591e-41e2-82a1-4fa6053b4f45-kube-api-access-57pz5\") pod \"heat-engine-6d9898f7cb-8bmkv\" (UID: \"305b9d6b-591e-41e2-82a1-4fa6053b4f45\") " pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.946756 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srqn4\" (UniqueName: \"kubernetes.io/projected/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-kube-api-access-srqn4\") pod \"heat-api-f95fb56bf-t9ljh\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.946830 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-combined-ca-bundle\") pod \"heat-api-f95fb56bf-t9ljh\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.946881 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305b9d6b-591e-41e2-82a1-4fa6053b4f45-config-data\") pod \"heat-engine-6d9898f7cb-8bmkv\" (UID: 
\"305b9d6b-591e-41e2-82a1-4fa6053b4f45\") " pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.946917 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5l66\" (UniqueName: \"kubernetes.io/projected/4a15d996-db61-4e83-962f-d3e706e172db-kube-api-access-k5l66\") pod \"heat-cfnapi-5b649fd949-xrjcm\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") " pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.946995 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-config-data-custom\") pod \"heat-api-f95fb56bf-t9ljh\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.947037 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-combined-ca-bundle\") pod \"heat-cfnapi-5b649fd949-xrjcm\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") " pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.947071 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/305b9d6b-591e-41e2-82a1-4fa6053b4f45-config-data-custom\") pod \"heat-engine-6d9898f7cb-8bmkv\" (UID: \"305b9d6b-591e-41e2-82a1-4fa6053b4f45\") " pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.947134 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-config-data\") pod \"heat-cfnapi-5b649fd949-xrjcm\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") 
" pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.955518 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305b9d6b-591e-41e2-82a1-4fa6053b4f45-combined-ca-bundle\") pod \"heat-engine-6d9898f7cb-8bmkv\" (UID: \"305b9d6b-591e-41e2-82a1-4fa6053b4f45\") " pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.965106 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/305b9d6b-591e-41e2-82a1-4fa6053b4f45-config-data-custom\") pod \"heat-engine-6d9898f7cb-8bmkv\" (UID: \"305b9d6b-591e-41e2-82a1-4fa6053b4f45\") " pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.965505 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305b9d6b-591e-41e2-82a1-4fa6053b4f45-config-data\") pod \"heat-engine-6d9898f7cb-8bmkv\" (UID: \"305b9d6b-591e-41e2-82a1-4fa6053b4f45\") " pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.965545 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-config-data-custom\") pod \"heat-api-f95fb56bf-t9ljh\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.965735 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-config-data-custom\") pod \"heat-cfnapi-5b649fd949-xrjcm\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") " pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:21:57 crc kubenswrapper[4958]: 
I1008 08:21:57.966099 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-combined-ca-bundle\") pod \"heat-cfnapi-5b649fd949-xrjcm\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") " pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.967393 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-combined-ca-bundle\") pod \"heat-api-f95fb56bf-t9ljh\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.967855 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-config-data\") pod \"heat-cfnapi-5b649fd949-xrjcm\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") " pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.969755 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5l66\" (UniqueName: \"kubernetes.io/projected/4a15d996-db61-4e83-962f-d3e706e172db-kube-api-access-k5l66\") pod \"heat-cfnapi-5b649fd949-xrjcm\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") " pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.970739 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57pz5\" (UniqueName: \"kubernetes.io/projected/305b9d6b-591e-41e2-82a1-4fa6053b4f45-kube-api-access-57pz5\") pod \"heat-engine-6d9898f7cb-8bmkv\" (UID: \"305b9d6b-591e-41e2-82a1-4fa6053b4f45\") " pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.971996 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-config-data\") pod \"heat-api-f95fb56bf-t9ljh\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:21:57 crc kubenswrapper[4958]: I1008 08:21:57.977308 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srqn4\" (UniqueName: \"kubernetes.io/projected/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-kube-api-access-srqn4\") pod \"heat-api-f95fb56bf-t9ljh\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.017435 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.035050 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.058837 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.531621 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6d9898f7cb-8bmkv"] Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.629238 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5b649fd949-xrjcm"] Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.703283 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-f95fb56bf-t9ljh"] Oct 08 08:21:58 crc kubenswrapper[4958]: W1008 08:21:58.710347 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd6edbfc_ab77_4f9c_b03a_c3100d4a96d8.slice/crio-e4ca73ff5aee8f73880a9ec75cbf1c5ab8335ddf0cd3efee6e9a8090e6d367b2 WatchSource:0}: Error finding container e4ca73ff5aee8f73880a9ec75cbf1c5ab8335ddf0cd3efee6e9a8090e6d367b2: Status 404 returned error can't find the container with id e4ca73ff5aee8f73880a9ec75cbf1c5ab8335ddf0cd3efee6e9a8090e6d367b2 Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.845402 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5cbb648c89-zrpp4"] Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.846012 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5cbb648c89-zrpp4" podUID="650ddf8a-8191-45ea-98a3-8f3d4102db84" containerName="heat-api" containerID="cri-o://e89a1be22cd48726d867bd36219c26b696d2e0cc858a9437832c0fbb4fd1ae54" gracePeriod=60 Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.886246 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-77b75c75ff-7p4b7"] Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.886443 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-77b75c75ff-7p4b7" 
podUID="818841a4-5951-4c52-be2f-6f82b3bf170c" containerName="heat-cfnapi" containerID="cri-o://55642de54c706aa7adb36d4fb1519772a8315e6def08b5310c988091904c3b3b" gracePeriod=60 Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.903137 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-77b75c75ff-7p4b7" podUID="818841a4-5951-4c52-be2f-6f82b3bf170c" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.134:8000/healthcheck\": EOF" Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.903461 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b649fd949-xrjcm" event={"ID":"4a15d996-db61-4e83-962f-d3e706e172db","Type":"ContainerStarted","Data":"fb39e7012b26f6ee350b610bf02860fe4a3f55d7d893b31ce3062b6f895ddc06"} Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.918997 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-67d857f9d-4g9rh"] Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.920301 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.924489 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.925096 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.925115 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-67d857f9d-4g9rh"] Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.939919 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f95fb56bf-t9ljh" event={"ID":"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8","Type":"ContainerStarted","Data":"e4ca73ff5aee8f73880a9ec75cbf1c5ab8335ddf0cd3efee6e9a8090e6d367b2"} Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.942091 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-85bf9f6694-scp7q"] Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.943702 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.947828 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.952095 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.952388 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d9898f7cb-8bmkv" event={"ID":"305b9d6b-591e-41e2-82a1-4fa6053b4f45","Type":"ContainerStarted","Data":"703c20f327d1aaa07abc2088d9760f7e3f24a982b6bdeafaeab3a8a0e2dbfc57"} Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.952428 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d9898f7cb-8bmkv" event={"ID":"305b9d6b-591e-41e2-82a1-4fa6053b4f45","Type":"ContainerStarted","Data":"b58c94fbcbcaf26f545e9be3b9fd9c89acbf3fff2c584c2b7870ade914998ace"} Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.953081 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:21:58 crc kubenswrapper[4958]: I1008 08:21:58.961513 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-85bf9f6694-scp7q"] Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.015646 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6d9898f7cb-8bmkv" podStartSLOduration=2.015630726 podStartE2EDuration="2.015630726s" podCreationTimestamp="2025-10-08 08:21:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:21:59.012775008 +0000 UTC m=+6462.142467609" watchObservedRunningTime="2025-10-08 08:21:59.015630726 +0000 UTC m=+6462.145323317" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 
08:21:59.085208 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-internal-tls-certs\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.085252 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g4g4\" (UniqueName: \"kubernetes.io/projected/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-kube-api-access-5g4g4\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.085283 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp4fw\" (UniqueName: \"kubernetes.io/projected/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-kube-api-access-bp4fw\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.085322 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-public-tls-certs\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.085484 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-config-data-custom\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " 
pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.086302 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-config-data\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.086799 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-config-data\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.086877 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-internal-tls-certs\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.087004 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-combined-ca-bundle\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.087111 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-combined-ca-bundle\") pod \"heat-api-85bf9f6694-scp7q\" (UID: 
\"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.087238 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-public-tls-certs\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.087291 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-config-data-custom\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.189181 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-config-data\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.189278 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-config-data\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.189313 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-internal-tls-certs\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " 
pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.189369 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-combined-ca-bundle\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.189430 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-combined-ca-bundle\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.189483 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-public-tls-certs\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.189693 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-config-data-custom\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.190143 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-internal-tls-certs\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 
08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.190179 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g4g4\" (UniqueName: \"kubernetes.io/projected/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-kube-api-access-5g4g4\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.190212 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp4fw\" (UniqueName: \"kubernetes.io/projected/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-kube-api-access-bp4fw\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.190831 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-public-tls-certs\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.190922 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-config-data-custom\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.200639 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-config-data-custom\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: 
I1008 08:21:59.201575 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-combined-ca-bundle\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.202379 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-config-data-custom\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.207837 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-internal-tls-certs\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.216466 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-internal-tls-certs\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.220132 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g4g4\" (UniqueName: \"kubernetes.io/projected/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-kube-api-access-5g4g4\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.222348 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bp4fw\" (UniqueName: \"kubernetes.io/projected/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-kube-api-access-bp4fw\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.222554 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-config-data\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.222800 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-config-data\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.231306 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-combined-ca-bundle\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.231589 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5230e1f-f04d-4b47-9fde-33aa1f60ac05-public-tls-certs\") pod \"heat-cfnapi-67d857f9d-4g9rh\" (UID: \"f5230e1f-f04d-4b47-9fde-33aa1f60ac05\") " pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.241872 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/09f35596-ca09-49f1-8b9d-7244e7cc1ebc-public-tls-certs\") pod \"heat-api-85bf9f6694-scp7q\" (UID: \"09f35596-ca09-49f1-8b9d-7244e7cc1ebc\") " pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.330394 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:21:59 crc kubenswrapper[4958]: I1008 08:21:59.344184 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:21:59 crc kubenswrapper[4958]: W1008 08:21:59.966056 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5230e1f_f04d_4b47_9fde_33aa1f60ac05.slice/crio-b3072f525fdc404308240a66ad2bd60382b2cc4215949a171ddbf233964e7327 WatchSource:0}: Error finding container b3072f525fdc404308240a66ad2bd60382b2cc4215949a171ddbf233964e7327: Status 404 returned error can't find the container with id b3072f525fdc404308240a66ad2bd60382b2cc4215949a171ddbf233964e7327 Oct 08 08:22:00 crc kubenswrapper[4958]: I1008 08:22:00.000136 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-67d857f9d-4g9rh"] Oct 08 08:22:00 crc kubenswrapper[4958]: W1008 08:22:00.008166 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09f35596_ca09_49f1_8b9d_7244e7cc1ebc.slice/crio-6f1ccefad9e6596ddd36ec9c942b5aa78ff437cca615d1f5b0721b5535b9a6f2 WatchSource:0}: Error finding container 6f1ccefad9e6596ddd36ec9c942b5aa78ff437cca615d1f5b0721b5535b9a6f2: Status 404 returned error can't find the container with id 6f1ccefad9e6596ddd36ec9c942b5aa78ff437cca615d1f5b0721b5535b9a6f2 Oct 08 08:22:00 crc kubenswrapper[4958]: I1008 08:22:00.008789 4958 generic.go:334] "Generic (PLEG): container finished" podID="dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" 
containerID="58e2193fa0748f4e3adcbe6a739734dbf7f20b291098458ea5174687be102590" exitCode=1 Oct 08 08:22:00 crc kubenswrapper[4958]: I1008 08:22:00.008859 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f95fb56bf-t9ljh" event={"ID":"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8","Type":"ContainerDied","Data":"58e2193fa0748f4e3adcbe6a739734dbf7f20b291098458ea5174687be102590"} Oct 08 08:22:00 crc kubenswrapper[4958]: I1008 08:22:00.009668 4958 scope.go:117] "RemoveContainer" containerID="58e2193fa0748f4e3adcbe6a739734dbf7f20b291098458ea5174687be102590" Oct 08 08:22:00 crc kubenswrapper[4958]: I1008 08:22:00.060077 4958 generic.go:334] "Generic (PLEG): container finished" podID="4a15d996-db61-4e83-962f-d3e706e172db" containerID="eeb616b945c0819f3d19ddfa0a6459a9dc742dec2cb79eb66370d301d4185e07" exitCode=1 Oct 08 08:22:00 crc kubenswrapper[4958]: I1008 08:22:00.061181 4958 scope.go:117] "RemoveContainer" containerID="eeb616b945c0819f3d19ddfa0a6459a9dc742dec2cb79eb66370d301d4185e07" Oct 08 08:22:00 crc kubenswrapper[4958]: I1008 08:22:00.062032 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b649fd949-xrjcm" event={"ID":"4a15d996-db61-4e83-962f-d3e706e172db","Type":"ContainerDied","Data":"eeb616b945c0819f3d19ddfa0a6459a9dc742dec2cb79eb66370d301d4185e07"} Oct 08 08:22:00 crc kubenswrapper[4958]: I1008 08:22:00.082223 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-85bf9f6694-scp7q"] Oct 08 08:22:00 crc kubenswrapper[4958]: I1008 08:22:00.259164 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b6f87784b-7wwr6" podUID="13a81014-1f64-436a-b3e0-1f8f79e3a7a0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.125:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.125:8443: connect: connection refused" Oct 08 08:22:00 crc kubenswrapper[4958]: I1008 08:22:00.702118 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/heat-api-5cbb648c89-zrpp4" Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.075097 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67d857f9d-4g9rh" event={"ID":"f5230e1f-f04d-4b47-9fde-33aa1f60ac05","Type":"ContainerStarted","Data":"ecdd11315bd1745c501c9efa47d29cdb336a0600236c4ac297f95fd4f7a3315f"} Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.075143 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-67d857f9d-4g9rh" event={"ID":"f5230e1f-f04d-4b47-9fde-33aa1f60ac05","Type":"ContainerStarted","Data":"b3072f525fdc404308240a66ad2bd60382b2cc4215949a171ddbf233964e7327"} Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.075184 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.078811 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-85bf9f6694-scp7q" event={"ID":"09f35596-ca09-49f1-8b9d-7244e7cc1ebc","Type":"ContainerStarted","Data":"3edc8d8aa787a9370b912bfa4b0c1f3eb0ff7b5ad2454ba723d87eeb4f0a0900"} Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.078871 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-85bf9f6694-scp7q" event={"ID":"09f35596-ca09-49f1-8b9d-7244e7cc1ebc","Type":"ContainerStarted","Data":"6f1ccefad9e6596ddd36ec9c942b5aa78ff437cca615d1f5b0721b5535b9a6f2"} Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.078994 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.083295 4958 generic.go:334] "Generic (PLEG): container finished" podID="4a15d996-db61-4e83-962f-d3e706e172db" containerID="78f467a30cf792e928e5d33e97fdaee624cdbd1faead519dc9d4887db37a6cf7" exitCode=1 Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.083397 4958 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b649fd949-xrjcm" event={"ID":"4a15d996-db61-4e83-962f-d3e706e172db","Type":"ContainerDied","Data":"78f467a30cf792e928e5d33e97fdaee624cdbd1faead519dc9d4887db37a6cf7"} Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.083434 4958 scope.go:117] "RemoveContainer" containerID="eeb616b945c0819f3d19ddfa0a6459a9dc742dec2cb79eb66370d301d4185e07" Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.083863 4958 scope.go:117] "RemoveContainer" containerID="78f467a30cf792e928e5d33e97fdaee624cdbd1faead519dc9d4887db37a6cf7" Oct 08 08:22:01 crc kubenswrapper[4958]: E1008 08:22:01.084139 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5b649fd949-xrjcm_openstack(4a15d996-db61-4e83-962f-d3e706e172db)\"" pod="openstack/heat-cfnapi-5b649fd949-xrjcm" podUID="4a15d996-db61-4e83-962f-d3e706e172db" Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.086451 4958 generic.go:334] "Generic (PLEG): container finished" podID="dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" containerID="a7d32d52829fade81506a7c6c73ee9d8e22af8b01de4848be487677a45fd8915" exitCode=1 Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.086482 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f95fb56bf-t9ljh" event={"ID":"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8","Type":"ContainerDied","Data":"a7d32d52829fade81506a7c6c73ee9d8e22af8b01de4848be487677a45fd8915"} Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.087112 4958 scope.go:117] "RemoveContainer" containerID="a7d32d52829fade81506a7c6c73ee9d8e22af8b01de4848be487677a45fd8915" Oct 08 08:22:01 crc kubenswrapper[4958]: E1008 08:22:01.087376 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api 
pod=heat-api-f95fb56bf-t9ljh_openstack(dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8)\"" pod="openstack/heat-api-f95fb56bf-t9ljh" podUID="dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.107250 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-67d857f9d-4g9rh" podStartSLOduration=3.107228843 podStartE2EDuration="3.107228843s" podCreationTimestamp="2025-10-08 08:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:22:01.104020306 +0000 UTC m=+6464.233712917" watchObservedRunningTime="2025-10-08 08:22:01.107228843 +0000 UTC m=+6464.236921444" Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.144076 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-85bf9f6694-scp7q" podStartSLOduration=3.144054971 podStartE2EDuration="3.144054971s" podCreationTimestamp="2025-10-08 08:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:22:01.1307536 +0000 UTC m=+6464.260446201" watchObservedRunningTime="2025-10-08 08:22:01.144054971 +0000 UTC m=+6464.273747572" Oct 08 08:22:01 crc kubenswrapper[4958]: I1008 08:22:01.199578 4958 scope.go:117] "RemoveContainer" containerID="58e2193fa0748f4e3adcbe6a739734dbf7f20b291098458ea5174687be102590" Oct 08 08:22:02 crc kubenswrapper[4958]: I1008 08:22:02.100317 4958 scope.go:117] "RemoveContainer" containerID="78f467a30cf792e928e5d33e97fdaee624cdbd1faead519dc9d4887db37a6cf7" Oct 08 08:22:02 crc kubenswrapper[4958]: E1008 08:22:02.100972 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5b649fd949-xrjcm_openstack(4a15d996-db61-4e83-962f-d3e706e172db)\"" 
pod="openstack/heat-cfnapi-5b649fd949-xrjcm" podUID="4a15d996-db61-4e83-962f-d3e706e172db" Oct 08 08:22:02 crc kubenswrapper[4958]: I1008 08:22:02.103369 4958 scope.go:117] "RemoveContainer" containerID="a7d32d52829fade81506a7c6c73ee9d8e22af8b01de4848be487677a45fd8915" Oct 08 08:22:02 crc kubenswrapper[4958]: E1008 08:22:02.103926 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-f95fb56bf-t9ljh_openstack(dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8)\"" pod="openstack/heat-api-f95fb56bf-t9ljh" podUID="dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" Oct 08 08:22:03 crc kubenswrapper[4958]: I1008 08:22:03.035046 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:22:03 crc kubenswrapper[4958]: I1008 08:22:03.035346 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:22:03 crc kubenswrapper[4958]: I1008 08:22:03.042162 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-rbgl9"] Oct 08 08:22:03 crc kubenswrapper[4958]: I1008 08:22:03.052147 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-rbgl9"] Oct 08 08:22:03 crc kubenswrapper[4958]: I1008 08:22:03.059291 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:22:03 crc kubenswrapper[4958]: I1008 08:22:03.059340 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:22:03 crc kubenswrapper[4958]: I1008 08:22:03.110834 4958 scope.go:117] "RemoveContainer" containerID="78f467a30cf792e928e5d33e97fdaee624cdbd1faead519dc9d4887db37a6cf7" Oct 08 08:22:03 crc kubenswrapper[4958]: I1008 08:22:03.110887 4958 scope.go:117] "RemoveContainer" 
containerID="a7d32d52829fade81506a7c6c73ee9d8e22af8b01de4848be487677a45fd8915" Oct 08 08:22:03 crc kubenswrapper[4958]: E1008 08:22:03.111114 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5b649fd949-xrjcm_openstack(4a15d996-db61-4e83-962f-d3e706e172db)\"" pod="openstack/heat-cfnapi-5b649fd949-xrjcm" podUID="4a15d996-db61-4e83-962f-d3e706e172db" Oct 08 08:22:03 crc kubenswrapper[4958]: E1008 08:22:03.111183 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-f95fb56bf-t9ljh_openstack(dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8)\"" pod="openstack/heat-api-f95fb56bf-t9ljh" podUID="dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" Oct 08 08:22:03 crc kubenswrapper[4958]: I1008 08:22:03.598596 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03647698-5b8c-4794-a7e4-62812753afd2" path="/var/lib/kubelet/pods/03647698-5b8c-4794-a7e4-62812753afd2/volumes" Oct 08 08:22:04 crc kubenswrapper[4958]: I1008 08:22:04.291326 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-77b75c75ff-7p4b7" podUID="818841a4-5951-4c52-be2f-6f82b3bf170c" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.134:8000/healthcheck\": read tcp 10.217.0.2:57014->10.217.1.134:8000: read: connection reset by peer" Oct 08 08:22:04 crc kubenswrapper[4958]: I1008 08:22:04.321554 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-5cbb648c89-zrpp4" podUID="650ddf8a-8191-45ea-98a3-8f3d4102db84" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.135:8004/healthcheck\": read tcp 10.217.0.2:50936->10.217.1.135:8004: read: connection reset by peer" Oct 08 08:22:04 crc kubenswrapper[4958]: I1008 
08:22:04.948338 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-77b75c75ff-7p4b7" Oct 08 08:22:04 crc kubenswrapper[4958]: I1008 08:22:04.959250 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cbb648c89-zrpp4" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.060662 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-config-data\") pod \"818841a4-5951-4c52-be2f-6f82b3bf170c\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.060831 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-config-data-custom\") pod \"818841a4-5951-4c52-be2f-6f82b3bf170c\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.060973 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-config-data\") pod \"650ddf8a-8191-45ea-98a3-8f3d4102db84\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.061005 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-combined-ca-bundle\") pod \"650ddf8a-8191-45ea-98a3-8f3d4102db84\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.061030 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-288kq\" (UniqueName: 
\"kubernetes.io/projected/818841a4-5951-4c52-be2f-6f82b3bf170c-kube-api-access-288kq\") pod \"818841a4-5951-4c52-be2f-6f82b3bf170c\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.061047 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvqln\" (UniqueName: \"kubernetes.io/projected/650ddf8a-8191-45ea-98a3-8f3d4102db84-kube-api-access-mvqln\") pod \"650ddf8a-8191-45ea-98a3-8f3d4102db84\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.061131 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-config-data-custom\") pod \"650ddf8a-8191-45ea-98a3-8f3d4102db84\" (UID: \"650ddf8a-8191-45ea-98a3-8f3d4102db84\") " Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.061195 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-combined-ca-bundle\") pod \"818841a4-5951-4c52-be2f-6f82b3bf170c\" (UID: \"818841a4-5951-4c52-be2f-6f82b3bf170c\") " Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.067348 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "818841a4-5951-4c52-be2f-6f82b3bf170c" (UID: "818841a4-5951-4c52-be2f-6f82b3bf170c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.073832 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/818841a4-5951-4c52-be2f-6f82b3bf170c-kube-api-access-288kq" (OuterVolumeSpecName: "kube-api-access-288kq") pod "818841a4-5951-4c52-be2f-6f82b3bf170c" (UID: "818841a4-5951-4c52-be2f-6f82b3bf170c"). InnerVolumeSpecName "kube-api-access-288kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.076189 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650ddf8a-8191-45ea-98a3-8f3d4102db84-kube-api-access-mvqln" (OuterVolumeSpecName: "kube-api-access-mvqln") pod "650ddf8a-8191-45ea-98a3-8f3d4102db84" (UID: "650ddf8a-8191-45ea-98a3-8f3d4102db84"). InnerVolumeSpecName "kube-api-access-mvqln". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.079052 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "650ddf8a-8191-45ea-98a3-8f3d4102db84" (UID: "650ddf8a-8191-45ea-98a3-8f3d4102db84"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.104351 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "650ddf8a-8191-45ea-98a3-8f3d4102db84" (UID: "650ddf8a-8191-45ea-98a3-8f3d4102db84"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.107468 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "818841a4-5951-4c52-be2f-6f82b3bf170c" (UID: "818841a4-5951-4c52-be2f-6f82b3bf170c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.134280 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-config-data" (OuterVolumeSpecName: "config-data") pod "650ddf8a-8191-45ea-98a3-8f3d4102db84" (UID: "650ddf8a-8191-45ea-98a3-8f3d4102db84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.135888 4958 generic.go:334] "Generic (PLEG): container finished" podID="818841a4-5951-4c52-be2f-6f82b3bf170c" containerID="55642de54c706aa7adb36d4fb1519772a8315e6def08b5310c988091904c3b3b" exitCode=0 Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.136042 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-77b75c75ff-7p4b7" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.135944 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77b75c75ff-7p4b7" event={"ID":"818841a4-5951-4c52-be2f-6f82b3bf170c","Type":"ContainerDied","Data":"55642de54c706aa7adb36d4fb1519772a8315e6def08b5310c988091904c3b3b"} Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.136196 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77b75c75ff-7p4b7" event={"ID":"818841a4-5951-4c52-be2f-6f82b3bf170c","Type":"ContainerDied","Data":"d578325fc0e0c436a772ef4caa8578bcabbaa4b591b37cd565e65d8e250ecbef"} Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.136234 4958 scope.go:117] "RemoveContainer" containerID="55642de54c706aa7adb36d4fb1519772a8315e6def08b5310c988091904c3b3b" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.138568 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-config-data" (OuterVolumeSpecName: "config-data") pod "818841a4-5951-4c52-be2f-6f82b3bf170c" (UID: "818841a4-5951-4c52-be2f-6f82b3bf170c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.138699 4958 generic.go:334] "Generic (PLEG): container finished" podID="650ddf8a-8191-45ea-98a3-8f3d4102db84" containerID="e89a1be22cd48726d867bd36219c26b696d2e0cc858a9437832c0fbb4fd1ae54" exitCode=0 Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.138733 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5cbb648c89-zrpp4" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.138746 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cbb648c89-zrpp4" event={"ID":"650ddf8a-8191-45ea-98a3-8f3d4102db84","Type":"ContainerDied","Data":"e89a1be22cd48726d867bd36219c26b696d2e0cc858a9437832c0fbb4fd1ae54"} Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.138777 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cbb648c89-zrpp4" event={"ID":"650ddf8a-8191-45ea-98a3-8f3d4102db84","Type":"ContainerDied","Data":"a0951db4a480800e3f9481e6d13c0d418cd8b4d1fcddb507270675df1c231f1b"} Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.163918 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.169978 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.170019 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-288kq\" (UniqueName: \"kubernetes.io/projected/818841a4-5951-4c52-be2f-6f82b3bf170c-kube-api-access-288kq\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.170034 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvqln\" (UniqueName: \"kubernetes.io/projected/650ddf8a-8191-45ea-98a3-8f3d4102db84-kube-api-access-mvqln\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.170045 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/650ddf8a-8191-45ea-98a3-8f3d4102db84-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.170055 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.165654 4958 scope.go:117] "RemoveContainer" containerID="55642de54c706aa7adb36d4fb1519772a8315e6def08b5310c988091904c3b3b" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.170067 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.170260 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/818841a4-5951-4c52-be2f-6f82b3bf170c-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:05 crc kubenswrapper[4958]: E1008 08:22:05.176359 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55642de54c706aa7adb36d4fb1519772a8315e6def08b5310c988091904c3b3b\": container with ID starting with 55642de54c706aa7adb36d4fb1519772a8315e6def08b5310c988091904c3b3b not found: ID does not exist" containerID="55642de54c706aa7adb36d4fb1519772a8315e6def08b5310c988091904c3b3b" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.176404 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55642de54c706aa7adb36d4fb1519772a8315e6def08b5310c988091904c3b3b"} err="failed to get container status \"55642de54c706aa7adb36d4fb1519772a8315e6def08b5310c988091904c3b3b\": rpc error: code = NotFound desc = could not find container 
\"55642de54c706aa7adb36d4fb1519772a8315e6def08b5310c988091904c3b3b\": container with ID starting with 55642de54c706aa7adb36d4fb1519772a8315e6def08b5310c988091904c3b3b not found: ID does not exist" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.176429 4958 scope.go:117] "RemoveContainer" containerID="e89a1be22cd48726d867bd36219c26b696d2e0cc858a9437832c0fbb4fd1ae54" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.188671 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5cbb648c89-zrpp4"] Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.198980 4958 scope.go:117] "RemoveContainer" containerID="e89a1be22cd48726d867bd36219c26b696d2e0cc858a9437832c0fbb4fd1ae54" Oct 08 08:22:05 crc kubenswrapper[4958]: E1008 08:22:05.199508 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e89a1be22cd48726d867bd36219c26b696d2e0cc858a9437832c0fbb4fd1ae54\": container with ID starting with e89a1be22cd48726d867bd36219c26b696d2e0cc858a9437832c0fbb4fd1ae54 not found: ID does not exist" containerID="e89a1be22cd48726d867bd36219c26b696d2e0cc858a9437832c0fbb4fd1ae54" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.199664 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e89a1be22cd48726d867bd36219c26b696d2e0cc858a9437832c0fbb4fd1ae54"} err="failed to get container status \"e89a1be22cd48726d867bd36219c26b696d2e0cc858a9437832c0fbb4fd1ae54\": rpc error: code = NotFound desc = could not find container \"e89a1be22cd48726d867bd36219c26b696d2e0cc858a9437832c0fbb4fd1ae54\": container with ID starting with e89a1be22cd48726d867bd36219c26b696d2e0cc858a9437832c0fbb4fd1ae54 not found: ID does not exist" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.205161 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5cbb648c89-zrpp4"] Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.482305 
4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-77b75c75ff-7p4b7"] Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.492725 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-77b75c75ff-7p4b7"] Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.592171 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="650ddf8a-8191-45ea-98a3-8f3d4102db84" path="/var/lib/kubelet/pods/650ddf8a-8191-45ea-98a3-8f3d4102db84/volumes" Oct 08 08:22:05 crc kubenswrapper[4958]: I1008 08:22:05.593096 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="818841a4-5951-4c52-be2f-6f82b3bf170c" path="/var/lib/kubelet/pods/818841a4-5951-4c52-be2f-6f82b3bf170c/volumes" Oct 08 08:22:06 crc kubenswrapper[4958]: I1008 08:22:06.844419 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:22:06 crc kubenswrapper[4958]: I1008 08:22:06.844735 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:22:08 crc kubenswrapper[4958]: I1008 08:22:08.078569 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6d9898f7cb-8bmkv" Oct 08 08:22:08 crc kubenswrapper[4958]: I1008 08:22:08.158223 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-57f66797bb-n4q6l"] Oct 08 08:22:08 crc kubenswrapper[4958]: I1008 08:22:08.158482 4958 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/heat-engine-57f66797bb-n4q6l" podUID="1ed7c579-f8d9-4031-86e9-3335a67165cf" containerName="heat-engine" containerID="cri-o://2c7a7da73538fba3ddffa07e3d7af934bf9b95e372939faebbd3660ee8f9eac3" gracePeriod=60 Oct 08 08:22:08 crc kubenswrapper[4958]: E1008 08:22:08.164439 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c7a7da73538fba3ddffa07e3d7af934bf9b95e372939faebbd3660ee8f9eac3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 08 08:22:08 crc kubenswrapper[4958]: E1008 08:22:08.166584 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c7a7da73538fba3ddffa07e3d7af934bf9b95e372939faebbd3660ee8f9eac3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 08 08:22:08 crc kubenswrapper[4958]: E1008 08:22:08.170413 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c7a7da73538fba3ddffa07e3d7af934bf9b95e372939faebbd3660ee8f9eac3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 08 08:22:08 crc kubenswrapper[4958]: E1008 08:22:08.170519 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-57f66797bb-n4q6l" podUID="1ed7c579-f8d9-4031-86e9-3335a67165cf" containerName="heat-engine" Oct 08 08:22:10 crc kubenswrapper[4958]: I1008 08:22:10.248868 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b6f87784b-7wwr6" podUID="13a81014-1f64-436a-b3e0-1f8f79e3a7a0" containerName="horizon" 
probeResult="failure" output="Get \"https://10.217.1.125:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.125:8443: connect: connection refused" Oct 08 08:22:10 crc kubenswrapper[4958]: I1008 08:22:10.249547 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:22:10 crc kubenswrapper[4958]: E1008 08:22:10.250301 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c7a7da73538fba3ddffa07e3d7af934bf9b95e372939faebbd3660ee8f9eac3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 08 08:22:10 crc kubenswrapper[4958]: E1008 08:22:10.253169 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c7a7da73538fba3ddffa07e3d7af934bf9b95e372939faebbd3660ee8f9eac3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 08 08:22:10 crc kubenswrapper[4958]: E1008 08:22:10.256516 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c7a7da73538fba3ddffa07e3d7af934bf9b95e372939faebbd3660ee8f9eac3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 08 08:22:10 crc kubenswrapper[4958]: E1008 08:22:10.256609 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-57f66797bb-n4q6l" podUID="1ed7c579-f8d9-4031-86e9-3335a67165cf" containerName="heat-engine" Oct 08 08:22:10 crc kubenswrapper[4958]: I1008 08:22:10.637973 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/heat-cfnapi-67d857f9d-4g9rh" Oct 08 08:22:10 crc kubenswrapper[4958]: I1008 08:22:10.642570 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-85bf9f6694-scp7q" Oct 08 08:22:10 crc kubenswrapper[4958]: I1008 08:22:10.710495 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5b649fd949-xrjcm"] Oct 08 08:22:10 crc kubenswrapper[4958]: I1008 08:22:10.737753 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-f95fb56bf-t9ljh"] Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.228899 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b649fd949-xrjcm" event={"ID":"4a15d996-db61-4e83-962f-d3e706e172db","Type":"ContainerDied","Data":"fb39e7012b26f6ee350b610bf02860fe4a3f55d7d893b31ce3062b6f895ddc06"} Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.229155 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb39e7012b26f6ee350b610bf02860fe4a3f55d7d893b31ce3062b6f895ddc06" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.230466 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f95fb56bf-t9ljh" event={"ID":"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8","Type":"ContainerDied","Data":"e4ca73ff5aee8f73880a9ec75cbf1c5ab8335ddf0cd3efee6e9a8090e6d367b2"} Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.230484 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4ca73ff5aee8f73880a9ec75cbf1c5ab8335ddf0cd3efee6e9a8090e6d367b2" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.249310 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.254202 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.342115 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-config-data-custom\") pod \"4a15d996-db61-4e83-962f-d3e706e172db\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") " Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.342209 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-combined-ca-bundle\") pod \"4a15d996-db61-4e83-962f-d3e706e172db\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") " Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.342359 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-config-data\") pod \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.342404 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5l66\" (UniqueName: \"kubernetes.io/projected/4a15d996-db61-4e83-962f-d3e706e172db-kube-api-access-k5l66\") pod \"4a15d996-db61-4e83-962f-d3e706e172db\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") " Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.342470 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-config-data\") pod \"4a15d996-db61-4e83-962f-d3e706e172db\" (UID: \"4a15d996-db61-4e83-962f-d3e706e172db\") " Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.342509 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-config-data-custom\") pod \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.342538 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-combined-ca-bundle\") pod \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.342560 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srqn4\" (UniqueName: \"kubernetes.io/projected/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-kube-api-access-srqn4\") pod \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\" (UID: \"dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8\") " Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.348626 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-kube-api-access-srqn4" (OuterVolumeSpecName: "kube-api-access-srqn4") pod "dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" (UID: "dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8"). InnerVolumeSpecName "kube-api-access-srqn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.348740 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" (UID: "dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.362132 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a15d996-db61-4e83-962f-d3e706e172db-kube-api-access-k5l66" (OuterVolumeSpecName: "kube-api-access-k5l66") pod "4a15d996-db61-4e83-962f-d3e706e172db" (UID: "4a15d996-db61-4e83-962f-d3e706e172db"). InnerVolumeSpecName "kube-api-access-k5l66". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.362860 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4a15d996-db61-4e83-962f-d3e706e172db" (UID: "4a15d996-db61-4e83-962f-d3e706e172db"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.376213 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" (UID: "dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.383323 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a15d996-db61-4e83-962f-d3e706e172db" (UID: "4a15d996-db61-4e83-962f-d3e706e172db"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.402317 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-config-data" (OuterVolumeSpecName: "config-data") pod "dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" (UID: "dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.426664 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-config-data" (OuterVolumeSpecName: "config-data") pod "4a15d996-db61-4e83-962f-d3e706e172db" (UID: "4a15d996-db61-4e83-962f-d3e706e172db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.445420 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.445465 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5l66\" (UniqueName: \"kubernetes.io/projected/4a15d996-db61-4e83-962f-d3e706e172db-kube-api-access-k5l66\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.445479 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.445493 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:11 crc 
kubenswrapper[4958]: I1008 08:22:11.445505 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.445516 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srqn4\" (UniqueName: \"kubernetes.io/projected/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8-kube-api-access-srqn4\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.445528 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:11 crc kubenswrapper[4958]: I1008 08:22:11.445539 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a15d996-db61-4e83-962f-d3e706e172db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:12 crc kubenswrapper[4958]: I1008 08:22:12.239890 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-f95fb56bf-t9ljh" Oct 08 08:22:12 crc kubenswrapper[4958]: I1008 08:22:12.239896 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5b649fd949-xrjcm" Oct 08 08:22:12 crc kubenswrapper[4958]: I1008 08:22:12.266121 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5b649fd949-xrjcm"] Oct 08 08:22:12 crc kubenswrapper[4958]: I1008 08:22:12.273323 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5b649fd949-xrjcm"] Oct 08 08:22:12 crc kubenswrapper[4958]: I1008 08:22:12.280782 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-f95fb56bf-t9ljh"] Oct 08 08:22:12 crc kubenswrapper[4958]: I1008 08:22:12.290228 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-f95fb56bf-t9ljh"] Oct 08 08:22:13 crc kubenswrapper[4958]: I1008 08:22:13.637007 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a15d996-db61-4e83-962f-d3e706e172db" path="/var/lib/kubelet/pods/4a15d996-db61-4e83-962f-d3e706e172db/volumes" Oct 08 08:22:13 crc kubenswrapper[4958]: I1008 08:22:13.639136 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" path="/var/lib/kubelet/pods/dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8/volumes" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.182635 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.263112 4958 generic.go:334] "Generic (PLEG): container finished" podID="13a81014-1f64-436a-b3e0-1f8f79e3a7a0" containerID="9de6baa2b5ebf8471a4de3d0b5a5dcaf4591cff08c2f4210d28ebb569a32bd36" exitCode=137 Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.263165 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b6f87784b-7wwr6" event={"ID":"13a81014-1f64-436a-b3e0-1f8f79e3a7a0","Type":"ContainerDied","Data":"9de6baa2b5ebf8471a4de3d0b5a5dcaf4591cff08c2f4210d28ebb569a32bd36"} Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.263200 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b6f87784b-7wwr6" event={"ID":"13a81014-1f64-436a-b3e0-1f8f79e3a7a0","Type":"ContainerDied","Data":"2233b86fbc6dbc4970d9cb0cf4258e71812e930d6b3b18a3ab12b3c7736d28f3"} Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.263223 4958 scope.go:117] "RemoveContainer" containerID="91150fa63e0db130336ee5a85cd25fa4d00f6b2f64622437b68a811f247d0bbc" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.263240 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b6f87784b-7wwr6" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.315238 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-logs\") pod \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.315323 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-scripts\") pod \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.315372 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-config-data\") pod \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.315408 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-horizon-secret-key\") pod \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.315531 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-combined-ca-bundle\") pod \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.315617 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7w9j\" (UniqueName: 
\"kubernetes.io/projected/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-kube-api-access-v7w9j\") pod \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.315706 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-horizon-tls-certs\") pod \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\" (UID: \"13a81014-1f64-436a-b3e0-1f8f79e3a7a0\") " Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.318633 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-logs" (OuterVolumeSpecName: "logs") pod "13a81014-1f64-436a-b3e0-1f8f79e3a7a0" (UID: "13a81014-1f64-436a-b3e0-1f8f79e3a7a0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.321879 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "13a81014-1f64-436a-b3e0-1f8f79e3a7a0" (UID: "13a81014-1f64-436a-b3e0-1f8f79e3a7a0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.323912 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-kube-api-access-v7w9j" (OuterVolumeSpecName: "kube-api-access-v7w9j") pod "13a81014-1f64-436a-b3e0-1f8f79e3a7a0" (UID: "13a81014-1f64-436a-b3e0-1f8f79e3a7a0"). InnerVolumeSpecName "kube-api-access-v7w9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.365066 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-config-data" (OuterVolumeSpecName: "config-data") pod "13a81014-1f64-436a-b3e0-1f8f79e3a7a0" (UID: "13a81014-1f64-436a-b3e0-1f8f79e3a7a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.367194 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-scripts" (OuterVolumeSpecName: "scripts") pod "13a81014-1f64-436a-b3e0-1f8f79e3a7a0" (UID: "13a81014-1f64-436a-b3e0-1f8f79e3a7a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.376538 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13a81014-1f64-436a-b3e0-1f8f79e3a7a0" (UID: "13a81014-1f64-436a-b3e0-1f8f79e3a7a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.403179 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "13a81014-1f64-436a-b3e0-1f8f79e3a7a0" (UID: "13a81014-1f64-436a-b3e0-1f8f79e3a7a0"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.419538 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-logs\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.419607 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.419625 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.419643 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.419663 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.419679 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7w9j\" (UniqueName: \"kubernetes.io/projected/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-kube-api-access-v7w9j\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.419696 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a81014-1f64-436a-b3e0-1f8f79e3a7a0-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.459668 4958 scope.go:117] 
"RemoveContainer" containerID="9de6baa2b5ebf8471a4de3d0b5a5dcaf4591cff08c2f4210d28ebb569a32bd36" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.489093 4958 scope.go:117] "RemoveContainer" containerID="91150fa63e0db130336ee5a85cd25fa4d00f6b2f64622437b68a811f247d0bbc" Oct 08 08:22:14 crc kubenswrapper[4958]: E1008 08:22:14.489708 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91150fa63e0db130336ee5a85cd25fa4d00f6b2f64622437b68a811f247d0bbc\": container with ID starting with 91150fa63e0db130336ee5a85cd25fa4d00f6b2f64622437b68a811f247d0bbc not found: ID does not exist" containerID="91150fa63e0db130336ee5a85cd25fa4d00f6b2f64622437b68a811f247d0bbc" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.489764 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91150fa63e0db130336ee5a85cd25fa4d00f6b2f64622437b68a811f247d0bbc"} err="failed to get container status \"91150fa63e0db130336ee5a85cd25fa4d00f6b2f64622437b68a811f247d0bbc\": rpc error: code = NotFound desc = could not find container \"91150fa63e0db130336ee5a85cd25fa4d00f6b2f64622437b68a811f247d0bbc\": container with ID starting with 91150fa63e0db130336ee5a85cd25fa4d00f6b2f64622437b68a811f247d0bbc not found: ID does not exist" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.489797 4958 scope.go:117] "RemoveContainer" containerID="9de6baa2b5ebf8471a4de3d0b5a5dcaf4591cff08c2f4210d28ebb569a32bd36" Oct 08 08:22:14 crc kubenswrapper[4958]: E1008 08:22:14.490193 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de6baa2b5ebf8471a4de3d0b5a5dcaf4591cff08c2f4210d28ebb569a32bd36\": container with ID starting with 9de6baa2b5ebf8471a4de3d0b5a5dcaf4591cff08c2f4210d28ebb569a32bd36 not found: ID does not exist" containerID="9de6baa2b5ebf8471a4de3d0b5a5dcaf4591cff08c2f4210d28ebb569a32bd36" Oct 08 08:22:14 crc 
kubenswrapper[4958]: I1008 08:22:14.490229 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de6baa2b5ebf8471a4de3d0b5a5dcaf4591cff08c2f4210d28ebb569a32bd36"} err="failed to get container status \"9de6baa2b5ebf8471a4de3d0b5a5dcaf4591cff08c2f4210d28ebb569a32bd36\": rpc error: code = NotFound desc = could not find container \"9de6baa2b5ebf8471a4de3d0b5a5dcaf4591cff08c2f4210d28ebb569a32bd36\": container with ID starting with 9de6baa2b5ebf8471a4de3d0b5a5dcaf4591cff08c2f4210d28ebb569a32bd36 not found: ID does not exist" Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.615102 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b6f87784b-7wwr6"] Oct 08 08:22:14 crc kubenswrapper[4958]: I1008 08:22:14.626174 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7b6f87784b-7wwr6"] Oct 08 08:22:15 crc kubenswrapper[4958]: I1008 08:22:15.589752 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13a81014-1f64-436a-b3e0-1f8f79e3a7a0" path="/var/lib/kubelet/pods/13a81014-1f64-436a-b3e0-1f8f79e3a7a0/volumes" Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.321083 4958 generic.go:334] "Generic (PLEG): container finished" podID="1ed7c579-f8d9-4031-86e9-3335a67165cf" containerID="2c7a7da73538fba3ddffa07e3d7af934bf9b95e372939faebbd3660ee8f9eac3" exitCode=0 Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.321345 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-57f66797bb-n4q6l" event={"ID":"1ed7c579-f8d9-4031-86e9-3335a67165cf","Type":"ContainerDied","Data":"2c7a7da73538fba3ddffa07e3d7af934bf9b95e372939faebbd3660ee8f9eac3"} Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.321910 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-57f66797bb-n4q6l" 
event={"ID":"1ed7c579-f8d9-4031-86e9-3335a67165cf","Type":"ContainerDied","Data":"eccb224e6db92c372f1d7b5c308dc191f33de7c256561de2f773892b88076955"} Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.321938 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eccb224e6db92c372f1d7b5c308dc191f33de7c256561de2f773892b88076955" Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.382934 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-57f66797bb-n4q6l" Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.523833 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-config-data-custom\") pod \"1ed7c579-f8d9-4031-86e9-3335a67165cf\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.524080 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-config-data\") pod \"1ed7c579-f8d9-4031-86e9-3335a67165cf\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.524131 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmn5f\" (UniqueName: \"kubernetes.io/projected/1ed7c579-f8d9-4031-86e9-3335a67165cf-kube-api-access-zmn5f\") pod \"1ed7c579-f8d9-4031-86e9-3335a67165cf\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.524210 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-combined-ca-bundle\") pod \"1ed7c579-f8d9-4031-86e9-3335a67165cf\" (UID: \"1ed7c579-f8d9-4031-86e9-3335a67165cf\") " Oct 08 
08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.531484 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed7c579-f8d9-4031-86e9-3335a67165cf-kube-api-access-zmn5f" (OuterVolumeSpecName: "kube-api-access-zmn5f") pod "1ed7c579-f8d9-4031-86e9-3335a67165cf" (UID: "1ed7c579-f8d9-4031-86e9-3335a67165cf"). InnerVolumeSpecName "kube-api-access-zmn5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.532183 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1ed7c579-f8d9-4031-86e9-3335a67165cf" (UID: "1ed7c579-f8d9-4031-86e9-3335a67165cf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.586454 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ed7c579-f8d9-4031-86e9-3335a67165cf" (UID: "1ed7c579-f8d9-4031-86e9-3335a67165cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.600350 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-config-data" (OuterVolumeSpecName: "config-data") pod "1ed7c579-f8d9-4031-86e9-3335a67165cf" (UID: "1ed7c579-f8d9-4031-86e9-3335a67165cf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.627570 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmn5f\" (UniqueName: \"kubernetes.io/projected/1ed7c579-f8d9-4031-86e9-3335a67165cf-kube-api-access-zmn5f\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.627621 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.627642 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:18 crc kubenswrapper[4958]: I1008 08:22:18.627660 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed7c579-f8d9-4031-86e9-3335a67165cf-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:19 crc kubenswrapper[4958]: I1008 08:22:19.335036 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-57f66797bb-n4q6l" Oct 08 08:22:19 crc kubenswrapper[4958]: I1008 08:22:19.396187 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-57f66797bb-n4q6l"] Oct 08 08:22:19 crc kubenswrapper[4958]: I1008 08:22:19.409661 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-57f66797bb-n4q6l"] Oct 08 08:22:19 crc kubenswrapper[4958]: I1008 08:22:19.597520 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ed7c579-f8d9-4031-86e9-3335a67165cf" path="/var/lib/kubelet/pods/1ed7c579-f8d9-4031-86e9-3335a67165cf/volumes" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.428664 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x"] Oct 08 08:22:31 crc kubenswrapper[4958]: E1008 08:22:31.429643 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a81014-1f64-436a-b3e0-1f8f79e3a7a0" containerName="horizon" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.429656 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a81014-1f64-436a-b3e0-1f8f79e3a7a0" containerName="horizon" Oct 08 08:22:31 crc kubenswrapper[4958]: E1008 08:22:31.429670 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818841a4-5951-4c52-be2f-6f82b3bf170c" containerName="heat-cfnapi" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.429677 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="818841a4-5951-4c52-be2f-6f82b3bf170c" containerName="heat-cfnapi" Oct 08 08:22:31 crc kubenswrapper[4958]: E1008 08:22:31.429687 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" containerName="heat-api" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.429694 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" 
containerName="heat-api" Oct 08 08:22:31 crc kubenswrapper[4958]: E1008 08:22:31.429707 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a15d996-db61-4e83-962f-d3e706e172db" containerName="heat-cfnapi" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.429712 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a15d996-db61-4e83-962f-d3e706e172db" containerName="heat-cfnapi" Oct 08 08:22:31 crc kubenswrapper[4958]: E1008 08:22:31.429723 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650ddf8a-8191-45ea-98a3-8f3d4102db84" containerName="heat-api" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.429728 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="650ddf8a-8191-45ea-98a3-8f3d4102db84" containerName="heat-api" Oct 08 08:22:31 crc kubenswrapper[4958]: E1008 08:22:31.429740 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a81014-1f64-436a-b3e0-1f8f79e3a7a0" containerName="horizon-log" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.429747 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a81014-1f64-436a-b3e0-1f8f79e3a7a0" containerName="horizon-log" Oct 08 08:22:31 crc kubenswrapper[4958]: E1008 08:22:31.429756 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" containerName="heat-api" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.429762 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" containerName="heat-api" Oct 08 08:22:31 crc kubenswrapper[4958]: E1008 08:22:31.429772 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed7c579-f8d9-4031-86e9-3335a67165cf" containerName="heat-engine" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.429778 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed7c579-f8d9-4031-86e9-3335a67165cf" containerName="heat-engine" Oct 08 08:22:31 crc kubenswrapper[4958]: 
E1008 08:22:31.429793 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a15d996-db61-4e83-962f-d3e706e172db" containerName="heat-cfnapi" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.429798 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a15d996-db61-4e83-962f-d3e706e172db" containerName="heat-cfnapi" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.429990 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="13a81014-1f64-436a-b3e0-1f8f79e3a7a0" containerName="horizon-log" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.430005 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="818841a4-5951-4c52-be2f-6f82b3bf170c" containerName="heat-cfnapi" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.430017 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a15d996-db61-4e83-962f-d3e706e172db" containerName="heat-cfnapi" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.430033 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" containerName="heat-api" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.430042 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd6edbfc-ab77-4f9c-b03a-c3100d4a96d8" containerName="heat-api" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.430053 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ed7c579-f8d9-4031-86e9-3335a67165cf" containerName="heat-engine" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.430068 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="650ddf8a-8191-45ea-98a3-8f3d4102db84" containerName="heat-api" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.430076 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="13a81014-1f64-436a-b3e0-1f8f79e3a7a0" containerName="horizon" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.430449 4958 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4a15d996-db61-4e83-962f-d3e706e172db" containerName="heat-cfnapi" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.431447 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.435250 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.446928 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x"] Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.455105 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x\" (UID: \"a06b7639-7a8f-4e12-823d-d79e87e8c8ca\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.455243 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tvrr\" (UniqueName: \"kubernetes.io/projected/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-kube-api-access-7tvrr\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x\" (UID: \"a06b7639-7a8f-4e12-823d-d79e87e8c8ca\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.455377 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-util\") pod 
\"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x\" (UID: \"a06b7639-7a8f-4e12-823d-d79e87e8c8ca\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.557268 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x\" (UID: \"a06b7639-7a8f-4e12-823d-d79e87e8c8ca\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.557390 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tvrr\" (UniqueName: \"kubernetes.io/projected/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-kube-api-access-7tvrr\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x\" (UID: \"a06b7639-7a8f-4e12-823d-d79e87e8c8ca\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.557479 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x\" (UID: \"a06b7639-7a8f-4e12-823d-d79e87e8c8ca\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.558047 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x\" (UID: \"a06b7639-7a8f-4e12-823d-d79e87e8c8ca\") " 
pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.558387 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x\" (UID: \"a06b7639-7a8f-4e12-823d-d79e87e8c8ca\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.593009 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tvrr\" (UniqueName: \"kubernetes.io/projected/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-kube-api-access-7tvrr\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x\" (UID: \"a06b7639-7a8f-4e12-823d-d79e87e8c8ca\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" Oct 08 08:22:31 crc kubenswrapper[4958]: I1008 08:22:31.769867 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" Oct 08 08:22:32 crc kubenswrapper[4958]: I1008 08:22:32.129605 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x"] Oct 08 08:22:32 crc kubenswrapper[4958]: I1008 08:22:32.503101 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" event={"ID":"a06b7639-7a8f-4e12-823d-d79e87e8c8ca","Type":"ContainerStarted","Data":"b46b8f2524af8cf2cb045538cf9506253fd634d8503b73f0aecee86ae39b8d3c"} Oct 08 08:22:32 crc kubenswrapper[4958]: I1008 08:22:32.504529 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" event={"ID":"a06b7639-7a8f-4e12-823d-d79e87e8c8ca","Type":"ContainerStarted","Data":"d5d5b0fc323bf769a790433c092161eb5db5fbdb197dee798d6464bb1036a293"} Oct 08 08:22:33 crc kubenswrapper[4958]: I1008 08:22:33.565720 4958 generic.go:334] "Generic (PLEG): container finished" podID="a06b7639-7a8f-4e12-823d-d79e87e8c8ca" containerID="b46b8f2524af8cf2cb045538cf9506253fd634d8503b73f0aecee86ae39b8d3c" exitCode=0 Oct 08 08:22:33 crc kubenswrapper[4958]: I1008 08:22:33.565784 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" event={"ID":"a06b7639-7a8f-4e12-823d-d79e87e8c8ca","Type":"ContainerDied","Data":"b46b8f2524af8cf2cb045538cf9506253fd634d8503b73f0aecee86ae39b8d3c"} Oct 08 08:22:34 crc kubenswrapper[4958]: I1008 08:22:34.065032 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-qnfbz"] Oct 08 08:22:34 crc kubenswrapper[4958]: I1008 08:22:34.078875 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-qnfbz"] Oct 08 
08:22:35 crc kubenswrapper[4958]: I1008 08:22:35.591532 4958 generic.go:334] "Generic (PLEG): container finished" podID="a06b7639-7a8f-4e12-823d-d79e87e8c8ca" containerID="d41188f9ccb344dabe582cf651a7184814e00be95ee1e0a582368a70a1007acc" exitCode=0 Oct 08 08:22:35 crc kubenswrapper[4958]: I1008 08:22:35.592289 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="339c698b-4c11-45e7-99b7-6868ac04f0da" path="/var/lib/kubelet/pods/339c698b-4c11-45e7-99b7-6868ac04f0da/volumes" Oct 08 08:22:35 crc kubenswrapper[4958]: I1008 08:22:35.594131 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" event={"ID":"a06b7639-7a8f-4e12-823d-d79e87e8c8ca","Type":"ContainerDied","Data":"d41188f9ccb344dabe582cf651a7184814e00be95ee1e0a582368a70a1007acc"} Oct 08 08:22:36 crc kubenswrapper[4958]: I1008 08:22:36.610871 4958 generic.go:334] "Generic (PLEG): container finished" podID="a06b7639-7a8f-4e12-823d-d79e87e8c8ca" containerID="55a45221252c29f552290276c458cd3b29306df51babadd938151194fdf2ecda" exitCode=0 Oct 08 08:22:36 crc kubenswrapper[4958]: I1008 08:22:36.611388 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" event={"ID":"a06b7639-7a8f-4e12-823d-d79e87e8c8ca","Type":"ContainerDied","Data":"55a45221252c29f552290276c458cd3b29306df51babadd938151194fdf2ecda"} Oct 08 08:22:36 crc kubenswrapper[4958]: I1008 08:22:36.845273 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:22:36 crc kubenswrapper[4958]: I1008 08:22:36.845359 4958 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:22:37 crc kubenswrapper[4958]: I1008 08:22:37.993984 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" Oct 08 08:22:38 crc kubenswrapper[4958]: I1008 08:22:38.009634 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tvrr\" (UniqueName: \"kubernetes.io/projected/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-kube-api-access-7tvrr\") pod \"a06b7639-7a8f-4e12-823d-d79e87e8c8ca\" (UID: \"a06b7639-7a8f-4e12-823d-d79e87e8c8ca\") " Oct 08 08:22:38 crc kubenswrapper[4958]: I1008 08:22:38.009883 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-util\") pod \"a06b7639-7a8f-4e12-823d-d79e87e8c8ca\" (UID: \"a06b7639-7a8f-4e12-823d-d79e87e8c8ca\") " Oct 08 08:22:38 crc kubenswrapper[4958]: I1008 08:22:38.010077 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-bundle\") pod \"a06b7639-7a8f-4e12-823d-d79e87e8c8ca\" (UID: \"a06b7639-7a8f-4e12-823d-d79e87e8c8ca\") " Oct 08 08:22:38 crc kubenswrapper[4958]: I1008 08:22:38.016096 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-bundle" (OuterVolumeSpecName: "bundle") pod "a06b7639-7a8f-4e12-823d-d79e87e8c8ca" (UID: "a06b7639-7a8f-4e12-823d-d79e87e8c8ca"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:22:38 crc kubenswrapper[4958]: I1008 08:22:38.020247 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-kube-api-access-7tvrr" (OuterVolumeSpecName: "kube-api-access-7tvrr") pod "a06b7639-7a8f-4e12-823d-d79e87e8c8ca" (UID: "a06b7639-7a8f-4e12-823d-d79e87e8c8ca"). InnerVolumeSpecName "kube-api-access-7tvrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:22:38 crc kubenswrapper[4958]: I1008 08:22:38.026488 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-util" (OuterVolumeSpecName: "util") pod "a06b7639-7a8f-4e12-823d-d79e87e8c8ca" (UID: "a06b7639-7a8f-4e12-823d-d79e87e8c8ca"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:22:38 crc kubenswrapper[4958]: I1008 08:22:38.115305 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tvrr\" (UniqueName: \"kubernetes.io/projected/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-kube-api-access-7tvrr\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:38 crc kubenswrapper[4958]: I1008 08:22:38.115329 4958 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-util\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:38 crc kubenswrapper[4958]: I1008 08:22:38.115339 4958 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a06b7639-7a8f-4e12-823d-d79e87e8c8ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:22:38 crc kubenswrapper[4958]: I1008 08:22:38.639867 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" 
event={"ID":"a06b7639-7a8f-4e12-823d-d79e87e8c8ca","Type":"ContainerDied","Data":"d5d5b0fc323bf769a790433c092161eb5db5fbdb197dee798d6464bb1036a293"} Oct 08 08:22:38 crc kubenswrapper[4958]: I1008 08:22:38.640220 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d5b0fc323bf769a790433c092161eb5db5fbdb197dee798d6464bb1036a293" Oct 08 08:22:38 crc kubenswrapper[4958]: I1008 08:22:38.639924 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x" Oct 08 08:22:43 crc kubenswrapper[4958]: I1008 08:22:43.051359 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e266-account-create-lh8tw"] Oct 08 08:22:43 crc kubenswrapper[4958]: I1008 08:22:43.060108 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e266-account-create-lh8tw"] Oct 08 08:22:43 crc kubenswrapper[4958]: I1008 08:22:43.634689 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad898a7-a520-4df3-a85c-9fa7182286cf" path="/var/lib/kubelet/pods/5ad898a7-a520-4df3-a85c-9fa7182286cf/volumes" Oct 08 08:22:44 crc kubenswrapper[4958]: I1008 08:22:44.313786 4958 scope.go:117] "RemoveContainer" containerID="9d969ec44549014e9979dea3b938e61a52572574dde68a6727eb5a0c087714cc" Oct 08 08:22:44 crc kubenswrapper[4958]: I1008 08:22:44.366421 4958 scope.go:117] "RemoveContainer" containerID="4d9eb497cd07b6efaef4ab492e1d246cb89a1edda44392542bb5dc019bdea167" Oct 08 08:22:44 crc kubenswrapper[4958]: I1008 08:22:44.420555 4958 scope.go:117] "RemoveContainer" containerID="70d68eea314efdc7641a8d1f8aa8b4dd755495e6ced34fa6780723e2cd47a1f3" Oct 08 08:22:44 crc kubenswrapper[4958]: I1008 08:22:44.469872 4958 scope.go:117] "RemoveContainer" containerID="3d1730d7f164127e1f402fc206425c582aae62bf93d5f72c1a9bbfba6a3b8d00" Oct 08 08:22:44 crc kubenswrapper[4958]: I1008 08:22:44.521165 4958 scope.go:117] 
"RemoveContainer" containerID="9159fca489b7bdce707a6935f4e0a36140a52121b3b73df5ce9001efdf6c04c1" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.038183 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-r5fgj"] Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.045527 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-r5fgj"] Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.298269 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-f29ft"] Oct 08 08:22:50 crc kubenswrapper[4958]: E1008 08:22:50.298757 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a06b7639-7a8f-4e12-823d-d79e87e8c8ca" containerName="pull" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.298779 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06b7639-7a8f-4e12-823d-d79e87e8c8ca" containerName="pull" Oct 08 08:22:50 crc kubenswrapper[4958]: E1008 08:22:50.298805 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a06b7639-7a8f-4e12-823d-d79e87e8c8ca" containerName="util" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.298813 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06b7639-7a8f-4e12-823d-d79e87e8c8ca" containerName="util" Oct 08 08:22:50 crc kubenswrapper[4958]: E1008 08:22:50.298823 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a06b7639-7a8f-4e12-823d-d79e87e8c8ca" containerName="extract" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.298830 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06b7639-7a8f-4e12-823d-d79e87e8c8ca" containerName="extract" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.299154 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a06b7639-7a8f-4e12-823d-d79e87e8c8ca" containerName="extract" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.299938 4958 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-f29ft" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.303690 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.303910 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-d8fhg" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.308341 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.318740 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-f29ft"] Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.353530 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-s98xq"] Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.354763 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-s98xq" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.363345 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-6229j"] Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.365262 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.365758 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-66s7l" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.377356 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-6229j" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.385658 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl4x2\" (UniqueName: \"kubernetes.io/projected/bccd6a58-4d99-48b2-8be1-a06433166b30-kube-api-access-hl4x2\") pod \"obo-prometheus-operator-7c8cf85677-f29ft\" (UID: \"bccd6a58-4d99-48b2-8be1-a06433166b30\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-f29ft" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.391859 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-s98xq"] Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.425087 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-6229j"] Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.488925 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl4x2\" 
(UniqueName: \"kubernetes.io/projected/bccd6a58-4d99-48b2-8be1-a06433166b30-kube-api-access-hl4x2\") pod \"obo-prometheus-operator-7c8cf85677-f29ft\" (UID: \"bccd6a58-4d99-48b2-8be1-a06433166b30\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-f29ft" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.489047 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30fb4dfe-018c-4772-b527-7955cab889da-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b988d98c-6229j\" (UID: \"30fb4dfe-018c-4772-b527-7955cab889da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-6229j" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.489132 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b191752-8192-4f54-9b6e-f3027b9e1104-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b988d98c-s98xq\" (UID: \"7b191752-8192-4f54-9b6e-f3027b9e1104\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-s98xq" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.489197 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b191752-8192-4f54-9b6e-f3027b9e1104-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b988d98c-s98xq\" (UID: \"7b191752-8192-4f54-9b6e-f3027b9e1104\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-s98xq" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.489241 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30fb4dfe-018c-4772-b527-7955cab889da-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-56b988d98c-6229j\" (UID: \"30fb4dfe-018c-4772-b527-7955cab889da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-6229j" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.517823 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl4x2\" (UniqueName: \"kubernetes.io/projected/bccd6a58-4d99-48b2-8be1-a06433166b30-kube-api-access-hl4x2\") pod \"obo-prometheus-operator-7c8cf85677-f29ft\" (UID: \"bccd6a58-4d99-48b2-8be1-a06433166b30\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-f29ft" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.521683 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-29nr5"] Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.522924 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-29nr5" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.525639 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-t675d" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.526890 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.541109 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-29nr5"] Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.593798 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b191752-8192-4f54-9b6e-f3027b9e1104-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b988d98c-s98xq\" (UID: \"7b191752-8192-4f54-9b6e-f3027b9e1104\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-s98xq" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.593891 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30fb4dfe-018c-4772-b527-7955cab889da-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b988d98c-6229j\" (UID: \"30fb4dfe-018c-4772-b527-7955cab889da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-6229j" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.594144 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30fb4dfe-018c-4772-b527-7955cab889da-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b988d98c-6229j\" (UID: \"30fb4dfe-018c-4772-b527-7955cab889da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-6229j" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.594308 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b191752-8192-4f54-9b6e-f3027b9e1104-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b988d98c-s98xq\" (UID: \"7b191752-8192-4f54-9b6e-f3027b9e1104\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-s98xq" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.600801 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30fb4dfe-018c-4772-b527-7955cab889da-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b988d98c-6229j\" (UID: \"30fb4dfe-018c-4772-b527-7955cab889da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-6229j" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.603131 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30fb4dfe-018c-4772-b527-7955cab889da-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b988d98c-6229j\" (UID: \"30fb4dfe-018c-4772-b527-7955cab889da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-6229j" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.607038 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b191752-8192-4f54-9b6e-f3027b9e1104-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b988d98c-s98xq\" (UID: \"7b191752-8192-4f54-9b6e-f3027b9e1104\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-s98xq" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.618548 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b191752-8192-4f54-9b6e-f3027b9e1104-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b988d98c-s98xq\" (UID: \"7b191752-8192-4f54-9b6e-f3027b9e1104\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-s98xq" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.631012 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-f29ft" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.642156 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-2vdst"] Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.646020 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-2vdst" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.652774 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-zx9ss" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.657524 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-2vdst"] Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.676345 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-s98xq" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.695780 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1fe7fd1d-f173-4506-81a2-031a921210f7-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-29nr5\" (UID: \"1fe7fd1d-f173-4506-81a2-031a921210f7\") " pod="openshift-operators/observability-operator-cc5f78dfc-29nr5" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.695885 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfz5j\" (UniqueName: \"kubernetes.io/projected/1fe7fd1d-f173-4506-81a2-031a921210f7-kube-api-access-kfz5j\") pod \"observability-operator-cc5f78dfc-29nr5\" (UID: \"1fe7fd1d-f173-4506-81a2-031a921210f7\") " pod="openshift-operators/observability-operator-cc5f78dfc-29nr5" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.711749 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-6229j" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.801753 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfz5j\" (UniqueName: \"kubernetes.io/projected/1fe7fd1d-f173-4506-81a2-031a921210f7-kube-api-access-kfz5j\") pod \"observability-operator-cc5f78dfc-29nr5\" (UID: \"1fe7fd1d-f173-4506-81a2-031a921210f7\") " pod="openshift-operators/observability-operator-cc5f78dfc-29nr5" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.802155 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0151dc13-9ece-481a-9c20-158d3d50af3a-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-2vdst\" (UID: \"0151dc13-9ece-481a-9c20-158d3d50af3a\") " pod="openshift-operators/perses-operator-54bc95c9fb-2vdst" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.802188 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzpdb\" (UniqueName: \"kubernetes.io/projected/0151dc13-9ece-481a-9c20-158d3d50af3a-kube-api-access-xzpdb\") pod \"perses-operator-54bc95c9fb-2vdst\" (UID: \"0151dc13-9ece-481a-9c20-158d3d50af3a\") " pod="openshift-operators/perses-operator-54bc95c9fb-2vdst" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.802234 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1fe7fd1d-f173-4506-81a2-031a921210f7-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-29nr5\" (UID: \"1fe7fd1d-f173-4506-81a2-031a921210f7\") " pod="openshift-operators/observability-operator-cc5f78dfc-29nr5" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.807175 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1fe7fd1d-f173-4506-81a2-031a921210f7-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-29nr5\" (UID: \"1fe7fd1d-f173-4506-81a2-031a921210f7\") " pod="openshift-operators/observability-operator-cc5f78dfc-29nr5" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.822480 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfz5j\" (UniqueName: \"kubernetes.io/projected/1fe7fd1d-f173-4506-81a2-031a921210f7-kube-api-access-kfz5j\") pod \"observability-operator-cc5f78dfc-29nr5\" (UID: \"1fe7fd1d-f173-4506-81a2-031a921210f7\") " pod="openshift-operators/observability-operator-cc5f78dfc-29nr5" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.901018 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-29nr5" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.903815 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0151dc13-9ece-481a-9c20-158d3d50af3a-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-2vdst\" (UID: \"0151dc13-9ece-481a-9c20-158d3d50af3a\") " pod="openshift-operators/perses-operator-54bc95c9fb-2vdst" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.903862 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzpdb\" (UniqueName: \"kubernetes.io/projected/0151dc13-9ece-481a-9c20-158d3d50af3a-kube-api-access-xzpdb\") pod \"perses-operator-54bc95c9fb-2vdst\" (UID: \"0151dc13-9ece-481a-9c20-158d3d50af3a\") " pod="openshift-operators/perses-operator-54bc95c9fb-2vdst" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.904640 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0151dc13-9ece-481a-9c20-158d3d50af3a-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-2vdst\" (UID: \"0151dc13-9ece-481a-9c20-158d3d50af3a\") " pod="openshift-operators/perses-operator-54bc95c9fb-2vdst" Oct 08 08:22:50 crc kubenswrapper[4958]: I1008 08:22:50.922089 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzpdb\" (UniqueName: \"kubernetes.io/projected/0151dc13-9ece-481a-9c20-158d3d50af3a-kube-api-access-xzpdb\") pod \"perses-operator-54bc95c9fb-2vdst\" (UID: \"0151dc13-9ece-481a-9c20-158d3d50af3a\") " pod="openshift-operators/perses-operator-54bc95c9fb-2vdst" Oct 08 08:22:51 crc kubenswrapper[4958]: I1008 08:22:51.132754 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-2vdst" Oct 08 08:22:51 crc kubenswrapper[4958]: I1008 08:22:51.152695 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-f29ft"] Oct 08 08:22:51 crc kubenswrapper[4958]: I1008 08:22:51.175449 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-s98xq"] Oct 08 08:22:51 crc kubenswrapper[4958]: W1008 08:22:51.183200 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b191752_8192_4f54_9b6e_f3027b9e1104.slice/crio-09627bab6c74890126d685516161631f9a5bfe18d2d5c5cf649af954d9f2b12a WatchSource:0}: Error finding container 09627bab6c74890126d685516161631f9a5bfe18d2d5c5cf649af954d9f2b12a: Status 404 returned error can't find the container with id 09627bab6c74890126d685516161631f9a5bfe18d2d5c5cf649af954d9f2b12a Oct 08 08:22:51 crc kubenswrapper[4958]: I1008 08:22:51.398017 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-6229j"] Oct 08 08:22:51 
crc kubenswrapper[4958]: W1008 08:22:51.407806 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30fb4dfe_018c_4772_b527_7955cab889da.slice/crio-8a1ea283e4d3d8f91beaa5923b8d0ef32ab5382c9cf97abc175bc58efd8db994 WatchSource:0}: Error finding container 8a1ea283e4d3d8f91beaa5923b8d0ef32ab5382c9cf97abc175bc58efd8db994: Status 404 returned error can't find the container with id 8a1ea283e4d3d8f91beaa5923b8d0ef32ab5382c9cf97abc175bc58efd8db994 Oct 08 08:22:51 crc kubenswrapper[4958]: I1008 08:22:51.502342 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-29nr5"] Oct 08 08:22:51 crc kubenswrapper[4958]: W1008 08:22:51.503511 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fe7fd1d_f173_4506_81a2_031a921210f7.slice/crio-b0d4237f3da1e4c1759babe57b415deab7d2ddf50fada5147399738a7f512785 WatchSource:0}: Error finding container b0d4237f3da1e4c1759babe57b415deab7d2ddf50fada5147399738a7f512785: Status 404 returned error can't find the container with id b0d4237f3da1e4c1759babe57b415deab7d2ddf50fada5147399738a7f512785 Oct 08 08:22:51 crc kubenswrapper[4958]: I1008 08:22:51.587465 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e57b307b-5ae0-4e2d-8037-e935f756b3ff" path="/var/lib/kubelet/pods/e57b307b-5ae0-4e2d-8037-e935f756b3ff/volumes" Oct 08 08:22:51 crc kubenswrapper[4958]: I1008 08:22:51.659522 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-2vdst"] Oct 08 08:22:51 crc kubenswrapper[4958]: I1008 08:22:51.813111 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-f29ft" 
event={"ID":"bccd6a58-4d99-48b2-8be1-a06433166b30","Type":"ContainerStarted","Data":"e478f7f7c25504208e955d29143ed38285021c2295c429bae681c2c7c8f33e64"} Oct 08 08:22:51 crc kubenswrapper[4958]: I1008 08:22:51.814571 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-6229j" event={"ID":"30fb4dfe-018c-4772-b527-7955cab889da","Type":"ContainerStarted","Data":"8a1ea283e4d3d8f91beaa5923b8d0ef32ab5382c9cf97abc175bc58efd8db994"} Oct 08 08:22:51 crc kubenswrapper[4958]: I1008 08:22:51.816146 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-cc5f78dfc-29nr5" event={"ID":"1fe7fd1d-f173-4506-81a2-031a921210f7","Type":"ContainerStarted","Data":"b0d4237f3da1e4c1759babe57b415deab7d2ddf50fada5147399738a7f512785"} Oct 08 08:22:51 crc kubenswrapper[4958]: I1008 08:22:51.817498 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-s98xq" event={"ID":"7b191752-8192-4f54-9b6e-f3027b9e1104","Type":"ContainerStarted","Data":"09627bab6c74890126d685516161631f9a5bfe18d2d5c5cf649af954d9f2b12a"} Oct 08 08:22:51 crc kubenswrapper[4958]: I1008 08:22:51.818895 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-54bc95c9fb-2vdst" event={"ID":"0151dc13-9ece-481a-9c20-158d3d50af3a","Type":"ContainerStarted","Data":"896548aea176b420535c572944048df77653005ca06f459a16c348a629695d78"} Oct 08 08:22:59 crc kubenswrapper[4958]: I1008 08:22:59.901658 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-6229j" event={"ID":"30fb4dfe-018c-4772-b527-7955cab889da","Type":"ContainerStarted","Data":"6aaeaeee31a124cbae90d3ac7c8b2f49bf9fa2cd96a045bf9ca19ff902c6db48"} Oct 08 08:22:59 crc kubenswrapper[4958]: I1008 08:22:59.903673 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-operator-cc5f78dfc-29nr5" event={"ID":"1fe7fd1d-f173-4506-81a2-031a921210f7","Type":"ContainerStarted","Data":"31b2bcc24a97b56f13f31e659dcc6a5b1ebcb4da1a71e157f3fb91a268a00d61"} Oct 08 08:22:59 crc kubenswrapper[4958]: I1008 08:22:59.903899 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-cc5f78dfc-29nr5" Oct 08 08:22:59 crc kubenswrapper[4958]: I1008 08:22:59.906359 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-s98xq" event={"ID":"7b191752-8192-4f54-9b6e-f3027b9e1104","Type":"ContainerStarted","Data":"71339beaf99f01f8ab8cc3d862b6ef6ad3c2a2f796fa2ee7e4eb847fbe0daeed"} Oct 08 08:22:59 crc kubenswrapper[4958]: I1008 08:22:59.907775 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-54bc95c9fb-2vdst" event={"ID":"0151dc13-9ece-481a-9c20-158d3d50af3a","Type":"ContainerStarted","Data":"55fa406d6d4a72171c2f405cd885f0727c51abbe3cb64c17245134efc3cf2c5f"} Oct 08 08:22:59 crc kubenswrapper[4958]: I1008 08:22:59.908526 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-54bc95c9fb-2vdst" Oct 08 08:22:59 crc kubenswrapper[4958]: I1008 08:22:59.911500 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-f29ft" event={"ID":"bccd6a58-4d99-48b2-8be1-a06433166b30","Type":"ContainerStarted","Data":"966f456597345a5318d9f02c2493858be25ebe9c7bd166318573fd992755991e"} Oct 08 08:22:59 crc kubenswrapper[4958]: I1008 08:22:59.925738 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-6229j" podStartSLOduration=2.89827711 podStartE2EDuration="9.925723387s" podCreationTimestamp="2025-10-08 08:22:50 +0000 UTC" firstStartedPulling="2025-10-08 
08:22:51.413425814 +0000 UTC m=+6514.543118415" lastFinishedPulling="2025-10-08 08:22:58.440872091 +0000 UTC m=+6521.570564692" observedRunningTime="2025-10-08 08:22:59.923738264 +0000 UTC m=+6523.053430865" watchObservedRunningTime="2025-10-08 08:22:59.925723387 +0000 UTC m=+6523.055415988" Oct 08 08:22:59 crc kubenswrapper[4958]: I1008 08:22:59.932341 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-cc5f78dfc-29nr5" Oct 08 08:22:59 crc kubenswrapper[4958]: I1008 08:22:59.967512 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b988d98c-s98xq" podStartSLOduration=2.878243098 podStartE2EDuration="9.967484919s" podCreationTimestamp="2025-10-08 08:22:50 +0000 UTC" firstStartedPulling="2025-10-08 08:22:51.230169278 +0000 UTC m=+6514.359861879" lastFinishedPulling="2025-10-08 08:22:58.319411099 +0000 UTC m=+6521.449103700" observedRunningTime="2025-10-08 08:22:59.953408958 +0000 UTC m=+6523.083101569" watchObservedRunningTime="2025-10-08 08:22:59.967484919 +0000 UTC m=+6523.097177520" Oct 08 08:22:59 crc kubenswrapper[4958]: I1008 08:22:59.993972 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-cc5f78dfc-29nr5" podStartSLOduration=3.055694786 podStartE2EDuration="9.993936086s" podCreationTimestamp="2025-10-08 08:22:50 +0000 UTC" firstStartedPulling="2025-10-08 08:22:51.505881049 +0000 UTC m=+6514.635573650" lastFinishedPulling="2025-10-08 08:22:58.444122349 +0000 UTC m=+6521.573814950" observedRunningTime="2025-10-08 08:22:59.984890961 +0000 UTC m=+6523.114583582" watchObservedRunningTime="2025-10-08 08:22:59.993936086 +0000 UTC m=+6523.123628697" Oct 08 08:23:00 crc kubenswrapper[4958]: I1008 08:23:00.009837 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/perses-operator-54bc95c9fb-2vdst" podStartSLOduration=3.237775731 podStartE2EDuration="10.009815777s" podCreationTimestamp="2025-10-08 08:22:50 +0000 UTC" firstStartedPulling="2025-10-08 08:22:51.667648553 +0000 UTC m=+6514.797341154" lastFinishedPulling="2025-10-08 08:22:58.439688609 +0000 UTC m=+6521.569381200" observedRunningTime="2025-10-08 08:23:00.004571125 +0000 UTC m=+6523.134263746" watchObservedRunningTime="2025-10-08 08:23:00.009815777 +0000 UTC m=+6523.139508368" Oct 08 08:23:00 crc kubenswrapper[4958]: I1008 08:23:00.053980 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-f29ft" podStartSLOduration=2.8140980190000002 podStartE2EDuration="10.053941082s" podCreationTimestamp="2025-10-08 08:22:50 +0000 UTC" firstStartedPulling="2025-10-08 08:22:51.18707845 +0000 UTC m=+6514.316771051" lastFinishedPulling="2025-10-08 08:22:58.426921513 +0000 UTC m=+6521.556614114" observedRunningTime="2025-10-08 08:23:00.024258998 +0000 UTC m=+6523.153951599" watchObservedRunningTime="2025-10-08 08:23:00.053941082 +0000 UTC m=+6523.183633683" Oct 08 08:23:06 crc kubenswrapper[4958]: I1008 08:23:06.844573 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:23:06 crc kubenswrapper[4958]: I1008 08:23:06.845169 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:23:06 crc kubenswrapper[4958]: I1008 08:23:06.845217 4958 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 08:23:06 crc kubenswrapper[4958]: I1008 08:23:06.847557 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"871b2ddb043bf24b6fc601ecc25cc84c3454c68175ee8a91d681a6cd3aa935d5"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 08:23:06 crc kubenswrapper[4958]: I1008 08:23:06.847672 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://871b2ddb043bf24b6fc601ecc25cc84c3454c68175ee8a91d681a6cd3aa935d5" gracePeriod=600 Oct 08 08:23:07 crc kubenswrapper[4958]: I1008 08:23:07.982373 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="871b2ddb043bf24b6fc601ecc25cc84c3454c68175ee8a91d681a6cd3aa935d5" exitCode=0 Oct 08 08:23:07 crc kubenswrapper[4958]: I1008 08:23:07.982448 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"871b2ddb043bf24b6fc601ecc25cc84c3454c68175ee8a91d681a6cd3aa935d5"} Oct 08 08:23:07 crc kubenswrapper[4958]: I1008 08:23:07.982697 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342"} Oct 08 08:23:07 crc kubenswrapper[4958]: I1008 08:23:07.982720 4958 scope.go:117] "RemoveContainer" 
containerID="903e2902330216f463cd5ce5135a1d1e1c107ec5486aef97315b16c675309b3f" Oct 08 08:23:11 crc kubenswrapper[4958]: I1008 08:23:11.136330 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-54bc95c9fb-2vdst" Oct 08 08:23:13 crc kubenswrapper[4958]: I1008 08:23:13.898484 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 08 08:23:13 crc kubenswrapper[4958]: I1008 08:23:13.899480 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="0035371c-8689-4d83-9b95-5869915a2b4f" containerName="openstackclient" containerID="cri-o://c9f69f8a58c1425b7d5e8c4d4f3dbab2ca368ae53a13ac0e68a9204904f95eb7" gracePeriod=2 Oct 08 08:23:13 crc kubenswrapper[4958]: I1008 08:23:13.913567 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 08 08:23:13 crc kubenswrapper[4958]: I1008 08:23:13.944453 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 08 08:23:13 crc kubenswrapper[4958]: E1008 08:23:13.944826 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0035371c-8689-4d83-9b95-5869915a2b4f" containerName="openstackclient" Oct 08 08:23:13 crc kubenswrapper[4958]: I1008 08:23:13.944838 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0035371c-8689-4d83-9b95-5869915a2b4f" containerName="openstackclient" Oct 08 08:23:13 crc kubenswrapper[4958]: I1008 08:23:13.945068 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0035371c-8689-4d83-9b95-5869915a2b4f" containerName="openstackclient" Oct 08 08:23:13 crc kubenswrapper[4958]: I1008 08:23:13.945705 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 08 08:23:13 crc kubenswrapper[4958]: I1008 08:23:13.954576 4958 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0035371c-8689-4d83-9b95-5869915a2b4f" podUID="a9ea5644-6f04-4507-bae4-47bcafe7a130" Oct 08 08:23:13 crc kubenswrapper[4958]: I1008 08:23:13.974649 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.014019 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 08 08:23:14 crc kubenswrapper[4958]: E1008 08:23:14.014626 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-2p6s6 openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[combined-ca-bundle kube-api-access-2p6s6 openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="a9ea5644-6f04-4507-bae4-47bcafe7a130" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.029562 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.041317 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.042576 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.065660 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.073657 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.081282 4958 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="a9ea5644-6f04-4507-bae4-47bcafe7a130" podUID="472ac269-e4e6-4378-95e8-a148798a3c5c" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.099157 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.111140 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a9ea5644-6f04-4507-bae4-47bcafe7a130-openstack-config\") pod \"openstackclient\" (UID: \"a9ea5644-6f04-4507-bae4-47bcafe7a130\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.111234 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p6s6\" (UniqueName: \"kubernetes.io/projected/a9ea5644-6f04-4507-bae4-47bcafe7a130-kube-api-access-2p6s6\") pod \"openstackclient\" (UID: \"a9ea5644-6f04-4507-bae4-47bcafe7a130\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.111262 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a9ea5644-6f04-4507-bae4-47bcafe7a130-openstack-config-secret\") pod \"openstackclient\" (UID: \"a9ea5644-6f04-4507-bae4-47bcafe7a130\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.111284 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9ea5644-6f04-4507-bae4-47bcafe7a130-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"a9ea5644-6f04-4507-bae4-47bcafe7a130\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.162371 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.163832 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.171682 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-9pl24" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.174939 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.214845 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/472ac269-e4e6-4378-95e8-a148798a3c5c-openstack-config\") pod \"openstackclient\" (UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.214899 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472ac269-e4e6-4378-95e8-a148798a3c5c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.214947 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/472ac269-e4e6-4378-95e8-a148798a3c5c-openstack-config-secret\") pod \"openstackclient\" (UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.215200 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tb97\" (UniqueName: \"kubernetes.io/projected/472ac269-e4e6-4378-95e8-a148798a3c5c-kube-api-access-7tb97\") pod \"openstackclient\" (UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.215527 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p6s6\" (UniqueName: \"kubernetes.io/projected/a9ea5644-6f04-4507-bae4-47bcafe7a130-kube-api-access-2p6s6\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.215545 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a9ea5644-6f04-4507-bae4-47bcafe7a130-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.215555 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a9ea5644-6f04-4507-bae4-47bcafe7a130-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.215565 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9ea5644-6f04-4507-bae4-47bcafe7a130-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.316766 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/472ac269-e4e6-4378-95e8-a148798a3c5c-openstack-config\") pod \"openstackclient\" (UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.316815 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/472ac269-e4e6-4378-95e8-a148798a3c5c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.316858 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/472ac269-e4e6-4378-95e8-a148798a3c5c-openstack-config-secret\") pod \"openstackclient\" (UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.316911 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tb97\" (UniqueName: \"kubernetes.io/projected/472ac269-e4e6-4378-95e8-a148798a3c5c-kube-api-access-7tb97\") pod \"openstackclient\" (UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.317072 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs859\" (UniqueName: \"kubernetes.io/projected/e3272846-1a11-47c4-a1fd-f297c54dc462-kube-api-access-cs859\") pod \"kube-state-metrics-0\" (UID: \"e3272846-1a11-47c4-a1fd-f297c54dc462\") " pod="openstack/kube-state-metrics-0" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.317789 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/472ac269-e4e6-4378-95e8-a148798a3c5c-openstack-config\") pod \"openstackclient\" (UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.336471 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/472ac269-e4e6-4378-95e8-a148798a3c5c-openstack-config-secret\") pod \"openstackclient\" 
(UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.340917 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472ac269-e4e6-4378-95e8-a148798a3c5c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.360819 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tb97\" (UniqueName: \"kubernetes.io/projected/472ac269-e4e6-4378-95e8-a148798a3c5c-kube-api-access-7tb97\") pod \"openstackclient\" (UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.369394 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.420346 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs859\" (UniqueName: \"kubernetes.io/projected/e3272846-1a11-47c4-a1fd-f297c54dc462-kube-api-access-cs859\") pod \"kube-state-metrics-0\" (UID: \"e3272846-1a11-47c4-a1fd-f297c54dc462\") " pod="openstack/kube-state-metrics-0" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.469393 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs859\" (UniqueName: \"kubernetes.io/projected/e3272846-1a11-47c4-a1fd-f297c54dc462-kube-api-access-cs859\") pod \"kube-state-metrics-0\" (UID: \"e3272846-1a11-47c4-a1fd-f297c54dc462\") " pod="openstack/kube-state-metrics-0" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.491076 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.788827 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.791214 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.797629 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.797659 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-6tmt9" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.797849 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.797992 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.826320 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.940391 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0debc9bd-75fc-4150-aece-ac82305e6847-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.940804 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0debc9bd-75fc-4150-aece-ac82305e6847-config-volume\") pod 
\"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.940894 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0debc9bd-75fc-4150-aece-ac82305e6847-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.940954 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0debc9bd-75fc-4150-aece-ac82305e6847-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.941017 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0debc9bd-75fc-4150-aece-ac82305e6847-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:14 crc kubenswrapper[4958]: I1008 08:23:14.941036 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7brqp\" (UniqueName: \"kubernetes.io/projected/0debc9bd-75fc-4150-aece-ac82305e6847-kube-api-access-7brqp\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.044290 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/0debc9bd-75fc-4150-aece-ac82305e6847-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.044346 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0debc9bd-75fc-4150-aece-ac82305e6847-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.044374 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0debc9bd-75fc-4150-aece-ac82305e6847-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.044413 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0debc9bd-75fc-4150-aece-ac82305e6847-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.044430 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7brqp\" (UniqueName: \"kubernetes.io/projected/0debc9bd-75fc-4150-aece-ac82305e6847-kube-api-access-7brqp\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.044472 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/0debc9bd-75fc-4150-aece-ac82305e6847-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.045170 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0debc9bd-75fc-4150-aece-ac82305e6847-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.054681 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0debc9bd-75fc-4150-aece-ac82305e6847-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.055852 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0debc9bd-75fc-4150-aece-ac82305e6847-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.068684 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0debc9bd-75fc-4150-aece-ac82305e6847-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.070373 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0debc9bd-75fc-4150-aece-ac82305e6847-config-volume\") pod 
\"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.080660 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7brqp\" (UniqueName: \"kubernetes.io/projected/0debc9bd-75fc-4150-aece-ac82305e6847-kube-api-access-7brqp\") pod \"alertmanager-metric-storage-0\" (UID: \"0debc9bd-75fc-4150-aece-ac82305e6847\") " pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.093300 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.102515 4958 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="a9ea5644-6f04-4507-bae4-47bcafe7a130" podUID="472ac269-e4e6-4378-95e8-a148798a3c5c" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.140542 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.255956 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 08 08:23:15 crc kubenswrapper[4958]: W1008 08:23:15.271494 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod472ac269_e4e6_4378_95e8_a148798a3c5c.slice/crio-bdc95de735fd8a453b7101956f36a6aff10a3e6e6f12a9de1d4ee971b97d4692 WatchSource:0}: Error finding container bdc95de735fd8a453b7101956f36a6aff10a3e6e6f12a9de1d4ee971b97d4692: Status 404 returned error can't find the container with id bdc95de735fd8a453b7101956f36a6aff10a3e6e6f12a9de1d4ee971b97d4692 Oct 08 08:23:15 crc kubenswrapper[4958]: W1008 08:23:15.472509 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3272846_1a11_47c4_a1fd_f297c54dc462.slice/crio-d684424901e0d0043937ab6ae816dec544e0bf0b205815d366296e41a4b91fb3 WatchSource:0}: Error finding container d684424901e0d0043937ab6ae816dec544e0bf0b205815d366296e41a4b91fb3: Status 404 returned error can't find the container with id d684424901e0d0043937ab6ae816dec544e0bf0b205815d366296e41a4b91fb3 Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.537570 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.554303 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.558484 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.563521 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.567326 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.567498 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.563851 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-n4cxm" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.567751 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.567948 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.572013 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.656593 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9ea5644-6f04-4507-bae4-47bcafe7a130" path="/var/lib/kubelet/pods/a9ea5644-6f04-4507-bae4-47bcafe7a130/volumes" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.656976 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.685372 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.685458 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scjrl\" (UniqueName: \"kubernetes.io/projected/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-kube-api-access-scjrl\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.685651 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.685771 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.685838 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 
08:23:15.685891 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.686050 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-config\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.686082 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.788472 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.788726 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.788824 
4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.788949 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.789853 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-config\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.789963 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.790066 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.790150 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scjrl\" (UniqueName: \"kubernetes.io/projected/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-kube-api-access-scjrl\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.803661 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.805011 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.806728 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.824865 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.832390 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-config\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.834957 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scjrl\" (UniqueName: \"kubernetes.io/projected/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-kube-api-access-scjrl\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.848321 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.859121 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 08 08:23:15 crc kubenswrapper[4958]: I1008 08:23:15.859159 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a7719de0310df70b681df3e88136895f5f523f2bc10bdd0cb101467cb5a01da0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.013909 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\") pod \"prometheus-metric-storage-0\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.115292 4958 generic.go:334] "Generic (PLEG): container finished" podID="0035371c-8689-4d83-9b95-5869915a2b4f" containerID="c9f69f8a58c1425b7d5e8c4d4f3dbab2ca368ae53a13ac0e68a9204904f95eb7" exitCode=137 Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.133064 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e3272846-1a11-47c4-a1fd-f297c54dc462","Type":"ContainerStarted","Data":"d684424901e0d0043937ab6ae816dec544e0bf0b205815d366296e41a4b91fb3"} Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.138571 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0debc9bd-75fc-4150-aece-ac82305e6847","Type":"ContainerStarted","Data":"10fe6c27949f86380af26eda3604faa08f5ad08f754bbf345769976798e024a9"} Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.141785 4958 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"472ac269-e4e6-4378-95e8-a148798a3c5c","Type":"ContainerStarted","Data":"bdc95de735fd8a453b7101956f36a6aff10a3e6e6f12a9de1d4ee971b97d4692"} Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.211294 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.302168 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.441250 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0035371c-8689-4d83-9b95-5869915a2b4f-openstack-config\") pod \"0035371c-8689-4d83-9b95-5869915a2b4f\" (UID: \"0035371c-8689-4d83-9b95-5869915a2b4f\") " Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.441660 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0035371c-8689-4d83-9b95-5869915a2b4f-combined-ca-bundle\") pod \"0035371c-8689-4d83-9b95-5869915a2b4f\" (UID: \"0035371c-8689-4d83-9b95-5869915a2b4f\") " Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.441717 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8rb6\" (UniqueName: \"kubernetes.io/projected/0035371c-8689-4d83-9b95-5869915a2b4f-kube-api-access-m8rb6\") pod \"0035371c-8689-4d83-9b95-5869915a2b4f\" (UID: \"0035371c-8689-4d83-9b95-5869915a2b4f\") " Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.441822 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0035371c-8689-4d83-9b95-5869915a2b4f-openstack-config-secret\") pod \"0035371c-8689-4d83-9b95-5869915a2b4f\" (UID: 
\"0035371c-8689-4d83-9b95-5869915a2b4f\") " Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.453240 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0035371c-8689-4d83-9b95-5869915a2b4f-kube-api-access-m8rb6" (OuterVolumeSpecName: "kube-api-access-m8rb6") pod "0035371c-8689-4d83-9b95-5869915a2b4f" (UID: "0035371c-8689-4d83-9b95-5869915a2b4f"). InnerVolumeSpecName "kube-api-access-m8rb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.472431 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0035371c-8689-4d83-9b95-5869915a2b4f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0035371c-8689-4d83-9b95-5869915a2b4f" (UID: "0035371c-8689-4d83-9b95-5869915a2b4f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.503836 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0035371c-8689-4d83-9b95-5869915a2b4f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0035371c-8689-4d83-9b95-5869915a2b4f" (UID: "0035371c-8689-4d83-9b95-5869915a2b4f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.534567 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0035371c-8689-4d83-9b95-5869915a2b4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0035371c-8689-4d83-9b95-5869915a2b4f" (UID: "0035371c-8689-4d83-9b95-5869915a2b4f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.543744 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0035371c-8689-4d83-9b95-5869915a2b4f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.543767 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0035371c-8689-4d83-9b95-5869915a2b4f-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.543777 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0035371c-8689-4d83-9b95-5869915a2b4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.543788 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8rb6\" (UniqueName: \"kubernetes.io/projected/0035371c-8689-4d83-9b95-5869915a2b4f-kube-api-access-m8rb6\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:16 crc kubenswrapper[4958]: I1008 08:23:16.786403 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 08 08:23:17 crc kubenswrapper[4958]: I1008 08:23:17.155682 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a00e9acc-c16c-4ed4-8476-b88a9b0a8211","Type":"ContainerStarted","Data":"eff556a043801ad412cad2ba8a2f3877437ee8c74978dc18f76dae4004a10858"} Oct 08 08:23:17 crc kubenswrapper[4958]: I1008 08:23:17.157989 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"472ac269-e4e6-4378-95e8-a148798a3c5c","Type":"ContainerStarted","Data":"914e822c17b2b1e40dcfe8f3fc13b4f753db32389d03475a3a80d09ccfffd5ab"} Oct 08 08:23:17 crc kubenswrapper[4958]: I1008 
08:23:17.160360 4958 scope.go:117] "RemoveContainer" containerID="c9f69f8a58c1425b7d5e8c4d4f3dbab2ca368ae53a13ac0e68a9204904f95eb7" Oct 08 08:23:17 crc kubenswrapper[4958]: I1008 08:23:17.160518 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 08 08:23:17 crc kubenswrapper[4958]: I1008 08:23:17.166764 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e3272846-1a11-47c4-a1fd-f297c54dc462","Type":"ContainerStarted","Data":"a4816bf1a3bc1dfaf0c68007f2b388ddef016ac378854e6b5ec456c1867d36ee"} Oct 08 08:23:17 crc kubenswrapper[4958]: I1008 08:23:17.166940 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 08 08:23:17 crc kubenswrapper[4958]: I1008 08:23:17.177025 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.177002036 podStartE2EDuration="3.177002036s" podCreationTimestamp="2025-10-08 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:23:17.171662341 +0000 UTC m=+6540.301354952" watchObservedRunningTime="2025-10-08 08:23:17.177002036 +0000 UTC m=+6540.306694637" Oct 08 08:23:17 crc kubenswrapper[4958]: I1008 08:23:17.193371 4958 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0035371c-8689-4d83-9b95-5869915a2b4f" podUID="472ac269-e4e6-4378-95e8-a148798a3c5c" Oct 08 08:23:17 crc kubenswrapper[4958]: I1008 08:23:17.202784 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.771135819 podStartE2EDuration="3.202758625s" podCreationTimestamp="2025-10-08 08:23:14 +0000 UTC" firstStartedPulling="2025-10-08 08:23:15.524152872 +0000 UTC m=+6538.653845473" 
lastFinishedPulling="2025-10-08 08:23:15.955775678 +0000 UTC m=+6539.085468279" observedRunningTime="2025-10-08 08:23:17.187041979 +0000 UTC m=+6540.316734600" watchObservedRunningTime="2025-10-08 08:23:17.202758625 +0000 UTC m=+6540.332451226" Oct 08 08:23:17 crc kubenswrapper[4958]: I1008 08:23:17.591587 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0035371c-8689-4d83-9b95-5869915a2b4f" path="/var/lib/kubelet/pods/0035371c-8689-4d83-9b95-5869915a2b4f/volumes" Oct 08 08:23:22 crc kubenswrapper[4958]: I1008 08:23:22.170879 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hpv5v"] Oct 08 08:23:22 crc kubenswrapper[4958]: I1008 08:23:22.174104 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:22 crc kubenswrapper[4958]: I1008 08:23:22.190798 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpv5v"] Oct 08 08:23:22 crc kubenswrapper[4958]: I1008 08:23:22.246370 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a00e9acc-c16c-4ed4-8476-b88a9b0a8211","Type":"ContainerStarted","Data":"7b996e1ccc261faab9ebeff236985517a12206587c98483d40eb53712393baac"} Oct 08 08:23:22 crc kubenswrapper[4958]: I1008 08:23:22.249218 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0debc9bd-75fc-4150-aece-ac82305e6847","Type":"ContainerStarted","Data":"3aff0f112cbbf2f9ea2ed73dafb6b1bfe18c19589714b72ba4503e78da1c3a03"} Oct 08 08:23:22 crc kubenswrapper[4958]: I1008 08:23:22.294169 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea4a8712-a9fe-4bb2-aa97-653d2e031936-catalog-content\") pod \"redhat-marketplace-hpv5v\" (UID: 
\"ea4a8712-a9fe-4bb2-aa97-653d2e031936\") " pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:22 crc kubenswrapper[4958]: I1008 08:23:22.294648 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea4a8712-a9fe-4bb2-aa97-653d2e031936-utilities\") pod \"redhat-marketplace-hpv5v\" (UID: \"ea4a8712-a9fe-4bb2-aa97-653d2e031936\") " pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:22 crc kubenswrapper[4958]: I1008 08:23:22.294795 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x49k\" (UniqueName: \"kubernetes.io/projected/ea4a8712-a9fe-4bb2-aa97-653d2e031936-kube-api-access-8x49k\") pod \"redhat-marketplace-hpv5v\" (UID: \"ea4a8712-a9fe-4bb2-aa97-653d2e031936\") " pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:22 crc kubenswrapper[4958]: I1008 08:23:22.396585 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea4a8712-a9fe-4bb2-aa97-653d2e031936-catalog-content\") pod \"redhat-marketplace-hpv5v\" (UID: \"ea4a8712-a9fe-4bb2-aa97-653d2e031936\") " pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:22 crc kubenswrapper[4958]: I1008 08:23:22.397297 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea4a8712-a9fe-4bb2-aa97-653d2e031936-utilities\") pod \"redhat-marketplace-hpv5v\" (UID: \"ea4a8712-a9fe-4bb2-aa97-653d2e031936\") " pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:22 crc kubenswrapper[4958]: I1008 08:23:22.397472 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x49k\" (UniqueName: \"kubernetes.io/projected/ea4a8712-a9fe-4bb2-aa97-653d2e031936-kube-api-access-8x49k\") pod 
\"redhat-marketplace-hpv5v\" (UID: \"ea4a8712-a9fe-4bb2-aa97-653d2e031936\") " pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:22 crc kubenswrapper[4958]: I1008 08:23:22.397674 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea4a8712-a9fe-4bb2-aa97-653d2e031936-utilities\") pod \"redhat-marketplace-hpv5v\" (UID: \"ea4a8712-a9fe-4bb2-aa97-653d2e031936\") " pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:22 crc kubenswrapper[4958]: I1008 08:23:22.397875 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea4a8712-a9fe-4bb2-aa97-653d2e031936-catalog-content\") pod \"redhat-marketplace-hpv5v\" (UID: \"ea4a8712-a9fe-4bb2-aa97-653d2e031936\") " pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:22 crc kubenswrapper[4958]: I1008 08:23:22.426599 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x49k\" (UniqueName: \"kubernetes.io/projected/ea4a8712-a9fe-4bb2-aa97-653d2e031936-kube-api-access-8x49k\") pod \"redhat-marketplace-hpv5v\" (UID: \"ea4a8712-a9fe-4bb2-aa97-653d2e031936\") " pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:22 crc kubenswrapper[4958]: I1008 08:23:22.550015 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:23 crc kubenswrapper[4958]: I1008 08:23:23.062230 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpv5v"] Oct 08 08:23:23 crc kubenswrapper[4958]: I1008 08:23:23.260357 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpv5v" event={"ID":"ea4a8712-a9fe-4bb2-aa97-653d2e031936","Type":"ContainerStarted","Data":"3b2dbe816ebd461d543b993994ee6db89229c4f561ccfd927c3edd066caaaae8"} Oct 08 08:23:24 crc kubenswrapper[4958]: I1008 08:23:24.274349 4958 generic.go:334] "Generic (PLEG): container finished" podID="ea4a8712-a9fe-4bb2-aa97-653d2e031936" containerID="ebbb825eb6afabcc683d29d755066e56db096d6feb4bf4219a825181fe3b77f3" exitCode=0 Oct 08 08:23:24 crc kubenswrapper[4958]: I1008 08:23:24.274463 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpv5v" event={"ID":"ea4a8712-a9fe-4bb2-aa97-653d2e031936","Type":"ContainerDied","Data":"ebbb825eb6afabcc683d29d755066e56db096d6feb4bf4219a825181fe3b77f3"} Oct 08 08:23:24 crc kubenswrapper[4958]: I1008 08:23:24.500598 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 08 08:23:26 crc kubenswrapper[4958]: I1008 08:23:26.301525 4958 generic.go:334] "Generic (PLEG): container finished" podID="ea4a8712-a9fe-4bb2-aa97-653d2e031936" containerID="659f72fed4e16610faef41dc30364da6f3182ecd51c7e8ae5cf4ee65c8ffd7ec" exitCode=0 Oct 08 08:23:26 crc kubenswrapper[4958]: I1008 08:23:26.301643 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpv5v" event={"ID":"ea4a8712-a9fe-4bb2-aa97-653d2e031936","Type":"ContainerDied","Data":"659f72fed4e16610faef41dc30364da6f3182ecd51c7e8ae5cf4ee65c8ffd7ec"} Oct 08 08:23:27 crc kubenswrapper[4958]: I1008 08:23:27.326643 4958 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpv5v" event={"ID":"ea4a8712-a9fe-4bb2-aa97-653d2e031936","Type":"ContainerStarted","Data":"e37c44496e711b1ecb209b36f5de5bf1f41061146196ef8b5e695a4810988727"} Oct 08 08:23:27 crc kubenswrapper[4958]: I1008 08:23:27.351659 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hpv5v" podStartSLOduration=2.912916337 podStartE2EDuration="5.351639003s" podCreationTimestamp="2025-10-08 08:23:22 +0000 UTC" firstStartedPulling="2025-10-08 08:23:24.277189111 +0000 UTC m=+6547.406881752" lastFinishedPulling="2025-10-08 08:23:26.715911777 +0000 UTC m=+6549.845604418" observedRunningTime="2025-10-08 08:23:27.350345698 +0000 UTC m=+6550.480038339" watchObservedRunningTime="2025-10-08 08:23:27.351639003 +0000 UTC m=+6550.481331614" Oct 08 08:23:30 crc kubenswrapper[4958]: I1008 08:23:30.393707 4958 generic.go:334] "Generic (PLEG): container finished" podID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerID="7b996e1ccc261faab9ebeff236985517a12206587c98483d40eb53712393baac" exitCode=0 Oct 08 08:23:30 crc kubenswrapper[4958]: I1008 08:23:30.394150 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a00e9acc-c16c-4ed4-8476-b88a9b0a8211","Type":"ContainerDied","Data":"7b996e1ccc261faab9ebeff236985517a12206587c98483d40eb53712393baac"} Oct 08 08:23:31 crc kubenswrapper[4958]: I1008 08:23:31.410773 4958 generic.go:334] "Generic (PLEG): container finished" podID="0debc9bd-75fc-4150-aece-ac82305e6847" containerID="3aff0f112cbbf2f9ea2ed73dafb6b1bfe18c19589714b72ba4503e78da1c3a03" exitCode=0 Oct 08 08:23:31 crc kubenswrapper[4958]: I1008 08:23:31.410834 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0debc9bd-75fc-4150-aece-ac82305e6847","Type":"ContainerDied","Data":"3aff0f112cbbf2f9ea2ed73dafb6b1bfe18c19589714b72ba4503e78da1c3a03"} 
Oct 08 08:23:32 crc kubenswrapper[4958]: I1008 08:23:32.550501 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:32 crc kubenswrapper[4958]: I1008 08:23:32.551699 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:32 crc kubenswrapper[4958]: I1008 08:23:32.610868 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:33 crc kubenswrapper[4958]: I1008 08:23:33.517546 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:33 crc kubenswrapper[4958]: I1008 08:23:33.567742 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpv5v"] Oct 08 08:23:35 crc kubenswrapper[4958]: I1008 08:23:35.454395 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hpv5v" podUID="ea4a8712-a9fe-4bb2-aa97-653d2e031936" containerName="registry-server" containerID="cri-o://e37c44496e711b1ecb209b36f5de5bf1f41061146196ef8b5e695a4810988727" gracePeriod=2 Oct 08 08:23:36 crc kubenswrapper[4958]: I1008 08:23:36.470073 4958 generic.go:334] "Generic (PLEG): container finished" podID="ea4a8712-a9fe-4bb2-aa97-653d2e031936" containerID="e37c44496e711b1ecb209b36f5de5bf1f41061146196ef8b5e695a4810988727" exitCode=0 Oct 08 08:23:36 crc kubenswrapper[4958]: I1008 08:23:36.470133 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpv5v" event={"ID":"ea4a8712-a9fe-4bb2-aa97-653d2e031936","Type":"ContainerDied","Data":"e37c44496e711b1ecb209b36f5de5bf1f41061146196ef8b5e695a4810988727"} Oct 08 08:23:36 crc kubenswrapper[4958]: I1008 08:23:36.646203 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:36 crc kubenswrapper[4958]: I1008 08:23:36.775603 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x49k\" (UniqueName: \"kubernetes.io/projected/ea4a8712-a9fe-4bb2-aa97-653d2e031936-kube-api-access-8x49k\") pod \"ea4a8712-a9fe-4bb2-aa97-653d2e031936\" (UID: \"ea4a8712-a9fe-4bb2-aa97-653d2e031936\") " Oct 08 08:23:36 crc kubenswrapper[4958]: I1008 08:23:36.776188 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea4a8712-a9fe-4bb2-aa97-653d2e031936-catalog-content\") pod \"ea4a8712-a9fe-4bb2-aa97-653d2e031936\" (UID: \"ea4a8712-a9fe-4bb2-aa97-653d2e031936\") " Oct 08 08:23:36 crc kubenswrapper[4958]: I1008 08:23:36.776447 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea4a8712-a9fe-4bb2-aa97-653d2e031936-utilities\") pod \"ea4a8712-a9fe-4bb2-aa97-653d2e031936\" (UID: \"ea4a8712-a9fe-4bb2-aa97-653d2e031936\") " Oct 08 08:23:36 crc kubenswrapper[4958]: I1008 08:23:36.777268 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea4a8712-a9fe-4bb2-aa97-653d2e031936-utilities" (OuterVolumeSpecName: "utilities") pod "ea4a8712-a9fe-4bb2-aa97-653d2e031936" (UID: "ea4a8712-a9fe-4bb2-aa97-653d2e031936"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:23:36 crc kubenswrapper[4958]: I1008 08:23:36.784151 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea4a8712-a9fe-4bb2-aa97-653d2e031936-kube-api-access-8x49k" (OuterVolumeSpecName: "kube-api-access-8x49k") pod "ea4a8712-a9fe-4bb2-aa97-653d2e031936" (UID: "ea4a8712-a9fe-4bb2-aa97-653d2e031936"). InnerVolumeSpecName "kube-api-access-8x49k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:23:36 crc kubenswrapper[4958]: I1008 08:23:36.788185 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea4a8712-a9fe-4bb2-aa97-653d2e031936-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea4a8712-a9fe-4bb2-aa97-653d2e031936" (UID: "ea4a8712-a9fe-4bb2-aa97-653d2e031936"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:23:36 crc kubenswrapper[4958]: I1008 08:23:36.879667 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea4a8712-a9fe-4bb2-aa97-653d2e031936-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:36 crc kubenswrapper[4958]: I1008 08:23:36.879701 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x49k\" (UniqueName: \"kubernetes.io/projected/ea4a8712-a9fe-4bb2-aa97-653d2e031936-kube-api-access-8x49k\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:36 crc kubenswrapper[4958]: I1008 08:23:36.879714 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea4a8712-a9fe-4bb2-aa97-653d2e031936-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:37 crc kubenswrapper[4958]: I1008 08:23:37.488587 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0debc9bd-75fc-4150-aece-ac82305e6847","Type":"ContainerStarted","Data":"2ec67fb454df7c4051e14183e29f1eed4c9930a76be9a1f18aeba7efdb92d1a7"} Oct 08 08:23:37 crc kubenswrapper[4958]: I1008 08:23:37.494114 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a00e9acc-c16c-4ed4-8476-b88a9b0a8211","Type":"ContainerStarted","Data":"45603284eeafec67bbc94e36e195024a87af705a1404c18fb43a914656bf6e94"} Oct 08 08:23:37 crc kubenswrapper[4958]: I1008 
08:23:37.500390 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpv5v" event={"ID":"ea4a8712-a9fe-4bb2-aa97-653d2e031936","Type":"ContainerDied","Data":"3b2dbe816ebd461d543b993994ee6db89229c4f561ccfd927c3edd066caaaae8"} Oct 08 08:23:37 crc kubenswrapper[4958]: I1008 08:23:37.500677 4958 scope.go:117] "RemoveContainer" containerID="e37c44496e711b1ecb209b36f5de5bf1f41061146196ef8b5e695a4810988727" Oct 08 08:23:37 crc kubenswrapper[4958]: I1008 08:23:37.500485 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpv5v" Oct 08 08:23:37 crc kubenswrapper[4958]: I1008 08:23:37.538506 4958 scope.go:117] "RemoveContainer" containerID="659f72fed4e16610faef41dc30364da6f3182ecd51c7e8ae5cf4ee65c8ffd7ec" Oct 08 08:23:37 crc kubenswrapper[4958]: I1008 08:23:37.568431 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpv5v"] Oct 08 08:23:37 crc kubenswrapper[4958]: I1008 08:23:37.605826 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpv5v"] Oct 08 08:23:37 crc kubenswrapper[4958]: I1008 08:23:37.612670 4958 scope.go:117] "RemoveContainer" containerID="ebbb825eb6afabcc683d29d755066e56db096d6feb4bf4219a825181fe3b77f3" Oct 08 08:23:39 crc kubenswrapper[4958]: I1008 08:23:39.589397 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea4a8712-a9fe-4bb2-aa97-653d2e031936" path="/var/lib/kubelet/pods/ea4a8712-a9fe-4bb2-aa97-653d2e031936/volumes" Oct 08 08:23:41 crc kubenswrapper[4958]: I1008 08:23:41.558394 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0debc9bd-75fc-4150-aece-ac82305e6847","Type":"ContainerStarted","Data":"1bb7f0a13b74d1b8aff849807037897e0bb288768b79acee833a285968e22514"} Oct 08 08:23:41 crc kubenswrapper[4958]: I1008 08:23:41.559857 4958 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:41 crc kubenswrapper[4958]: I1008 08:23:41.564987 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Oct 08 08:23:41 crc kubenswrapper[4958]: I1008 08:23:41.672070 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=7.039321293 podStartE2EDuration="27.672044351s" podCreationTimestamp="2025-10-08 08:23:14 +0000 UTC" firstStartedPulling="2025-10-08 08:23:15.689318775 +0000 UTC m=+6538.819011366" lastFinishedPulling="2025-10-08 08:23:36.322041813 +0000 UTC m=+6559.451734424" observedRunningTime="2025-10-08 08:23:41.601014973 +0000 UTC m=+6564.730707594" watchObservedRunningTime="2025-10-08 08:23:41.672044351 +0000 UTC m=+6564.801736962" Oct 08 08:23:42 crc kubenswrapper[4958]: I1008 08:23:42.573206 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a00e9acc-c16c-4ed4-8476-b88a9b0a8211","Type":"ContainerStarted","Data":"e1c2106e9ba2a68357c832eacb7d8df624810f73ab1df4f3f1f9368b00cfbd5e"} Oct 08 08:23:44 crc kubenswrapper[4958]: I1008 08:23:44.764359 4958 scope.go:117] "RemoveContainer" containerID="1028747730d313a0217705c668187fbe2c9d27b09e9aab5adebf5664ef9a4704" Oct 08 08:23:45 crc kubenswrapper[4958]: I1008 08:23:45.247814 4958 scope.go:117] "RemoveContainer" containerID="38ef46d28e2c7bbf5126ea22d5ba102ec01cf0ad470fc3ccc2cbc0cd305de992" Oct 08 08:23:45 crc kubenswrapper[4958]: I1008 08:23:45.322419 4958 scope.go:117] "RemoveContainer" containerID="f12889f5df0988837be76c043df62bd6f119144cea7dc91b99c66f6f2f08835e" Oct 08 08:23:45 crc kubenswrapper[4958]: I1008 08:23:45.616874 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"a00e9acc-c16c-4ed4-8476-b88a9b0a8211","Type":"ContainerStarted","Data":"edabca47f655c29f88709b353efc1004501d2c55112a0268fdf02a45ab4986af"} Oct 08 08:23:45 crc kubenswrapper[4958]: I1008 08:23:45.673330 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.134220845 podStartE2EDuration="31.673302999s" podCreationTimestamp="2025-10-08 08:23:14 +0000 UTC" firstStartedPulling="2025-10-08 08:23:16.787500134 +0000 UTC m=+6539.917192735" lastFinishedPulling="2025-10-08 08:23:45.326582248 +0000 UTC m=+6568.456274889" observedRunningTime="2025-10-08 08:23:45.659498615 +0000 UTC m=+6568.789191226" watchObservedRunningTime="2025-10-08 08:23:45.673302999 +0000 UTC m=+6568.802995640" Oct 08 08:23:46 crc kubenswrapper[4958]: I1008 08:23:46.211888 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:46 crc kubenswrapper[4958]: I1008 08:23:46.211998 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:46 crc kubenswrapper[4958]: I1008 08:23:46.215826 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:46 crc kubenswrapper[4958]: I1008 08:23:46.636741 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.042353 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-sp82f"] Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.053703 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-fnsjn"] Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.063346 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-sp82f"] Oct 08 
08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.070778 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fnsjn"] Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.079167 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hvmsz"] Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.086014 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hvmsz"] Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.370609 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.370988 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="472ac269-e4e6-4378-95e8-a148798a3c5c" containerName="openstackclient" containerID="cri-o://914e822c17b2b1e40dcfe8f3fc13b4f753db32389d03475a3a80d09ccfffd5ab" gracePeriod=2 Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.382277 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.414764 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 08 08:23:48 crc kubenswrapper[4958]: E1008 08:23:48.415144 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4a8712-a9fe-4bb2-aa97-653d2e031936" containerName="registry-server" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.415163 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4a8712-a9fe-4bb2-aa97-653d2e031936" containerName="registry-server" Oct 08 08:23:48 crc kubenswrapper[4958]: E1008 08:23:48.415179 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4a8712-a9fe-4bb2-aa97-653d2e031936" containerName="extract-utilities" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.415187 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ea4a8712-a9fe-4bb2-aa97-653d2e031936" containerName="extract-utilities" Oct 08 08:23:48 crc kubenswrapper[4958]: E1008 08:23:48.415226 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472ac269-e4e6-4378-95e8-a148798a3c5c" containerName="openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.415232 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="472ac269-e4e6-4378-95e8-a148798a3c5c" containerName="openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: E1008 08:23:48.415244 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4a8712-a9fe-4bb2-aa97-653d2e031936" containerName="extract-content" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.415250 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4a8712-a9fe-4bb2-aa97-653d2e031936" containerName="extract-content" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.415429 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4a8712-a9fe-4bb2-aa97-653d2e031936" containerName="registry-server" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.415455 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="472ac269-e4e6-4378-95e8-a148798a3c5c" containerName="openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.416115 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.462867 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.476872 4958 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="472ac269-e4e6-4378-95e8-a148798a3c5c" podUID="2f04c432-37a2-4652-b37e-3f9c58b67b89" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.505874 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f04c432-37a2-4652-b37e-3f9c58b67b89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2f04c432-37a2-4652-b37e-3f9c58b67b89\") " pod="openstack/openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.505998 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2f04c432-37a2-4652-b37e-3f9c58b67b89-openstack-config\") pod \"openstackclient\" (UID: \"2f04c432-37a2-4652-b37e-3f9c58b67b89\") " pod="openstack/openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.506032 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtjcv\" (UniqueName: \"kubernetes.io/projected/2f04c432-37a2-4652-b37e-3f9c58b67b89-kube-api-access-rtjcv\") pod \"openstackclient\" (UID: \"2f04c432-37a2-4652-b37e-3f9c58b67b89\") " pod="openstack/openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.506109 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2f04c432-37a2-4652-b37e-3f9c58b67b89-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"2f04c432-37a2-4652-b37e-3f9c58b67b89\") " pod="openstack/openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.608272 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2f04c432-37a2-4652-b37e-3f9c58b67b89-openstack-config-secret\") pod \"openstackclient\" (UID: \"2f04c432-37a2-4652-b37e-3f9c58b67b89\") " pod="openstack/openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.608389 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f04c432-37a2-4652-b37e-3f9c58b67b89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2f04c432-37a2-4652-b37e-3f9c58b67b89\") " pod="openstack/openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.608449 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2f04c432-37a2-4652-b37e-3f9c58b67b89-openstack-config\") pod \"openstackclient\" (UID: \"2f04c432-37a2-4652-b37e-3f9c58b67b89\") " pod="openstack/openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.608477 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtjcv\" (UniqueName: \"kubernetes.io/projected/2f04c432-37a2-4652-b37e-3f9c58b67b89-kube-api-access-rtjcv\") pod \"openstackclient\" (UID: \"2f04c432-37a2-4652-b37e-3f9c58b67b89\") " pod="openstack/openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.610498 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2f04c432-37a2-4652-b37e-3f9c58b67b89-openstack-config\") pod \"openstackclient\" (UID: \"2f04c432-37a2-4652-b37e-3f9c58b67b89\") " pod="openstack/openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.622884 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2f04c432-37a2-4652-b37e-3f9c58b67b89-openstack-config-secret\") pod \"openstackclient\" (UID: \"2f04c432-37a2-4652-b37e-3f9c58b67b89\") " pod="openstack/openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.623289 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f04c432-37a2-4652-b37e-3f9c58b67b89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2f04c432-37a2-4652-b37e-3f9c58b67b89\") " pod="openstack/openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.628454 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtjcv\" (UniqueName: \"kubernetes.io/projected/2f04c432-37a2-4652-b37e-3f9c58b67b89-kube-api-access-rtjcv\") pod \"openstackclient\" (UID: \"2f04c432-37a2-4652-b37e-3f9c58b67b89\") " pod="openstack/openstackclient" Oct 08 08:23:48 crc kubenswrapper[4958]: I1008 08:23:48.772412 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 08 08:23:49 crc kubenswrapper[4958]: I1008 08:23:49.308923 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 08 08:23:49 crc kubenswrapper[4958]: I1008 08:23:49.593733 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b2a2e1a-ed98-4e3d-be11-cd7b668abc72" path="/var/lib/kubelet/pods/4b2a2e1a-ed98-4e3d-be11-cd7b668abc72/volumes" Oct 08 08:23:49 crc kubenswrapper[4958]: I1008 08:23:49.595195 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e46d65ce-b975-425a-bea6-802c40beed1b" path="/var/lib/kubelet/pods/e46d65ce-b975-425a-bea6-802c40beed1b/volumes" Oct 08 08:23:49 crc kubenswrapper[4958]: I1008 08:23:49.596435 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e73ec4fb-e99a-4974-92fb-86a6339d686c" path="/var/lib/kubelet/pods/e73ec4fb-e99a-4974-92fb-86a6339d686c/volumes" Oct 08 08:23:49 crc kubenswrapper[4958]: I1008 08:23:49.676959 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2f04c432-37a2-4652-b37e-3f9c58b67b89","Type":"ContainerStarted","Data":"77346b245fd6a61791813661d611e57975e0cbda05c4fcad0596053a4a713e34"} Oct 08 08:23:49 crc kubenswrapper[4958]: I1008 08:23:49.677017 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2f04c432-37a2-4652-b37e-3f9c58b67b89","Type":"ContainerStarted","Data":"d710de7d21eb03be3676f7ee8784f0c6deadd454a1a8a48d47a04cc0a7572e2b"} Oct 08 08:23:49 crc kubenswrapper[4958]: I1008 08:23:49.694991 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.6949665120000001 podStartE2EDuration="1.694966512s" podCreationTimestamp="2025-10-08 08:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 
08:23:49.688462085 +0000 UTC m=+6572.818154686" watchObservedRunningTime="2025-10-08 08:23:49.694966512 +0000 UTC m=+6572.824659123" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.302959 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.303422 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerName="prometheus" containerID="cri-o://45603284eeafec67bbc94e36e195024a87af705a1404c18fb43a914656bf6e94" gracePeriod=600 Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.303513 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerName="thanos-sidecar" containerID="cri-o://edabca47f655c29f88709b353efc1004501d2c55112a0268fdf02a45ab4986af" gracePeriod=600 Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.303539 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerName="config-reloader" containerID="cri-o://e1c2106e9ba2a68357c832eacb7d8df624810f73ab1df4f3f1f9368b00cfbd5e" gracePeriod=600 Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.706012 4958 generic.go:334] "Generic (PLEG): container finished" podID="472ac269-e4e6-4378-95e8-a148798a3c5c" containerID="914e822c17b2b1e40dcfe8f3fc13b4f753db32389d03475a3a80d09ccfffd5ab" exitCode=137 Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.706718 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdc95de735fd8a453b7101956f36a6aff10a3e6e6f12a9de1d4ee971b97d4692" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.712702 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerID="edabca47f655c29f88709b353efc1004501d2c55112a0268fdf02a45ab4986af" exitCode=0 Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.712731 4958 generic.go:334] "Generic (PLEG): container finished" podID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerID="e1c2106e9ba2a68357c832eacb7d8df624810f73ab1df4f3f1f9368b00cfbd5e" exitCode=0 Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.712740 4958 generic.go:334] "Generic (PLEG): container finished" podID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerID="45603284eeafec67bbc94e36e195024a87af705a1404c18fb43a914656bf6e94" exitCode=0 Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.712797 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a00e9acc-c16c-4ed4-8476-b88a9b0a8211","Type":"ContainerDied","Data":"edabca47f655c29f88709b353efc1004501d2c55112a0268fdf02a45ab4986af"} Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.712846 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a00e9acc-c16c-4ed4-8476-b88a9b0a8211","Type":"ContainerDied","Data":"e1c2106e9ba2a68357c832eacb7d8df624810f73ab1df4f3f1f9368b00cfbd5e"} Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.712884 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a00e9acc-c16c-4ed4-8476-b88a9b0a8211","Type":"ContainerDied","Data":"45603284eeafec67bbc94e36e195024a87af705a1404c18fb43a914656bf6e94"} Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.764417 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.764771 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.772337 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.777461 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.777627 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.782025 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.862074 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tb97\" (UniqueName: \"kubernetes.io/projected/472ac269-e4e6-4378-95e8-a148798a3c5c-kube-api-access-7tb97\") pod \"472ac269-e4e6-4378-95e8-a148798a3c5c\" (UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.862190 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472ac269-e4e6-4378-95e8-a148798a3c5c-combined-ca-bundle\") pod \"472ac269-e4e6-4378-95e8-a148798a3c5c\" (UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.862266 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/472ac269-e4e6-4378-95e8-a148798a3c5c-openstack-config-secret\") pod \"472ac269-e4e6-4378-95e8-a148798a3c5c\" (UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 
08:23:50.862367 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/472ac269-e4e6-4378-95e8-a148798a3c5c-openstack-config\") pod \"472ac269-e4e6-4378-95e8-a148798a3c5c\" (UID: \"472ac269-e4e6-4378-95e8-a148798a3c5c\") " Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.862608 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-scripts\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.862670 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.862746 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5694e7b-fde4-4592-810a-36d2f12db19e-log-httpd\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.862795 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5694e7b-fde4-4592-810a-36d2f12db19e-run-httpd\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.862822 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.862853 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbjjd\" (UniqueName: \"kubernetes.io/projected/a5694e7b-fde4-4592-810a-36d2f12db19e-kube-api-access-cbjjd\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.862892 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-config-data\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.869933 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472ac269-e4e6-4378-95e8-a148798a3c5c-kube-api-access-7tb97" (OuterVolumeSpecName: "kube-api-access-7tb97") pod "472ac269-e4e6-4378-95e8-a148798a3c5c" (UID: "472ac269-e4e6-4378-95e8-a148798a3c5c"). InnerVolumeSpecName "kube-api-access-7tb97". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.897479 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/472ac269-e4e6-4378-95e8-a148798a3c5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "472ac269-e4e6-4378-95e8-a148798a3c5c" (UID: "472ac269-e4e6-4378-95e8-a148798a3c5c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.898305 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/472ac269-e4e6-4378-95e8-a148798a3c5c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "472ac269-e4e6-4378-95e8-a148798a3c5c" (UID: "472ac269-e4e6-4378-95e8-a148798a3c5c"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.925868 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/472ac269-e4e6-4378-95e8-a148798a3c5c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "472ac269-e4e6-4378-95e8-a148798a3c5c" (UID: "472ac269-e4e6-4378-95e8-a148798a3c5c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.966212 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-config-data\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.966334 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-scripts\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.966385 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " 
pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.966475 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5694e7b-fde4-4592-810a-36d2f12db19e-log-httpd\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.966518 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5694e7b-fde4-4592-810a-36d2f12db19e-run-httpd\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.966559 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.966606 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbjjd\" (UniqueName: \"kubernetes.io/projected/a5694e7b-fde4-4592-810a-36d2f12db19e-kube-api-access-cbjjd\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.966699 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tb97\" (UniqueName: \"kubernetes.io/projected/472ac269-e4e6-4378-95e8-a148798a3c5c-kube-api-access-7tb97\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.966721 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/472ac269-e4e6-4378-95e8-a148798a3c5c-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.966735 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/472ac269-e4e6-4378-95e8-a148798a3c5c-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.966747 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/472ac269-e4e6-4378-95e8-a148798a3c5c-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.969465 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5694e7b-fde4-4592-810a-36d2f12db19e-run-httpd\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.969508 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5694e7b-fde4-4592-810a-36d2f12db19e-log-httpd\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.969789 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-scripts\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.971335 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.974680 
4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.975235 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-config-data\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:50 crc kubenswrapper[4958]: I1008 08:23:50.992422 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbjjd\" (UniqueName: \"kubernetes.io/projected/a5694e7b-fde4-4592-810a-36d2f12db19e-kube-api-access-cbjjd\") pod \"ceilometer-0\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") " pod="openstack/ceilometer-0" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.102618 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.276265 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.380616 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-web-config\") pod \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.380724 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scjrl\" (UniqueName: \"kubernetes.io/projected/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-kube-api-access-scjrl\") pod \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.380783 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-config\") pod \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.380902 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-thanos-prometheus-http-client-file\") pod \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.380973 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-tls-assets\") pod \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.381653 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\") pod \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.381683 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-prometheus-metric-storage-rulefiles-0\") pod \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.382360 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "a00e9acc-c16c-4ed4-8476-b88a9b0a8211" (UID: "a00e9acc-c16c-4ed4-8476-b88a9b0a8211"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.382688 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-config-out\") pod \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\" (UID: \"a00e9acc-c16c-4ed4-8476-b88a9b0a8211\") " Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.383329 4958 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.385745 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-config" (OuterVolumeSpecName: "config") pod "a00e9acc-c16c-4ed4-8476-b88a9b0a8211" (UID: "a00e9acc-c16c-4ed4-8476-b88a9b0a8211"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.385901 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "a00e9acc-c16c-4ed4-8476-b88a9b0a8211" (UID: "a00e9acc-c16c-4ed4-8476-b88a9b0a8211"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.386253 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "a00e9acc-c16c-4ed4-8476-b88a9b0a8211" (UID: "a00e9acc-c16c-4ed4-8476-b88a9b0a8211"). 
InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.386561 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-kube-api-access-scjrl" (OuterVolumeSpecName: "kube-api-access-scjrl") pod "a00e9acc-c16c-4ed4-8476-b88a9b0a8211" (UID: "a00e9acc-c16c-4ed4-8476-b88a9b0a8211"). InnerVolumeSpecName "kube-api-access-scjrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.387866 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-config-out" (OuterVolumeSpecName: "config-out") pod "a00e9acc-c16c-4ed4-8476-b88a9b0a8211" (UID: "a00e9acc-c16c-4ed4-8476-b88a9b0a8211"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.420100 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "a00e9acc-c16c-4ed4-8476-b88a9b0a8211" (UID: "a00e9acc-c16c-4ed4-8476-b88a9b0a8211"). InnerVolumeSpecName "pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.441444 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-web-config" (OuterVolumeSpecName: "web-config") pod "a00e9acc-c16c-4ed4-8476-b88a9b0a8211" (UID: "a00e9acc-c16c-4ed4-8476-b88a9b0a8211"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.485482 4958 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-web-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.485521 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scjrl\" (UniqueName: \"kubernetes.io/projected/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-kube-api-access-scjrl\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.485535 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.485548 4958 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.485560 4958 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-tls-assets\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.485604 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\") on node \"crc\" " Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.485620 4958 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a00e9acc-c16c-4ed4-8476-b88a9b0a8211-config-out\") on node \"crc\" DevicePath \"\"" Oct 08 
08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.520767 4958 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.520934 4958 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0") on node "crc" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.596412 4958 reconciler_common.go:293] "Volume detached for volume \"pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\") on node \"crc\" DevicePath \"\"" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.611571 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="472ac269-e4e6-4378-95e8-a148798a3c5c" path="/var/lib/kubelet/pods/472ac269-e4e6-4378-95e8-a148798a3c5c/volumes" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.622234 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.722350 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5694e7b-fde4-4592-810a-36d2f12db19e","Type":"ContainerStarted","Data":"f1f05bfb84228b9bf6f11a3cb41efb1c8ebce40ac093466d6c4251636c51cdf5"} Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.725063 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.725815 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.726167 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a00e9acc-c16c-4ed4-8476-b88a9b0a8211","Type":"ContainerDied","Data":"eff556a043801ad412cad2ba8a2f3877437ee8c74978dc18f76dae4004a10858"} Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.726221 4958 scope.go:117] "RemoveContainer" containerID="edabca47f655c29f88709b353efc1004501d2c55112a0268fdf02a45ab4986af" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.780273 4958 scope.go:117] "RemoveContainer" containerID="e1c2106e9ba2a68357c832eacb7d8df624810f73ab1df4f3f1f9368b00cfbd5e" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.800117 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.803733 4958 scope.go:117] "RemoveContainer" containerID="45603284eeafec67bbc94e36e195024a87af705a1404c18fb43a914656bf6e94" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.815206 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.823664 4958 scope.go:117] "RemoveContainer" containerID="7b996e1ccc261faab9ebeff236985517a12206587c98483d40eb53712393baac" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.835409 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 08 08:23:51 crc kubenswrapper[4958]: E1008 08:23:51.835845 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerName="thanos-sidecar" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.835858 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerName="thanos-sidecar" Oct 08 08:23:51 crc 
kubenswrapper[4958]: E1008 08:23:51.835881 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerName="config-reloader" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.835887 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerName="config-reloader" Oct 08 08:23:51 crc kubenswrapper[4958]: E1008 08:23:51.835909 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerName="prometheus" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.835915 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerName="prometheus" Oct 08 08:23:51 crc kubenswrapper[4958]: E1008 08:23:51.835928 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerName="init-config-reloader" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.835935 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerName="init-config-reloader" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.836128 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerName="config-reloader" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.836142 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerName="thanos-sidecar" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.836159 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerName="prometheus" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.837940 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.840114 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.840146 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-n4cxm" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.840497 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.840702 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.840844 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.841095 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.844575 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 08 08:23:51 crc kubenswrapper[4958]: I1008 08:23:51.846508 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.008549 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " 
pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.008928 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.009015 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrhkn\" (UniqueName: \"kubernetes.io/projected/820b139e-c192-4613-b18d-64a3ec276dae-kube-api-access-wrhkn\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.009146 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.009353 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.009415 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/820b139e-c192-4613-b18d-64a3ec276dae-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.009536 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.009606 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.009625 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-config\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.009740 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/820b139e-c192-4613-b18d-64a3ec276dae-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.009803 
4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/820b139e-c192-4613-b18d-64a3ec276dae-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.112888 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.112926 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/820b139e-c192-4613-b18d-64a3ec276dae-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.113156 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.113190 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 
08:23:52.113210 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-config\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.113252 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/820b139e-c192-4613-b18d-64a3ec276dae-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.113285 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/820b139e-c192-4613-b18d-64a3ec276dae-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.113311 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.113355 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod 
\"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.113378 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrhkn\" (UniqueName: \"kubernetes.io/projected/820b139e-c192-4613-b18d-64a3ec276dae-kube-api-access-wrhkn\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.113445 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.116105 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.116130 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/820b139e-c192-4613-b18d-64a3ec276dae-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.118653 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.119798 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.122575 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-config\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.123262 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/820b139e-c192-4613-b18d-64a3ec276dae-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.123757 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/820b139e-c192-4613-b18d-64a3ec276dae-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.125097 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.125133 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a7719de0310df70b681df3e88136895f5f523f2bc10bdd0cb101467cb5a01da0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.125201 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.131924 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820b139e-c192-4613-b18d-64a3ec276dae-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.146933 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrhkn\" (UniqueName: \"kubernetes.io/projected/820b139e-c192-4613-b18d-64a3ec276dae-kube-api-access-wrhkn\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.195191 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a29be908-7e9b-46dc-ae9c-ee98dd6877e0\") pod \"prometheus-metric-storage-0\" (UID: \"820b139e-c192-4613-b18d-64a3ec276dae\") " pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.497436 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 08 08:23:52 crc kubenswrapper[4958]: I1008 08:23:52.735681 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5694e7b-fde4-4592-810a-36d2f12db19e","Type":"ContainerStarted","Data":"919e9ed8d0fc09c944c6bb148166d762f21096f38adb2e71a9b4434228ca12ff"} Oct 08 08:23:53 crc kubenswrapper[4958]: I1008 08:23:53.078321 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 08 08:23:53 crc kubenswrapper[4958]: W1008 08:23:53.080389 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod820b139e_c192_4613_b18d_64a3ec276dae.slice/crio-135d6f45a3a3325789bcaf9d9f81e84089866d2b6b023a86c2fb554bb4a03cb5 WatchSource:0}: Error finding container 135d6f45a3a3325789bcaf9d9f81e84089866d2b6b023a86c2fb554bb4a03cb5: Status 404 returned error can't find the container with id 135d6f45a3a3325789bcaf9d9f81e84089866d2b6b023a86c2fb554bb4a03cb5 Oct 08 08:23:53 crc kubenswrapper[4958]: I1008 08:23:53.588646 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" path="/var/lib/kubelet/pods/a00e9acc-c16c-4ed4-8476-b88a9b0a8211/volumes" Oct 08 08:23:53 crc kubenswrapper[4958]: I1008 08:23:53.751755 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"820b139e-c192-4613-b18d-64a3ec276dae","Type":"ContainerStarted","Data":"135d6f45a3a3325789bcaf9d9f81e84089866d2b6b023a86c2fb554bb4a03cb5"} Oct 08 08:23:53 crc kubenswrapper[4958]: I1008 08:23:53.753068 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5694e7b-fde4-4592-810a-36d2f12db19e","Type":"ContainerStarted","Data":"a8e3719e665eef228d5ec700289811ad6d5054ab288fe2ee6a6e43a6f9942719"} Oct 08 08:23:54 crc kubenswrapper[4958]: I1008 08:23:54.214501 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="a00e9acc-c16c-4ed4-8476-b88a9b0a8211" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.151:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 08 08:23:54 crc kubenswrapper[4958]: I1008 08:23:54.765295 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5694e7b-fde4-4592-810a-36d2f12db19e","Type":"ContainerStarted","Data":"fb7556e01234f8058904a31d52ef9eaac73e78899e32e265be1c77e7f7b125e0"} Oct 08 08:23:56 crc kubenswrapper[4958]: I1008 08:23:56.796711 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5694e7b-fde4-4592-810a-36d2f12db19e","Type":"ContainerStarted","Data":"ad4798c4368be1cd62f9bac9b342fda16987b5797aa59bd2a9e0e6e2b624636e"} Oct 08 08:23:56 crc kubenswrapper[4958]: I1008 08:23:56.797107 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 08:23:56 crc kubenswrapper[4958]: I1008 08:23:56.798217 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"820b139e-c192-4613-b18d-64a3ec276dae","Type":"ContainerStarted","Data":"5ee4f8af4ae13ddf5089108bc3c6f1e35f863c37882cc350535c34857db45a0b"} Oct 08 08:23:56 crc kubenswrapper[4958]: I1008 08:23:56.834626 4958 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.953440928 podStartE2EDuration="6.834598957s" podCreationTimestamp="2025-10-08 08:23:50 +0000 UTC" firstStartedPulling="2025-10-08 08:23:51.609064747 +0000 UTC m=+6574.738757348" lastFinishedPulling="2025-10-08 08:23:55.490222766 +0000 UTC m=+6578.619915377" observedRunningTime="2025-10-08 08:23:56.824500053 +0000 UTC m=+6579.954192694" watchObservedRunningTime="2025-10-08 08:23:56.834598957 +0000 UTC m=+6579.964291588" Oct 08 08:23:58 crc kubenswrapper[4958]: I1008 08:23:58.048936 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1b00-account-create-c2wmn"] Oct 08 08:23:58 crc kubenswrapper[4958]: I1008 08:23:58.063546 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cf8f-account-create-cqsdt"] Oct 08 08:23:58 crc kubenswrapper[4958]: I1008 08:23:58.073766 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cf8f-account-create-cqsdt"] Oct 08 08:23:58 crc kubenswrapper[4958]: I1008 08:23:58.086520 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1b00-account-create-c2wmn"] Oct 08 08:23:59 crc kubenswrapper[4958]: I1008 08:23:59.037449 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-aa36-account-create-rkzt5"] Oct 08 08:23:59 crc kubenswrapper[4958]: I1008 08:23:59.048748 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-aa36-account-create-rkzt5"] Oct 08 08:23:59 crc kubenswrapper[4958]: I1008 08:23:59.605821 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0195dd3c-dfc6-46a2-ba6f-54de2167410d" path="/var/lib/kubelet/pods/0195dd3c-dfc6-46a2-ba6f-54de2167410d/volumes" Oct 08 08:23:59 crc kubenswrapper[4958]: I1008 08:23:59.607096 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48aeb23a-d59a-4ebe-9abf-82760945f25c" 
path="/var/lib/kubelet/pods/48aeb23a-d59a-4ebe-9abf-82760945f25c/volumes" Oct 08 08:23:59 crc kubenswrapper[4958]: I1008 08:23:59.607709 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd0af62-9469-44e9-a330-83bcaa079b79" path="/var/lib/kubelet/pods/4cd0af62-9469-44e9-a330-83bcaa079b79/volumes" Oct 08 08:24:00 crc kubenswrapper[4958]: I1008 08:24:00.158142 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-hqsx7"] Oct 08 08:24:00 crc kubenswrapper[4958]: I1008 08:24:00.161088 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-hqsx7" Oct 08 08:24:00 crc kubenswrapper[4958]: I1008 08:24:00.176650 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-hqsx7"] Oct 08 08:24:00 crc kubenswrapper[4958]: I1008 08:24:00.304548 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jft8c\" (UniqueName: \"kubernetes.io/projected/80e9bdef-307a-4761-953f-16b2db2a4496-kube-api-access-jft8c\") pod \"aodh-db-create-hqsx7\" (UID: \"80e9bdef-307a-4761-953f-16b2db2a4496\") " pod="openstack/aodh-db-create-hqsx7" Oct 08 08:24:00 crc kubenswrapper[4958]: I1008 08:24:00.407281 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jft8c\" (UniqueName: \"kubernetes.io/projected/80e9bdef-307a-4761-953f-16b2db2a4496-kube-api-access-jft8c\") pod \"aodh-db-create-hqsx7\" (UID: \"80e9bdef-307a-4761-953f-16b2db2a4496\") " pod="openstack/aodh-db-create-hqsx7" Oct 08 08:24:00 crc kubenswrapper[4958]: I1008 08:24:00.432395 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jft8c\" (UniqueName: \"kubernetes.io/projected/80e9bdef-307a-4761-953f-16b2db2a4496-kube-api-access-jft8c\") pod \"aodh-db-create-hqsx7\" (UID: \"80e9bdef-307a-4761-953f-16b2db2a4496\") " pod="openstack/aodh-db-create-hqsx7" Oct 08 08:24:00 
crc kubenswrapper[4958]: I1008 08:24:00.482994 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-hqsx7" Oct 08 08:24:01 crc kubenswrapper[4958]: I1008 08:24:01.031540 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-hqsx7"] Oct 08 08:24:01 crc kubenswrapper[4958]: I1008 08:24:01.855792 4958 generic.go:334] "Generic (PLEG): container finished" podID="80e9bdef-307a-4761-953f-16b2db2a4496" containerID="47c147e6b9954b488dbf4cecb978a53db3fe9fbda61df673e5dcd3713d4ab2eb" exitCode=0 Oct 08 08:24:01 crc kubenswrapper[4958]: I1008 08:24:01.855880 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-hqsx7" event={"ID":"80e9bdef-307a-4761-953f-16b2db2a4496","Type":"ContainerDied","Data":"47c147e6b9954b488dbf4cecb978a53db3fe9fbda61df673e5dcd3713d4ab2eb"} Oct 08 08:24:01 crc kubenswrapper[4958]: I1008 08:24:01.856164 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-hqsx7" event={"ID":"80e9bdef-307a-4761-953f-16b2db2a4496","Type":"ContainerStarted","Data":"b858ec640b35c1a80bb3c5c630e884cf167a0631f3577490cd7ff86aa317a9bd"} Oct 08 08:24:03 crc kubenswrapper[4958]: I1008 08:24:03.384542 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-hqsx7" Oct 08 08:24:03 crc kubenswrapper[4958]: I1008 08:24:03.489443 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jft8c\" (UniqueName: \"kubernetes.io/projected/80e9bdef-307a-4761-953f-16b2db2a4496-kube-api-access-jft8c\") pod \"80e9bdef-307a-4761-953f-16b2db2a4496\" (UID: \"80e9bdef-307a-4761-953f-16b2db2a4496\") " Oct 08 08:24:03 crc kubenswrapper[4958]: I1008 08:24:03.499507 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80e9bdef-307a-4761-953f-16b2db2a4496-kube-api-access-jft8c" (OuterVolumeSpecName: "kube-api-access-jft8c") pod "80e9bdef-307a-4761-953f-16b2db2a4496" (UID: "80e9bdef-307a-4761-953f-16b2db2a4496"). InnerVolumeSpecName "kube-api-access-jft8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:24:03 crc kubenswrapper[4958]: I1008 08:24:03.592654 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jft8c\" (UniqueName: \"kubernetes.io/projected/80e9bdef-307a-4761-953f-16b2db2a4496-kube-api-access-jft8c\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:03 crc kubenswrapper[4958]: I1008 08:24:03.887211 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-hqsx7" event={"ID":"80e9bdef-307a-4761-953f-16b2db2a4496","Type":"ContainerDied","Data":"b858ec640b35c1a80bb3c5c630e884cf167a0631f3577490cd7ff86aa317a9bd"} Oct 08 08:24:03 crc kubenswrapper[4958]: I1008 08:24:03.887886 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b858ec640b35c1a80bb3c5c630e884cf167a0631f3577490cd7ff86aa317a9bd" Oct 08 08:24:03 crc kubenswrapper[4958]: I1008 08:24:03.887285 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-hqsx7" Oct 08 08:24:04 crc kubenswrapper[4958]: I1008 08:24:04.902081 4958 generic.go:334] "Generic (PLEG): container finished" podID="820b139e-c192-4613-b18d-64a3ec276dae" containerID="5ee4f8af4ae13ddf5089108bc3c6f1e35f863c37882cc350535c34857db45a0b" exitCode=0 Oct 08 08:24:04 crc kubenswrapper[4958]: I1008 08:24:04.902143 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"820b139e-c192-4613-b18d-64a3ec276dae","Type":"ContainerDied","Data":"5ee4f8af4ae13ddf5089108bc3c6f1e35f863c37882cc350535c34857db45a0b"} Oct 08 08:24:05 crc kubenswrapper[4958]: I1008 08:24:05.916587 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"820b139e-c192-4613-b18d-64a3ec276dae","Type":"ContainerStarted","Data":"84ee4f6d552eaf81db8b844645177cb0b82a5b4ceff09cc60663df11e0206e20"} Oct 08 08:24:09 crc kubenswrapper[4958]: I1008 08:24:09.070573 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pd8b2"] Oct 08 08:24:09 crc kubenswrapper[4958]: I1008 08:24:09.080034 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pd8b2"] Oct 08 08:24:09 crc kubenswrapper[4958]: I1008 08:24:09.593735 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d74c2a5c-adb7-4296-8605-a6e45a39c494" path="/var/lib/kubelet/pods/d74c2a5c-adb7-4296-8605-a6e45a39c494/volumes" Oct 08 08:24:10 crc kubenswrapper[4958]: I1008 08:24:10.287758 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-fe9d-account-create-g6jr9"] Oct 08 08:24:10 crc kubenswrapper[4958]: E1008 08:24:10.288255 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80e9bdef-307a-4761-953f-16b2db2a4496" containerName="mariadb-database-create" Oct 08 08:24:10 crc kubenswrapper[4958]: I1008 08:24:10.288268 4958 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="80e9bdef-307a-4761-953f-16b2db2a4496" containerName="mariadb-database-create" Oct 08 08:24:10 crc kubenswrapper[4958]: I1008 08:24:10.289323 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="80e9bdef-307a-4761-953f-16b2db2a4496" containerName="mariadb-database-create" Oct 08 08:24:10 crc kubenswrapper[4958]: I1008 08:24:10.290170 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-fe9d-account-create-g6jr9" Oct 08 08:24:10 crc kubenswrapper[4958]: I1008 08:24:10.292360 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Oct 08 08:24:10 crc kubenswrapper[4958]: I1008 08:24:10.298771 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-fe9d-account-create-g6jr9"] Oct 08 08:24:10 crc kubenswrapper[4958]: I1008 08:24:10.393900 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qbcd\" (UniqueName: \"kubernetes.io/projected/3ad967a9-d897-4751-abbd-28a1c396efea-kube-api-access-9qbcd\") pod \"aodh-fe9d-account-create-g6jr9\" (UID: \"3ad967a9-d897-4751-abbd-28a1c396efea\") " pod="openstack/aodh-fe9d-account-create-g6jr9" Oct 08 08:24:10 crc kubenswrapper[4958]: I1008 08:24:10.496233 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qbcd\" (UniqueName: \"kubernetes.io/projected/3ad967a9-d897-4751-abbd-28a1c396efea-kube-api-access-9qbcd\") pod \"aodh-fe9d-account-create-g6jr9\" (UID: \"3ad967a9-d897-4751-abbd-28a1c396efea\") " pod="openstack/aodh-fe9d-account-create-g6jr9" Oct 08 08:24:10 crc kubenswrapper[4958]: I1008 08:24:10.516687 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qbcd\" (UniqueName: \"kubernetes.io/projected/3ad967a9-d897-4751-abbd-28a1c396efea-kube-api-access-9qbcd\") pod \"aodh-fe9d-account-create-g6jr9\" (UID: 
\"3ad967a9-d897-4751-abbd-28a1c396efea\") " pod="openstack/aodh-fe9d-account-create-g6jr9" Oct 08 08:24:10 crc kubenswrapper[4958]: I1008 08:24:10.664081 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-fe9d-account-create-g6jr9" Oct 08 08:24:10 crc kubenswrapper[4958]: I1008 08:24:10.988137 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"820b139e-c192-4613-b18d-64a3ec276dae","Type":"ContainerStarted","Data":"5be35f63d8196f7b95263b96edceb11a5c6d01a488b7bfa817dc4b29b1b37427"} Oct 08 08:24:10 crc kubenswrapper[4958]: I1008 08:24:10.988729 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"820b139e-c192-4613-b18d-64a3ec276dae","Type":"ContainerStarted","Data":"012ff94870a83431b1d1ef9d6e2d6d3ee6b791783f191d108cc0a0ce3dd6d9dc"} Oct 08 08:24:11 crc kubenswrapper[4958]: I1008 08:24:11.026319 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.026296111 podStartE2EDuration="20.026296111s" podCreationTimestamp="2025-10-08 08:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:24:11.021766088 +0000 UTC m=+6594.151458719" watchObservedRunningTime="2025-10-08 08:24:11.026296111 +0000 UTC m=+6594.155988752" Oct 08 08:24:11 crc kubenswrapper[4958]: I1008 08:24:11.251095 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-fe9d-account-create-g6jr9"] Oct 08 08:24:11 crc kubenswrapper[4958]: W1008 08:24:11.262557 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ad967a9_d897_4751_abbd_28a1c396efea.slice/crio-8f9cd8f41706e82eca696a0d30f25325022f12bb36e89b9dee36dc593352d367 WatchSource:0}: Error finding container 
8f9cd8f41706e82eca696a0d30f25325022f12bb36e89b9dee36dc593352d367: Status 404 returned error can't find the container with id 8f9cd8f41706e82eca696a0d30f25325022f12bb36e89b9dee36dc593352d367 Oct 08 08:24:11 crc kubenswrapper[4958]: I1008 08:24:11.599293 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5xfsg"] Oct 08 08:24:11 crc kubenswrapper[4958]: I1008 08:24:11.602016 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5xfsg"] Oct 08 08:24:11 crc kubenswrapper[4958]: I1008 08:24:11.602215 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:11 crc kubenswrapper[4958]: I1008 08:24:11.727499 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6qj7\" (UniqueName: \"kubernetes.io/projected/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-kube-api-access-k6qj7\") pod \"certified-operators-5xfsg\" (UID: \"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9\") " pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:11 crc kubenswrapper[4958]: I1008 08:24:11.727645 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-catalog-content\") pod \"certified-operators-5xfsg\" (UID: \"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9\") " pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:11 crc kubenswrapper[4958]: I1008 08:24:11.727834 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-utilities\") pod \"certified-operators-5xfsg\" (UID: \"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9\") " pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:11 crc 
kubenswrapper[4958]: I1008 08:24:11.830273 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6qj7\" (UniqueName: \"kubernetes.io/projected/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-kube-api-access-k6qj7\") pod \"certified-operators-5xfsg\" (UID: \"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9\") " pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:11 crc kubenswrapper[4958]: I1008 08:24:11.830361 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-catalog-content\") pod \"certified-operators-5xfsg\" (UID: \"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9\") " pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:11 crc kubenswrapper[4958]: I1008 08:24:11.830467 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-utilities\") pod \"certified-operators-5xfsg\" (UID: \"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9\") " pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:11 crc kubenswrapper[4958]: I1008 08:24:11.831243 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-utilities\") pod \"certified-operators-5xfsg\" (UID: \"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9\") " pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:11 crc kubenswrapper[4958]: I1008 08:24:11.831412 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-catalog-content\") pod \"certified-operators-5xfsg\" (UID: \"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9\") " pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:11 crc kubenswrapper[4958]: I1008 08:24:11.867225 
4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6qj7\" (UniqueName: \"kubernetes.io/projected/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-kube-api-access-k6qj7\") pod \"certified-operators-5xfsg\" (UID: \"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9\") " pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:11 crc kubenswrapper[4958]: I1008 08:24:11.931346 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:12 crc kubenswrapper[4958]: I1008 08:24:12.007026 4958 generic.go:334] "Generic (PLEG): container finished" podID="3ad967a9-d897-4751-abbd-28a1c396efea" containerID="c484b4f80e35e503661d5dbaffd41bcdb3db95a9f73d2169a8e74860449a266b" exitCode=0 Oct 08 08:24:12 crc kubenswrapper[4958]: I1008 08:24:12.007586 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-fe9d-account-create-g6jr9" event={"ID":"3ad967a9-d897-4751-abbd-28a1c396efea","Type":"ContainerDied","Data":"c484b4f80e35e503661d5dbaffd41bcdb3db95a9f73d2169a8e74860449a266b"} Oct 08 08:24:12 crc kubenswrapper[4958]: I1008 08:24:12.007624 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-fe9d-account-create-g6jr9" event={"ID":"3ad967a9-d897-4751-abbd-28a1c396efea","Type":"ContainerStarted","Data":"8f9cd8f41706e82eca696a0d30f25325022f12bb36e89b9dee36dc593352d367"} Oct 08 08:24:12 crc kubenswrapper[4958]: I1008 08:24:12.498762 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Oct 08 08:24:12 crc kubenswrapper[4958]: I1008 08:24:12.541365 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5xfsg"] Oct 08 08:24:12 crc kubenswrapper[4958]: W1008 08:24:12.550358 4958 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bf7ca23_42bc_4d3b_be18_c65ac4745ee9.slice/crio-ff3aa9d067dcf962ff416bfdf3c069b4ce1474af9b18a3d010f8f5284ca91c92 WatchSource:0}: Error finding container ff3aa9d067dcf962ff416bfdf3c069b4ce1474af9b18a3d010f8f5284ca91c92: Status 404 returned error can't find the container with id ff3aa9d067dcf962ff416bfdf3c069b4ce1474af9b18a3d010f8f5284ca91c92 Oct 08 08:24:13 crc kubenswrapper[4958]: I1008 08:24:13.022433 4958 generic.go:334] "Generic (PLEG): container finished" podID="7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" containerID="333f97f3471ca0a2f21e73a42aff6a1e40ff6f587e1590e2767fad8333aa2c29" exitCode=0 Oct 08 08:24:13 crc kubenswrapper[4958]: I1008 08:24:13.022658 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xfsg" event={"ID":"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9","Type":"ContainerDied","Data":"333f97f3471ca0a2f21e73a42aff6a1e40ff6f587e1590e2767fad8333aa2c29"} Oct 08 08:24:13 crc kubenswrapper[4958]: I1008 08:24:13.022764 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xfsg" event={"ID":"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9","Type":"ContainerStarted","Data":"ff3aa9d067dcf962ff416bfdf3c069b4ce1474af9b18a3d010f8f5284ca91c92"} Oct 08 08:24:13 crc kubenswrapper[4958]: I1008 08:24:13.485535 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-fe9d-account-create-g6jr9" Oct 08 08:24:13 crc kubenswrapper[4958]: I1008 08:24:13.592459 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qbcd\" (UniqueName: \"kubernetes.io/projected/3ad967a9-d897-4751-abbd-28a1c396efea-kube-api-access-9qbcd\") pod \"3ad967a9-d897-4751-abbd-28a1c396efea\" (UID: \"3ad967a9-d897-4751-abbd-28a1c396efea\") " Oct 08 08:24:13 crc kubenswrapper[4958]: I1008 08:24:13.601806 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad967a9-d897-4751-abbd-28a1c396efea-kube-api-access-9qbcd" (OuterVolumeSpecName: "kube-api-access-9qbcd") pod "3ad967a9-d897-4751-abbd-28a1c396efea" (UID: "3ad967a9-d897-4751-abbd-28a1c396efea"). InnerVolumeSpecName "kube-api-access-9qbcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:24:13 crc kubenswrapper[4958]: I1008 08:24:13.695036 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qbcd\" (UniqueName: \"kubernetes.io/projected/3ad967a9-d897-4751-abbd-28a1c396efea-kube-api-access-9qbcd\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:14 crc kubenswrapper[4958]: I1008 08:24:14.041851 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-fe9d-account-create-g6jr9" event={"ID":"3ad967a9-d897-4751-abbd-28a1c396efea","Type":"ContainerDied","Data":"8f9cd8f41706e82eca696a0d30f25325022f12bb36e89b9dee36dc593352d367"} Oct 08 08:24:14 crc kubenswrapper[4958]: I1008 08:24:14.041916 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f9cd8f41706e82eca696a0d30f25325022f12bb36e89b9dee36dc593352d367" Oct 08 08:24:14 crc kubenswrapper[4958]: I1008 08:24:14.042089 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-fe9d-account-create-g6jr9" Oct 08 08:24:15 crc kubenswrapper[4958]: I1008 08:24:15.064924 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xfsg" event={"ID":"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9","Type":"ContainerStarted","Data":"bb7e1a14bec434fc98662665c32939356f2960a857cd992d5d7dfde73ba2566b"} Oct 08 08:24:15 crc kubenswrapper[4958]: I1008 08:24:15.886224 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-s5dh6"] Oct 08 08:24:15 crc kubenswrapper[4958]: E1008 08:24:15.887672 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad967a9-d897-4751-abbd-28a1c396efea" containerName="mariadb-account-create" Oct 08 08:24:15 crc kubenswrapper[4958]: I1008 08:24:15.887776 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad967a9-d897-4751-abbd-28a1c396efea" containerName="mariadb-account-create" Oct 08 08:24:15 crc kubenswrapper[4958]: I1008 08:24:15.888146 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad967a9-d897-4751-abbd-28a1c396efea" containerName="mariadb-account-create" Oct 08 08:24:15 crc kubenswrapper[4958]: I1008 08:24:15.889313 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-s5dh6" Oct 08 08:24:15 crc kubenswrapper[4958]: I1008 08:24:15.891866 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Oct 08 08:24:15 crc kubenswrapper[4958]: I1008 08:24:15.892245 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-7qbsb" Oct 08 08:24:15 crc kubenswrapper[4958]: I1008 08:24:15.893288 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Oct 08 08:24:15 crc kubenswrapper[4958]: I1008 08:24:15.899917 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-s5dh6"] Oct 08 08:24:15 crc kubenswrapper[4958]: I1008 08:24:15.956661 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-combined-ca-bundle\") pod \"aodh-db-sync-s5dh6\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " pod="openstack/aodh-db-sync-s5dh6" Oct 08 08:24:15 crc kubenswrapper[4958]: I1008 08:24:15.956719 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-scripts\") pod \"aodh-db-sync-s5dh6\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " pod="openstack/aodh-db-sync-s5dh6" Oct 08 08:24:15 crc kubenswrapper[4958]: I1008 08:24:15.956764 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-config-data\") pod \"aodh-db-sync-s5dh6\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " pod="openstack/aodh-db-sync-s5dh6" Oct 08 08:24:15 crc kubenswrapper[4958]: I1008 08:24:15.956823 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-557fl\" (UniqueName: \"kubernetes.io/projected/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-kube-api-access-557fl\") pod \"aodh-db-sync-s5dh6\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " pod="openstack/aodh-db-sync-s5dh6" Oct 08 08:24:16 crc kubenswrapper[4958]: I1008 08:24:16.059390 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-combined-ca-bundle\") pod \"aodh-db-sync-s5dh6\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " pod="openstack/aodh-db-sync-s5dh6" Oct 08 08:24:16 crc kubenswrapper[4958]: I1008 08:24:16.059444 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-scripts\") pod \"aodh-db-sync-s5dh6\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " pod="openstack/aodh-db-sync-s5dh6" Oct 08 08:24:16 crc kubenswrapper[4958]: I1008 08:24:16.059480 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-config-data\") pod \"aodh-db-sync-s5dh6\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " pod="openstack/aodh-db-sync-s5dh6" Oct 08 08:24:16 crc kubenswrapper[4958]: I1008 08:24:16.059521 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-557fl\" (UniqueName: \"kubernetes.io/projected/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-kube-api-access-557fl\") pod \"aodh-db-sync-s5dh6\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " pod="openstack/aodh-db-sync-s5dh6" Oct 08 08:24:16 crc kubenswrapper[4958]: I1008 08:24:16.067126 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-combined-ca-bundle\") pod \"aodh-db-sync-s5dh6\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " pod="openstack/aodh-db-sync-s5dh6" Oct 08 08:24:16 crc kubenswrapper[4958]: I1008 08:24:16.067316 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-scripts\") pod \"aodh-db-sync-s5dh6\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " pod="openstack/aodh-db-sync-s5dh6" Oct 08 08:24:16 crc kubenswrapper[4958]: I1008 08:24:16.076423 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-config-data\") pod \"aodh-db-sync-s5dh6\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " pod="openstack/aodh-db-sync-s5dh6" Oct 08 08:24:16 crc kubenswrapper[4958]: I1008 08:24:16.080074 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-557fl\" (UniqueName: \"kubernetes.io/projected/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-kube-api-access-557fl\") pod \"aodh-db-sync-s5dh6\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " pod="openstack/aodh-db-sync-s5dh6" Oct 08 08:24:16 crc kubenswrapper[4958]: I1008 08:24:16.081758 4958 generic.go:334] "Generic (PLEG): container finished" podID="7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" containerID="bb7e1a14bec434fc98662665c32939356f2960a857cd992d5d7dfde73ba2566b" exitCode=0 Oct 08 08:24:16 crc kubenswrapper[4958]: I1008 08:24:16.081891 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xfsg" event={"ID":"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9","Type":"ContainerDied","Data":"bb7e1a14bec434fc98662665c32939356f2960a857cd992d5d7dfde73ba2566b"} Oct 08 08:24:16 crc kubenswrapper[4958]: I1008 08:24:16.275526 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-s5dh6" Oct 08 08:24:16 crc kubenswrapper[4958]: I1008 08:24:16.818222 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-s5dh6"] Oct 08 08:24:16 crc kubenswrapper[4958]: W1008 08:24:16.824605 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f2a7258_4c60_4ec5_ab90_5bfbfbe2bc85.slice/crio-6d95f6750e9da599bc50aef998b6ab6e8f1da6f8299f7f52d47741bebf5fb11a WatchSource:0}: Error finding container 6d95f6750e9da599bc50aef998b6ab6e8f1da6f8299f7f52d47741bebf5fb11a: Status 404 returned error can't find the container with id 6d95f6750e9da599bc50aef998b6ab6e8f1da6f8299f7f52d47741bebf5fb11a Oct 08 08:24:17 crc kubenswrapper[4958]: I1008 08:24:17.108253 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xfsg" event={"ID":"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9","Type":"ContainerStarted","Data":"55eb268161e76151901074980dccaca804b8c4e7984452d9c7405e762b39abcc"} Oct 08 08:24:17 crc kubenswrapper[4958]: I1008 08:24:17.109997 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s5dh6" event={"ID":"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85","Type":"ContainerStarted","Data":"6d95f6750e9da599bc50aef998b6ab6e8f1da6f8299f7f52d47741bebf5fb11a"} Oct 08 08:24:18 crc kubenswrapper[4958]: I1008 08:24:18.145087 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5xfsg" podStartSLOduration=3.547322204 podStartE2EDuration="7.145061938s" podCreationTimestamp="2025-10-08 08:24:11 +0000 UTC" firstStartedPulling="2025-10-08 08:24:13.026513364 +0000 UTC m=+6596.156206005" lastFinishedPulling="2025-10-08 08:24:16.624253138 +0000 UTC m=+6599.753945739" observedRunningTime="2025-10-08 08:24:18.138078329 +0000 UTC m=+6601.267770940" watchObservedRunningTime="2025-10-08 08:24:18.145061938 +0000 UTC 
m=+6601.274754549" Oct 08 08:24:21 crc kubenswrapper[4958]: I1008 08:24:21.114362 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 08 08:24:21 crc kubenswrapper[4958]: I1008 08:24:21.931788 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:21 crc kubenswrapper[4958]: I1008 08:24:21.932155 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:22 crc kubenswrapper[4958]: I1008 08:24:22.224327 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s5dh6" event={"ID":"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85","Type":"ContainerStarted","Data":"e16b874155330c9cc82399413d1abd65123a2ab23ed87ab7fcd42dfb9067da3a"} Oct 08 08:24:22 crc kubenswrapper[4958]: I1008 08:24:22.274215 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-s5dh6" podStartSLOduration=2.970241693 podStartE2EDuration="7.274186018s" podCreationTimestamp="2025-10-08 08:24:15 +0000 UTC" firstStartedPulling="2025-10-08 08:24:16.828619845 +0000 UTC m=+6599.958312446" lastFinishedPulling="2025-10-08 08:24:21.13256417 +0000 UTC m=+6604.262256771" observedRunningTime="2025-10-08 08:24:22.246592939 +0000 UTC m=+6605.376285540" watchObservedRunningTime="2025-10-08 08:24:22.274186018 +0000 UTC m=+6605.403878649" Oct 08 08:24:22 crc kubenswrapper[4958]: I1008 08:24:22.498621 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Oct 08 08:24:22 crc kubenswrapper[4958]: I1008 08:24:22.513603 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Oct 08 08:24:23 crc kubenswrapper[4958]: I1008 08:24:23.025503 4958 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-5xfsg" podUID="7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" containerName="registry-server" probeResult="failure" output=< Oct 08 08:24:23 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 08:24:23 crc kubenswrapper[4958]: > Oct 08 08:24:23 crc kubenswrapper[4958]: I1008 08:24:23.242978 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Oct 08 08:24:24 crc kubenswrapper[4958]: I1008 08:24:24.247513 4958 generic.go:334] "Generic (PLEG): container finished" podID="8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85" containerID="e16b874155330c9cc82399413d1abd65123a2ab23ed87ab7fcd42dfb9067da3a" exitCode=0 Oct 08 08:24:24 crc kubenswrapper[4958]: I1008 08:24:24.247613 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s5dh6" event={"ID":"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85","Type":"ContainerDied","Data":"e16b874155330c9cc82399413d1abd65123a2ab23ed87ab7fcd42dfb9067da3a"} Oct 08 08:24:25 crc kubenswrapper[4958]: I1008 08:24:25.204354 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 08 08:24:25 crc kubenswrapper[4958]: I1008 08:24:25.204561 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="e3272846-1a11-47c4-a1fd-f297c54dc462" containerName="kube-state-metrics" containerID="cri-o://a4816bf1a3bc1dfaf0c68007f2b388ddef016ac378854e6b5ec456c1867d36ee" gracePeriod=30 Oct 08 08:24:25 crc kubenswrapper[4958]: I1008 08:24:25.877889 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-s5dh6" Oct 08 08:24:25 crc kubenswrapper[4958]: I1008 08:24:25.885974 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.016012 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-scripts\") pod \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.016070 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs859\" (UniqueName: \"kubernetes.io/projected/e3272846-1a11-47c4-a1fd-f297c54dc462-kube-api-access-cs859\") pod \"e3272846-1a11-47c4-a1fd-f297c54dc462\" (UID: \"e3272846-1a11-47c4-a1fd-f297c54dc462\") " Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.016155 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-557fl\" (UniqueName: \"kubernetes.io/projected/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-kube-api-access-557fl\") pod \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.016251 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-combined-ca-bundle\") pod \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.016287 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-config-data\") pod \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\" (UID: \"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85\") " Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.028119 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e3272846-1a11-47c4-a1fd-f297c54dc462-kube-api-access-cs859" (OuterVolumeSpecName: "kube-api-access-cs859") pod "e3272846-1a11-47c4-a1fd-f297c54dc462" (UID: "e3272846-1a11-47c4-a1fd-f297c54dc462"). InnerVolumeSpecName "kube-api-access-cs859". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.040459 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-kube-api-access-557fl" (OuterVolumeSpecName: "kube-api-access-557fl") pod "8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85" (UID: "8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85"). InnerVolumeSpecName "kube-api-access-557fl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.047779 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-config-data" (OuterVolumeSpecName: "config-data") pod "8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85" (UID: "8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.050204 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85" (UID: "8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.067175 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-scripts" (OuterVolumeSpecName: "scripts") pod "8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85" (UID: "8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.118157 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.118196 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs859\" (UniqueName: \"kubernetes.io/projected/e3272846-1a11-47c4-a1fd-f297c54dc462-kube-api-access-cs859\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.118209 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-557fl\" (UniqueName: \"kubernetes.io/projected/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-kube-api-access-557fl\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.118220 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.118230 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.269510 4958 generic.go:334] "Generic (PLEG): container finished" podID="e3272846-1a11-47c4-a1fd-f297c54dc462" containerID="a4816bf1a3bc1dfaf0c68007f2b388ddef016ac378854e6b5ec456c1867d36ee" exitCode=2 Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.269575 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e3272846-1a11-47c4-a1fd-f297c54dc462","Type":"ContainerDied","Data":"a4816bf1a3bc1dfaf0c68007f2b388ddef016ac378854e6b5ec456c1867d36ee"} Oct 08 
08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.269604 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e3272846-1a11-47c4-a1fd-f297c54dc462","Type":"ContainerDied","Data":"d684424901e0d0043937ab6ae816dec544e0bf0b205815d366296e41a4b91fb3"}
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.269599 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.269622 4958 scope.go:117] "RemoveContainer" containerID="a4816bf1a3bc1dfaf0c68007f2b388ddef016ac378854e6b5ec456c1867d36ee"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.279488 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-s5dh6" event={"ID":"8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85","Type":"ContainerDied","Data":"6d95f6750e9da599bc50aef998b6ab6e8f1da6f8299f7f52d47741bebf5fb11a"}
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.279533 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d95f6750e9da599bc50aef998b6ab6e8f1da6f8299f7f52d47741bebf5fb11a"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.279544 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-s5dh6"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.312092 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.315161 4958 scope.go:117] "RemoveContainer" containerID="a4816bf1a3bc1dfaf0c68007f2b388ddef016ac378854e6b5ec456c1867d36ee"
Oct 08 08:24:26 crc kubenswrapper[4958]: E1008 08:24:26.319808 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4816bf1a3bc1dfaf0c68007f2b388ddef016ac378854e6b5ec456c1867d36ee\": container with ID starting with a4816bf1a3bc1dfaf0c68007f2b388ddef016ac378854e6b5ec456c1867d36ee not found: ID does not exist" containerID="a4816bf1a3bc1dfaf0c68007f2b388ddef016ac378854e6b5ec456c1867d36ee"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.319968 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4816bf1a3bc1dfaf0c68007f2b388ddef016ac378854e6b5ec456c1867d36ee"} err="failed to get container status \"a4816bf1a3bc1dfaf0c68007f2b388ddef016ac378854e6b5ec456c1867d36ee\": rpc error: code = NotFound desc = could not find container \"a4816bf1a3bc1dfaf0c68007f2b388ddef016ac378854e6b5ec456c1867d36ee\": container with ID starting with a4816bf1a3bc1dfaf0c68007f2b388ddef016ac378854e6b5ec456c1867d36ee not found: ID does not exist"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.328480 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.342978 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 08 08:24:26 crc kubenswrapper[4958]: E1008 08:24:26.343526 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3272846-1a11-47c4-a1fd-f297c54dc462" containerName="kube-state-metrics"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.343694 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3272846-1a11-47c4-a1fd-f297c54dc462" containerName="kube-state-metrics"
Oct 08 08:24:26 crc kubenswrapper[4958]: E1008 08:24:26.343716 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85" containerName="aodh-db-sync"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.343724 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85" containerName="aodh-db-sync"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.344340 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3272846-1a11-47c4-a1fd-f297c54dc462" containerName="kube-state-metrics"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.344363 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85" containerName="aodh-db-sync"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.345126 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.347093 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.347400 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.353238 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.527253 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1df803-110b-4f26-9e90-2c727d9bc1fa-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dc1df803-110b-4f26-9e90-2c727d9bc1fa\") " pod="openstack/kube-state-metrics-0"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.527355 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1df803-110b-4f26-9e90-2c727d9bc1fa-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dc1df803-110b-4f26-9e90-2c727d9bc1fa\") " pod="openstack/kube-state-metrics-0"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.527495 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dc1df803-110b-4f26-9e90-2c727d9bc1fa-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dc1df803-110b-4f26-9e90-2c727d9bc1fa\") " pod="openstack/kube-state-metrics-0"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.528145 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxgw2\" (UniqueName: \"kubernetes.io/projected/dc1df803-110b-4f26-9e90-2c727d9bc1fa-kube-api-access-hxgw2\") pod \"kube-state-metrics-0\" (UID: \"dc1df803-110b-4f26-9e90-2c727d9bc1fa\") " pod="openstack/kube-state-metrics-0"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.630159 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxgw2\" (UniqueName: \"kubernetes.io/projected/dc1df803-110b-4f26-9e90-2c727d9bc1fa-kube-api-access-hxgw2\") pod \"kube-state-metrics-0\" (UID: \"dc1df803-110b-4f26-9e90-2c727d9bc1fa\") " pod="openstack/kube-state-metrics-0"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.630301 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1df803-110b-4f26-9e90-2c727d9bc1fa-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dc1df803-110b-4f26-9e90-2c727d9bc1fa\") " pod="openstack/kube-state-metrics-0"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.630356 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1df803-110b-4f26-9e90-2c727d9bc1fa-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dc1df803-110b-4f26-9e90-2c727d9bc1fa\") " pod="openstack/kube-state-metrics-0"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.630382 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dc1df803-110b-4f26-9e90-2c727d9bc1fa-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dc1df803-110b-4f26-9e90-2c727d9bc1fa\") " pod="openstack/kube-state-metrics-0"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.633625 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc1df803-110b-4f26-9e90-2c727d9bc1fa-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dc1df803-110b-4f26-9e90-2c727d9bc1fa\") " pod="openstack/kube-state-metrics-0"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.634275 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc1df803-110b-4f26-9e90-2c727d9bc1fa-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dc1df803-110b-4f26-9e90-2c727d9bc1fa\") " pod="openstack/kube-state-metrics-0"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.637710 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dc1df803-110b-4f26-9e90-2c727d9bc1fa-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dc1df803-110b-4f26-9e90-2c727d9bc1fa\") " pod="openstack/kube-state-metrics-0"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.647760 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxgw2\" (UniqueName: \"kubernetes.io/projected/dc1df803-110b-4f26-9e90-2c727d9bc1fa-kube-api-access-hxgw2\") pod \"kube-state-metrics-0\" (UID: \"dc1df803-110b-4f26-9e90-2c727d9bc1fa\") " pod="openstack/kube-state-metrics-0"
Oct 08 08:24:26 crc kubenswrapper[4958]: I1008 08:24:26.692983 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 08 08:24:27 crc kubenswrapper[4958]: I1008 08:24:27.035860 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8w9dp"]
Oct 08 08:24:27 crc kubenswrapper[4958]: I1008 08:24:27.046941 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8w9dp"]
Oct 08 08:24:27 crc kubenswrapper[4958]: I1008 08:24:27.146767 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 08 08:24:27 crc kubenswrapper[4958]: W1008 08:24:27.147198 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc1df803_110b_4f26_9e90_2c727d9bc1fa.slice/crio-67013b5c5a9dba9b7fee4274b1c7c2c3458586ff245eab548f362d59ccb96905 WatchSource:0}: Error finding container 67013b5c5a9dba9b7fee4274b1c7c2c3458586ff245eab548f362d59ccb96905: Status 404 returned error can't find the container with id 67013b5c5a9dba9b7fee4274b1c7c2c3458586ff245eab548f362d59ccb96905
Oct 08 08:24:27 crc kubenswrapper[4958]: I1008 08:24:27.230415 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 08:24:27 crc kubenswrapper[4958]: I1008 08:24:27.230748 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="ceilometer-central-agent" containerID="cri-o://919e9ed8d0fc09c944c6bb148166d762f21096f38adb2e71a9b4434228ca12ff" gracePeriod=30
Oct 08 08:24:27 crc kubenswrapper[4958]: I1008 08:24:27.230807 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="proxy-httpd" containerID="cri-o://ad4798c4368be1cd62f9bac9b342fda16987b5797aa59bd2a9e0e6e2b624636e" gracePeriod=30
Oct 08 08:24:27 crc kubenswrapper[4958]: I1008 08:24:27.231016 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="ceilometer-notification-agent" containerID="cri-o://a8e3719e665eef228d5ec700289811ad6d5054ab288fe2ee6a6e43a6f9942719" gracePeriod=30
Oct 08 08:24:27 crc kubenswrapper[4958]: I1008 08:24:27.231074 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="sg-core" containerID="cri-o://fb7556e01234f8058904a31d52ef9eaac73e78899e32e265be1c77e7f7b125e0" gracePeriod=30
Oct 08 08:24:27 crc kubenswrapper[4958]: I1008 08:24:27.305405 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dc1df803-110b-4f26-9e90-2c727d9bc1fa","Type":"ContainerStarted","Data":"67013b5c5a9dba9b7fee4274b1c7c2c3458586ff245eab548f362d59ccb96905"}
Oct 08 08:24:27 crc kubenswrapper[4958]: I1008 08:24:27.611449 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1935d212-d2e9-4c81-b5ee-ab05ab45cf51" path="/var/lib/kubelet/pods/1935d212-d2e9-4c81-b5ee-ab05ab45cf51/volumes"
Oct 08 08:24:27 crc kubenswrapper[4958]: I1008 08:24:27.612155 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3272846-1a11-47c4-a1fd-f297c54dc462" path="/var/lib/kubelet/pods/e3272846-1a11-47c4-a1fd-f297c54dc462/volumes"
Oct 08 08:24:28 crc kubenswrapper[4958]: I1008 08:24:28.036449 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-rqcvh"]
Oct 08 08:24:28 crc kubenswrapper[4958]: I1008 08:24:28.053401 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-rqcvh"]
Oct 08 08:24:28 crc kubenswrapper[4958]: I1008 08:24:28.322662 4958 generic.go:334] "Generic (PLEG): container finished" podID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerID="ad4798c4368be1cd62f9bac9b342fda16987b5797aa59bd2a9e0e6e2b624636e" exitCode=0
Oct 08 08:24:28 crc kubenswrapper[4958]: I1008 08:24:28.323077 4958 generic.go:334] "Generic (PLEG): container finished" podID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerID="fb7556e01234f8058904a31d52ef9eaac73e78899e32e265be1c77e7f7b125e0" exitCode=2
Oct 08 08:24:28 crc kubenswrapper[4958]: I1008 08:24:28.323093 4958 generic.go:334] "Generic (PLEG): container finished" podID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerID="919e9ed8d0fc09c944c6bb148166d762f21096f38adb2e71a9b4434228ca12ff" exitCode=0
Oct 08 08:24:28 crc kubenswrapper[4958]: I1008 08:24:28.323091 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5694e7b-fde4-4592-810a-36d2f12db19e","Type":"ContainerDied","Data":"ad4798c4368be1cd62f9bac9b342fda16987b5797aa59bd2a9e0e6e2b624636e"}
Oct 08 08:24:28 crc kubenswrapper[4958]: I1008 08:24:28.323141 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5694e7b-fde4-4592-810a-36d2f12db19e","Type":"ContainerDied","Data":"fb7556e01234f8058904a31d52ef9eaac73e78899e32e265be1c77e7f7b125e0"}
Oct 08 08:24:28 crc kubenswrapper[4958]: I1008 08:24:28.323152 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5694e7b-fde4-4592-810a-36d2f12db19e","Type":"ContainerDied","Data":"919e9ed8d0fc09c944c6bb148166d762f21096f38adb2e71a9b4434228ca12ff"}
Oct 08 08:24:28 crc kubenswrapper[4958]: I1008 08:24:28.336962 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dc1df803-110b-4f26-9e90-2c727d9bc1fa","Type":"ContainerStarted","Data":"34aa59b26b08f22222c61262859c73c24eee39b8191a92ba16e3055ddd08e9b8"}
Oct 08 08:24:28 crc kubenswrapper[4958]: I1008 08:24:28.337350 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Oct 08 08:24:28 crc kubenswrapper[4958]: I1008 08:24:28.359297 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.7300203079999998 podStartE2EDuration="2.359278429s" podCreationTimestamp="2025-10-08 08:24:26 +0000 UTC" firstStartedPulling="2025-10-08 08:24:27.14985491 +0000 UTC m=+6610.279547511" lastFinishedPulling="2025-10-08 08:24:27.779113041 +0000 UTC m=+6610.908805632" observedRunningTime="2025-10-08 08:24:28.349888504 +0000 UTC m=+6611.479581105" watchObservedRunningTime="2025-10-08 08:24:28.359278429 +0000 UTC m=+6611.488971030"
Oct 08 08:24:29 crc kubenswrapper[4958]: I1008 08:24:29.607141 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d5f7719-4d19-4a73-857c-574ba6d31f44" path="/var/lib/kubelet/pods/8d5f7719-4d19-4a73-857c-574ba6d31f44/volumes"
Oct 08 08:24:29 crc kubenswrapper[4958]: I1008 08:24:29.996100 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.124645 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-sg-core-conf-yaml\") pod \"a5694e7b-fde4-4592-810a-36d2f12db19e\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") "
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.124715 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-config-data\") pod \"a5694e7b-fde4-4592-810a-36d2f12db19e\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") "
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.124749 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-combined-ca-bundle\") pod \"a5694e7b-fde4-4592-810a-36d2f12db19e\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") "
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.124860 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbjjd\" (UniqueName: \"kubernetes.io/projected/a5694e7b-fde4-4592-810a-36d2f12db19e-kube-api-access-cbjjd\") pod \"a5694e7b-fde4-4592-810a-36d2f12db19e\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") "
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.124893 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5694e7b-fde4-4592-810a-36d2f12db19e-log-httpd\") pod \"a5694e7b-fde4-4592-810a-36d2f12db19e\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") "
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.125071 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5694e7b-fde4-4592-810a-36d2f12db19e-run-httpd\") pod \"a5694e7b-fde4-4592-810a-36d2f12db19e\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") "
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.125126 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-scripts\") pod \"a5694e7b-fde4-4592-810a-36d2f12db19e\" (UID: \"a5694e7b-fde4-4592-810a-36d2f12db19e\") "
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.125548 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5694e7b-fde4-4592-810a-36d2f12db19e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a5694e7b-fde4-4592-810a-36d2f12db19e" (UID: "a5694e7b-fde4-4592-810a-36d2f12db19e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.125625 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5694e7b-fde4-4592-810a-36d2f12db19e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a5694e7b-fde4-4592-810a-36d2f12db19e" (UID: "a5694e7b-fde4-4592-810a-36d2f12db19e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.129800 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5694e7b-fde4-4592-810a-36d2f12db19e-kube-api-access-cbjjd" (OuterVolumeSpecName: "kube-api-access-cbjjd") pod "a5694e7b-fde4-4592-810a-36d2f12db19e" (UID: "a5694e7b-fde4-4592-810a-36d2f12db19e"). InnerVolumeSpecName "kube-api-access-cbjjd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.133289 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-scripts" (OuterVolumeSpecName: "scripts") pod "a5694e7b-fde4-4592-810a-36d2f12db19e" (UID: "a5694e7b-fde4-4592-810a-36d2f12db19e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.165075 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a5694e7b-fde4-4592-810a-36d2f12db19e" (UID: "a5694e7b-fde4-4592-810a-36d2f12db19e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.228074 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbjjd\" (UniqueName: \"kubernetes.io/projected/a5694e7b-fde4-4592-810a-36d2f12db19e-kube-api-access-cbjjd\") on node \"crc\" DevicePath \"\""
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.228114 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5694e7b-fde4-4592-810a-36d2f12db19e-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.228124 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5694e7b-fde4-4592-810a-36d2f12db19e-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.228132 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-scripts\") on node \"crc\" DevicePath \"\""
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.228141 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.231845 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5694e7b-fde4-4592-810a-36d2f12db19e" (UID: "a5694e7b-fde4-4592-810a-36d2f12db19e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.291837 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Oct 08 08:24:30 crc kubenswrapper[4958]: E1008 08:24:30.292340 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="sg-core"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.292352 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="sg-core"
Oct 08 08:24:30 crc kubenswrapper[4958]: E1008 08:24:30.292418 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="ceilometer-central-agent"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.292425 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="ceilometer-central-agent"
Oct 08 08:24:30 crc kubenswrapper[4958]: E1008 08:24:30.292439 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="proxy-httpd"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.292445 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="proxy-httpd"
Oct 08 08:24:30 crc kubenswrapper[4958]: E1008 08:24:30.292462 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="ceilometer-notification-agent"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.292468 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="ceilometer-notification-agent"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.292673 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="ceilometer-central-agent"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.292685 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="proxy-httpd"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.292697 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="sg-core"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.292704 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerName="ceilometer-notification-agent"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.294509 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.297274 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-7qbsb"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.298253 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.298557 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.298636 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.324606 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-config-data" (OuterVolumeSpecName: "config-data") pod "a5694e7b-fde4-4592-810a-36d2f12db19e" (UID: "a5694e7b-fde4-4592-810a-36d2f12db19e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.331598 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " pod="openstack/aodh-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.331719 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sczw4\" (UniqueName: \"kubernetes.io/projected/d1d34f3f-6806-488b-a32b-bb13f4d5976a-kube-api-access-sczw4\") pod \"aodh-0\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " pod="openstack/aodh-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.331789 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-config-data\") pod \"aodh-0\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " pod="openstack/aodh-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.331850 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-scripts\") pod \"aodh-0\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " pod="openstack/aodh-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.332250 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-config-data\") on node \"crc\" DevicePath \"\""
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.332275 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5694e7b-fde4-4592-810a-36d2f12db19e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.367912 4958 generic.go:334] "Generic (PLEG): container finished" podID="a5694e7b-fde4-4592-810a-36d2f12db19e" containerID="a8e3719e665eef228d5ec700289811ad6d5054ab288fe2ee6a6e43a6f9942719" exitCode=0
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.367964 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5694e7b-fde4-4592-810a-36d2f12db19e","Type":"ContainerDied","Data":"a8e3719e665eef228d5ec700289811ad6d5054ab288fe2ee6a6e43a6f9942719"}
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.367989 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5694e7b-fde4-4592-810a-36d2f12db19e","Type":"ContainerDied","Data":"f1f05bfb84228b9bf6f11a3cb41efb1c8ebce40ac093466d6c4251636c51cdf5"}
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.368005 4958 scope.go:117] "RemoveContainer" containerID="ad4798c4368be1cd62f9bac9b342fda16987b5797aa59bd2a9e0e6e2b624636e"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.368120 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.406599 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.415106 4958 scope.go:117] "RemoveContainer" containerID="fb7556e01234f8058904a31d52ef9eaac73e78899e32e265be1c77e7f7b125e0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.417596 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.424598 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.442057 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.444800 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.445158 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.445340 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.447241 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sczw4\" (UniqueName: \"kubernetes.io/projected/d1d34f3f-6806-488b-a32b-bb13f4d5976a-kube-api-access-sczw4\") pod \"aodh-0\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " pod="openstack/aodh-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.447328 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-config-data\") pod \"aodh-0\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " pod="openstack/aodh-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.447383 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-scripts\") pod \"aodh-0\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " pod="openstack/aodh-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.447481 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " pod="openstack/aodh-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.457914 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-config-data\") pod \"aodh-0\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " pod="openstack/aodh-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.465549 4958 scope.go:117] "RemoveContainer" containerID="a8e3719e665eef228d5ec700289811ad6d5054ab288fe2ee6a6e43a6f9942719"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.474583 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " pod="openstack/aodh-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.480104 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.485447 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-scripts\") pod \"aodh-0\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " pod="openstack/aodh-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.492921 4958 scope.go:117] "RemoveContainer" containerID="919e9ed8d0fc09c944c6bb148166d762f21096f38adb2e71a9b4434228ca12ff"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.494779 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sczw4\" (UniqueName: \"kubernetes.io/projected/d1d34f3f-6806-488b-a32b-bb13f4d5976a-kube-api-access-sczw4\") pod \"aodh-0\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " pod="openstack/aodh-0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.511234 4958 scope.go:117] "RemoveContainer" containerID="ad4798c4368be1cd62f9bac9b342fda16987b5797aa59bd2a9e0e6e2b624636e"
Oct 08 08:24:30 crc kubenswrapper[4958]: E1008 08:24:30.511598 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad4798c4368be1cd62f9bac9b342fda16987b5797aa59bd2a9e0e6e2b624636e\": container with ID starting with ad4798c4368be1cd62f9bac9b342fda16987b5797aa59bd2a9e0e6e2b624636e not found: ID does not exist" containerID="ad4798c4368be1cd62f9bac9b342fda16987b5797aa59bd2a9e0e6e2b624636e"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.511631 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad4798c4368be1cd62f9bac9b342fda16987b5797aa59bd2a9e0e6e2b624636e"} err="failed to get container status \"ad4798c4368be1cd62f9bac9b342fda16987b5797aa59bd2a9e0e6e2b624636e\": rpc error: code = NotFound desc = could not find container \"ad4798c4368be1cd62f9bac9b342fda16987b5797aa59bd2a9e0e6e2b624636e\": container with ID starting with ad4798c4368be1cd62f9bac9b342fda16987b5797aa59bd2a9e0e6e2b624636e not found: ID does not exist"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.511652 4958 scope.go:117] "RemoveContainer" containerID="fb7556e01234f8058904a31d52ef9eaac73e78899e32e265be1c77e7f7b125e0"
Oct 08 08:24:30 crc kubenswrapper[4958]: E1008 08:24:30.512851 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb7556e01234f8058904a31d52ef9eaac73e78899e32e265be1c77e7f7b125e0\": container with ID starting with fb7556e01234f8058904a31d52ef9eaac73e78899e32e265be1c77e7f7b125e0 not found: ID does not exist" containerID="fb7556e01234f8058904a31d52ef9eaac73e78899e32e265be1c77e7f7b125e0"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.512878 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7556e01234f8058904a31d52ef9eaac73e78899e32e265be1c77e7f7b125e0"} err="failed to get container status \"fb7556e01234f8058904a31d52ef9eaac73e78899e32e265be1c77e7f7b125e0\": rpc error: code = NotFound desc = could not find container \"fb7556e01234f8058904a31d52ef9eaac73e78899e32e265be1c77e7f7b125e0\": container with ID starting with fb7556e01234f8058904a31d52ef9eaac73e78899e32e265be1c77e7f7b125e0 not found: ID does not exist"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.512893 4958 scope.go:117] "RemoveContainer" containerID="a8e3719e665eef228d5ec700289811ad6d5054ab288fe2ee6a6e43a6f9942719"
Oct 08 08:24:30 crc kubenswrapper[4958]: E1008 08:24:30.513079 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8e3719e665eef228d5ec700289811ad6d5054ab288fe2ee6a6e43a6f9942719\": container with ID starting with a8e3719e665eef228d5ec700289811ad6d5054ab288fe2ee6a6e43a6f9942719 not found: ID does not exist" containerID="a8e3719e665eef228d5ec700289811ad6d5054ab288fe2ee6a6e43a6f9942719"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.513102 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e3719e665eef228d5ec700289811ad6d5054ab288fe2ee6a6e43a6f9942719"} err="failed to get container status \"a8e3719e665eef228d5ec700289811ad6d5054ab288fe2ee6a6e43a6f9942719\": rpc error: code = NotFound desc = could not find container \"a8e3719e665eef228d5ec700289811ad6d5054ab288fe2ee6a6e43a6f9942719\": container with ID starting with a8e3719e665eef228d5ec700289811ad6d5054ab288fe2ee6a6e43a6f9942719 not found: ID does not exist"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.513115 4958 scope.go:117] "RemoveContainer" containerID="919e9ed8d0fc09c944c6bb148166d762f21096f38adb2e71a9b4434228ca12ff"
Oct 08 08:24:30 crc kubenswrapper[4958]: E1008 08:24:30.513360 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"919e9ed8d0fc09c944c6bb148166d762f21096f38adb2e71a9b4434228ca12ff\": container with ID starting with 919e9ed8d0fc09c944c6bb148166d762f21096f38adb2e71a9b4434228ca12ff not found: ID does not exist" containerID="919e9ed8d0fc09c944c6bb148166d762f21096f38adb2e71a9b4434228ca12ff"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.513375 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"919e9ed8d0fc09c944c6bb148166d762f21096f38adb2e71a9b4434228ca12ff"} err="failed to get container status \"919e9ed8d0fc09c944c6bb148166d762f21096f38adb2e71a9b4434228ca12ff\": rpc error: code = NotFound desc = could not find container \"919e9ed8d0fc09c944c6bb148166d762f21096f38adb2e71a9b4434228ca12ff\": container with ID starting with 919e9ed8d0fc09c944c6bb148166d762f21096f38adb2e71a9b4434228ca12ff not found: ID does not exist"
Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.549873 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-scripts\") pod \"ceilometer-0\" (UID: 
\"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.549995 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255ef5c5-c291-4404-a27f-65c24ac06f02-log-httpd\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.550036 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-config-data\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.550051 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.550100 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255ef5c5-c291-4404-a27f-65c24ac06f02-run-httpd\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.550115 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 
08:24:30.550156 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.550186 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsw4g\" (UniqueName: \"kubernetes.io/projected/255ef5c5-c291-4404-a27f-65c24ac06f02-kube-api-access-jsw4g\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.631855 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.652739 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsw4g\" (UniqueName: \"kubernetes.io/projected/255ef5c5-c291-4404-a27f-65c24ac06f02-kube-api-access-jsw4g\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.652808 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-scripts\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.652930 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255ef5c5-c291-4404-a27f-65c24ac06f02-log-httpd\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 
08:24:30.653005 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-config-data\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.653031 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.653102 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255ef5c5-c291-4404-a27f-65c24ac06f02-run-httpd\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.653123 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.653182 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.653329 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255ef5c5-c291-4404-a27f-65c24ac06f02-log-httpd\") pod \"ceilometer-0\" 
(UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.654488 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255ef5c5-c291-4404-a27f-65c24ac06f02-run-httpd\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.657073 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.659090 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.659733 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-scripts\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.661913 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.661978 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-config-data\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.672394 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsw4g\" (UniqueName: \"kubernetes.io/projected/255ef5c5-c291-4404-a27f-65c24ac06f02-kube-api-access-jsw4g\") pod \"ceilometer-0\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " pod="openstack/ceilometer-0" Oct 08 08:24:30 crc kubenswrapper[4958]: I1008 08:24:30.770279 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 08:24:31 crc kubenswrapper[4958]: I1008 08:24:31.051927 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 08 08:24:31 crc kubenswrapper[4958]: W1008 08:24:31.063642 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1d34f3f_6806_488b_a32b_bb13f4d5976a.slice/crio-51857a0c206eecccbe3b800e56e45da9f3ef4eff2185c5bf50cd80c40f5f8e0a WatchSource:0}: Error finding container 51857a0c206eecccbe3b800e56e45da9f3ef4eff2185c5bf50cd80c40f5f8e0a: Status 404 returned error can't find the container with id 51857a0c206eecccbe3b800e56e45da9f3ef4eff2185c5bf50cd80c40f5f8e0a Oct 08 08:24:31 crc kubenswrapper[4958]: I1008 08:24:31.309733 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 08:24:31 crc kubenswrapper[4958]: W1008 08:24:31.314239 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod255ef5c5_c291_4404_a27f_65c24ac06f02.slice/crio-eb31aa694ecb8b9a063f200824b8798ccfa6cf57cfea819b345f7ce1f1d9a612 WatchSource:0}: Error finding container eb31aa694ecb8b9a063f200824b8798ccfa6cf57cfea819b345f7ce1f1d9a612: Status 404 returned error can't find the 
container with id eb31aa694ecb8b9a063f200824b8798ccfa6cf57cfea819b345f7ce1f1d9a612 Oct 08 08:24:31 crc kubenswrapper[4958]: I1008 08:24:31.378196 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255ef5c5-c291-4404-a27f-65c24ac06f02","Type":"ContainerStarted","Data":"eb31aa694ecb8b9a063f200824b8798ccfa6cf57cfea819b345f7ce1f1d9a612"} Oct 08 08:24:31 crc kubenswrapper[4958]: I1008 08:24:31.380203 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1d34f3f-6806-488b-a32b-bb13f4d5976a","Type":"ContainerStarted","Data":"51857a0c206eecccbe3b800e56e45da9f3ef4eff2185c5bf50cd80c40f5f8e0a"} Oct 08 08:24:31 crc kubenswrapper[4958]: I1008 08:24:31.592230 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5694e7b-fde4-4592-810a-36d2f12db19e" path="/var/lib/kubelet/pods/a5694e7b-fde4-4592-810a-36d2f12db19e/volumes" Oct 08 08:24:31 crc kubenswrapper[4958]: I1008 08:24:31.982588 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:32 crc kubenswrapper[4958]: I1008 08:24:32.069612 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:32 crc kubenswrapper[4958]: I1008 08:24:32.231279 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5xfsg"] Oct 08 08:24:32 crc kubenswrapper[4958]: I1008 08:24:32.390594 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1d34f3f-6806-488b-a32b-bb13f4d5976a","Type":"ContainerStarted","Data":"3b39b2809c2001fb6bfa6be9616e87350c643024f7a793f85b7199085aa97309"} Oct 08 08:24:32 crc kubenswrapper[4958]: I1008 08:24:32.393209 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"255ef5c5-c291-4404-a27f-65c24ac06f02","Type":"ContainerStarted","Data":"c0259ed010ca24cba5c9237c2cdedf5e3cfc4f3f7e589b48fe84b5bc8c757965"} Oct 08 08:24:33 crc kubenswrapper[4958]: I1008 08:24:33.403819 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5xfsg" podUID="7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" containerName="registry-server" containerID="cri-o://55eb268161e76151901074980dccaca804b8c4e7984452d9c7405e762b39abcc" gracePeriod=2 Oct 08 08:24:33 crc kubenswrapper[4958]: I1008 08:24:33.404008 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255ef5c5-c291-4404-a27f-65c24ac06f02","Type":"ContainerStarted","Data":"9e91cd3978e16a74373f3ae1841fb2b8cdc006158158df7559e88fec2801352d"} Oct 08 08:24:33 crc kubenswrapper[4958]: I1008 08:24:33.553857 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Oct 08 08:24:33 crc kubenswrapper[4958]: I1008 08:24:33.707261 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.117027 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.226698 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6qj7\" (UniqueName: \"kubernetes.io/projected/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-kube-api-access-k6qj7\") pod \"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9\" (UID: \"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9\") " Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.227682 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-utilities\") pod \"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9\" (UID: \"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9\") " Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.227748 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-catalog-content\") pod \"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9\" (UID: \"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9\") " Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.228564 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-utilities" (OuterVolumeSpecName: "utilities") pod "7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" (UID: "7bf7ca23-42bc-4d3b-be18-c65ac4745ee9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.240690 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-kube-api-access-k6qj7" (OuterVolumeSpecName: "kube-api-access-k6qj7") pod "7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" (UID: "7bf7ca23-42bc-4d3b-be18-c65ac4745ee9"). InnerVolumeSpecName "kube-api-access-k6qj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.273084 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" (UID: "7bf7ca23-42bc-4d3b-be18-c65ac4745ee9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.330233 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6qj7\" (UniqueName: \"kubernetes.io/projected/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-kube-api-access-k6qj7\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.330267 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.330277 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.416292 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255ef5c5-c291-4404-a27f-65c24ac06f02","Type":"ContainerStarted","Data":"2307b896dead67e293f79a6f782524a4f08581353d2b3083a48afd5b63267525"} Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.419418 4958 generic.go:334] "Generic (PLEG): container finished" podID="7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" containerID="55eb268161e76151901074980dccaca804b8c4e7984452d9c7405e762b39abcc" exitCode=0 Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.419479 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-5xfsg" event={"ID":"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9","Type":"ContainerDied","Data":"55eb268161e76151901074980dccaca804b8c4e7984452d9c7405e762b39abcc"} Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.419500 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5xfsg" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.419515 4958 scope.go:117] "RemoveContainer" containerID="55eb268161e76151901074980dccaca804b8c4e7984452d9c7405e762b39abcc" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.419506 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xfsg" event={"ID":"7bf7ca23-42bc-4d3b-be18-c65ac4745ee9","Type":"ContainerDied","Data":"ff3aa9d067dcf962ff416bfdf3c069b4ce1474af9b18a3d010f8f5284ca91c92"} Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.423584 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1d34f3f-6806-488b-a32b-bb13f4d5976a","Type":"ContainerStarted","Data":"e9e991cff19900f61f8ab44ca93fd0d937d98d8e7aeae2b6d5749f3c4bfc79a5"} Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.447695 4958 scope.go:117] "RemoveContainer" containerID="bb7e1a14bec434fc98662665c32939356f2960a857cd992d5d7dfde73ba2566b" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.459844 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5xfsg"] Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.467012 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5xfsg"] Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.471314 4958 scope.go:117] "RemoveContainer" containerID="333f97f3471ca0a2f21e73a42aff6a1e40ff6f587e1590e2767fad8333aa2c29" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.493519 4958 scope.go:117] "RemoveContainer" 
containerID="55eb268161e76151901074980dccaca804b8c4e7984452d9c7405e762b39abcc" Oct 08 08:24:34 crc kubenswrapper[4958]: E1008 08:24:34.493872 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55eb268161e76151901074980dccaca804b8c4e7984452d9c7405e762b39abcc\": container with ID starting with 55eb268161e76151901074980dccaca804b8c4e7984452d9c7405e762b39abcc not found: ID does not exist" containerID="55eb268161e76151901074980dccaca804b8c4e7984452d9c7405e762b39abcc" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.493902 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55eb268161e76151901074980dccaca804b8c4e7984452d9c7405e762b39abcc"} err="failed to get container status \"55eb268161e76151901074980dccaca804b8c4e7984452d9c7405e762b39abcc\": rpc error: code = NotFound desc = could not find container \"55eb268161e76151901074980dccaca804b8c4e7984452d9c7405e762b39abcc\": container with ID starting with 55eb268161e76151901074980dccaca804b8c4e7984452d9c7405e762b39abcc not found: ID does not exist" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.493922 4958 scope.go:117] "RemoveContainer" containerID="bb7e1a14bec434fc98662665c32939356f2960a857cd992d5d7dfde73ba2566b" Oct 08 08:24:34 crc kubenswrapper[4958]: E1008 08:24:34.494207 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb7e1a14bec434fc98662665c32939356f2960a857cd992d5d7dfde73ba2566b\": container with ID starting with bb7e1a14bec434fc98662665c32939356f2960a857cd992d5d7dfde73ba2566b not found: ID does not exist" containerID="bb7e1a14bec434fc98662665c32939356f2960a857cd992d5d7dfde73ba2566b" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.494232 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bb7e1a14bec434fc98662665c32939356f2960a857cd992d5d7dfde73ba2566b"} err="failed to get container status \"bb7e1a14bec434fc98662665c32939356f2960a857cd992d5d7dfde73ba2566b\": rpc error: code = NotFound desc = could not find container \"bb7e1a14bec434fc98662665c32939356f2960a857cd992d5d7dfde73ba2566b\": container with ID starting with bb7e1a14bec434fc98662665c32939356f2960a857cd992d5d7dfde73ba2566b not found: ID does not exist" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.494245 4958 scope.go:117] "RemoveContainer" containerID="333f97f3471ca0a2f21e73a42aff6a1e40ff6f587e1590e2767fad8333aa2c29" Oct 08 08:24:34 crc kubenswrapper[4958]: E1008 08:24:34.494658 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"333f97f3471ca0a2f21e73a42aff6a1e40ff6f587e1590e2767fad8333aa2c29\": container with ID starting with 333f97f3471ca0a2f21e73a42aff6a1e40ff6f587e1590e2767fad8333aa2c29 not found: ID does not exist" containerID="333f97f3471ca0a2f21e73a42aff6a1e40ff6f587e1590e2767fad8333aa2c29" Oct 08 08:24:34 crc kubenswrapper[4958]: I1008 08:24:34.494679 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"333f97f3471ca0a2f21e73a42aff6a1e40ff6f587e1590e2767fad8333aa2c29"} err="failed to get container status \"333f97f3471ca0a2f21e73a42aff6a1e40ff6f587e1590e2767fad8333aa2c29\": rpc error: code = NotFound desc = could not find container \"333f97f3471ca0a2f21e73a42aff6a1e40ff6f587e1590e2767fad8333aa2c29\": container with ID starting with 333f97f3471ca0a2f21e73a42aff6a1e40ff6f587e1590e2767fad8333aa2c29 not found: ID does not exist" Oct 08 08:24:35 crc kubenswrapper[4958]: I1008 08:24:35.590353 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" path="/var/lib/kubelet/pods/7bf7ca23-42bc-4d3b-be18-c65ac4745ee9/volumes" Oct 08 08:24:36 crc kubenswrapper[4958]: I1008 
08:24:36.463579 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1d34f3f-6806-488b-a32b-bb13f4d5976a","Type":"ContainerStarted","Data":"e5c2d5ac93c29bd8e0485cdf73542344d4795dd363b5938e9bfe3ca345c545bb"} Oct 08 08:24:36 crc kubenswrapper[4958]: I1008 08:24:36.474585 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255ef5c5-c291-4404-a27f-65c24ac06f02","Type":"ContainerStarted","Data":"22a1481cad5673effea09fe07a23977ba04df21900492630cdd19579e95dd355"} Oct 08 08:24:36 crc kubenswrapper[4958]: I1008 08:24:36.474766 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerName="ceilometer-central-agent" containerID="cri-o://c0259ed010ca24cba5c9237c2cdedf5e3cfc4f3f7e589b48fe84b5bc8c757965" gracePeriod=30 Oct 08 08:24:36 crc kubenswrapper[4958]: I1008 08:24:36.474918 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 08:24:36 crc kubenswrapper[4958]: I1008 08:24:36.475048 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerName="proxy-httpd" containerID="cri-o://22a1481cad5673effea09fe07a23977ba04df21900492630cdd19579e95dd355" gracePeriod=30 Oct 08 08:24:36 crc kubenswrapper[4958]: I1008 08:24:36.475089 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerName="ceilometer-notification-agent" containerID="cri-o://9e91cd3978e16a74373f3ae1841fb2b8cdc006158158df7559e88fec2801352d" gracePeriod=30 Oct 08 08:24:36 crc kubenswrapper[4958]: I1008 08:24:36.475159 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" 
containerName="sg-core" containerID="cri-o://2307b896dead67e293f79a6f782524a4f08581353d2b3083a48afd5b63267525" gracePeriod=30 Oct 08 08:24:36 crc kubenswrapper[4958]: I1008 08:24:36.499694 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.080934429 podStartE2EDuration="6.499673089s" podCreationTimestamp="2025-10-08 08:24:30 +0000 UTC" firstStartedPulling="2025-10-08 08:24:31.316869589 +0000 UTC m=+6614.446562190" lastFinishedPulling="2025-10-08 08:24:35.735608249 +0000 UTC m=+6618.865300850" observedRunningTime="2025-10-08 08:24:36.493587724 +0000 UTC m=+6619.623280325" watchObservedRunningTime="2025-10-08 08:24:36.499673089 +0000 UTC m=+6619.629365690" Oct 08 08:24:36 crc kubenswrapper[4958]: I1008 08:24:36.707083 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 08 08:24:37 crc kubenswrapper[4958]: I1008 08:24:37.496014 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1d34f3f-6806-488b-a32b-bb13f4d5976a","Type":"ContainerStarted","Data":"29347ebe15d2c1f3a5a8a93571d2162ae2effb7eb489c8bc8ab09e8cb5695d14"} Oct 08 08:24:37 crc kubenswrapper[4958]: I1008 08:24:37.496774 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-api" containerID="cri-o://3b39b2809c2001fb6bfa6be9616e87350c643024f7a793f85b7199085aa97309" gracePeriod=30 Oct 08 08:24:37 crc kubenswrapper[4958]: I1008 08:24:37.497524 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-listener" containerID="cri-o://29347ebe15d2c1f3a5a8a93571d2162ae2effb7eb489c8bc8ab09e8cb5695d14" gracePeriod=30 Oct 08 08:24:37 crc kubenswrapper[4958]: I1008 08:24:37.497622 4958 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/aodh-0" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-notifier" containerID="cri-o://e5c2d5ac93c29bd8e0485cdf73542344d4795dd363b5938e9bfe3ca345c545bb" gracePeriod=30 Oct 08 08:24:37 crc kubenswrapper[4958]: I1008 08:24:37.497688 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-evaluator" containerID="cri-o://e9e991cff19900f61f8ab44ca93fd0d937d98d8e7aeae2b6d5749f3c4bfc79a5" gracePeriod=30 Oct 08 08:24:37 crc kubenswrapper[4958]: I1008 08:24:37.512153 4958 generic.go:334] "Generic (PLEG): container finished" podID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerID="22a1481cad5673effea09fe07a23977ba04df21900492630cdd19579e95dd355" exitCode=0 Oct 08 08:24:37 crc kubenswrapper[4958]: I1008 08:24:37.512864 4958 generic.go:334] "Generic (PLEG): container finished" podID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerID="2307b896dead67e293f79a6f782524a4f08581353d2b3083a48afd5b63267525" exitCode=2 Oct 08 08:24:37 crc kubenswrapper[4958]: I1008 08:24:37.512244 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255ef5c5-c291-4404-a27f-65c24ac06f02","Type":"ContainerDied","Data":"22a1481cad5673effea09fe07a23977ba04df21900492630cdd19579e95dd355"} Oct 08 08:24:37 crc kubenswrapper[4958]: I1008 08:24:37.513034 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255ef5c5-c291-4404-a27f-65c24ac06f02","Type":"ContainerDied","Data":"2307b896dead67e293f79a6f782524a4f08581353d2b3083a48afd5b63267525"} Oct 08 08:24:37 crc kubenswrapper[4958]: I1008 08:24:37.513133 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255ef5c5-c291-4404-a27f-65c24ac06f02","Type":"ContainerDied","Data":"9e91cd3978e16a74373f3ae1841fb2b8cdc006158158df7559e88fec2801352d"} Oct 08 08:24:37 crc kubenswrapper[4958]: 
I1008 08:24:37.512968 4958 generic.go:334] "Generic (PLEG): container finished" podID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerID="9e91cd3978e16a74373f3ae1841fb2b8cdc006158158df7559e88fec2801352d" exitCode=0 Oct 08 08:24:38 crc kubenswrapper[4958]: I1008 08:24:38.528453 4958 generic.go:334] "Generic (PLEG): container finished" podID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerID="e5c2d5ac93c29bd8e0485cdf73542344d4795dd363b5938e9bfe3ca345c545bb" exitCode=0 Oct 08 08:24:38 crc kubenswrapper[4958]: I1008 08:24:38.528822 4958 generic.go:334] "Generic (PLEG): container finished" podID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerID="e9e991cff19900f61f8ab44ca93fd0d937d98d8e7aeae2b6d5749f3c4bfc79a5" exitCode=0 Oct 08 08:24:38 crc kubenswrapper[4958]: I1008 08:24:38.528834 4958 generic.go:334] "Generic (PLEG): container finished" podID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerID="3b39b2809c2001fb6bfa6be9616e87350c643024f7a793f85b7199085aa97309" exitCode=0 Oct 08 08:24:38 crc kubenswrapper[4958]: I1008 08:24:38.528544 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1d34f3f-6806-488b-a32b-bb13f4d5976a","Type":"ContainerDied","Data":"e5c2d5ac93c29bd8e0485cdf73542344d4795dd363b5938e9bfe3ca345c545bb"} Oct 08 08:24:38 crc kubenswrapper[4958]: I1008 08:24:38.528882 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1d34f3f-6806-488b-a32b-bb13f4d5976a","Type":"ContainerDied","Data":"e9e991cff19900f61f8ab44ca93fd0d937d98d8e7aeae2b6d5749f3c4bfc79a5"} Oct 08 08:24:38 crc kubenswrapper[4958]: I1008 08:24:38.528903 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1d34f3f-6806-488b-a32b-bb13f4d5976a","Type":"ContainerDied","Data":"3b39b2809c2001fb6bfa6be9616e87350c643024f7a793f85b7199085aa97309"} Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.518058 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.576298 4958 generic.go:334] "Generic (PLEG): container finished" podID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerID="c0259ed010ca24cba5c9237c2cdedf5e3cfc4f3f7e589b48fe84b5bc8c757965" exitCode=0 Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.576355 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255ef5c5-c291-4404-a27f-65c24ac06f02","Type":"ContainerDied","Data":"c0259ed010ca24cba5c9237c2cdedf5e3cfc4f3f7e589b48fe84b5bc8c757965"} Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.576407 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"255ef5c5-c291-4404-a27f-65c24ac06f02","Type":"ContainerDied","Data":"eb31aa694ecb8b9a063f200824b8798ccfa6cf57cfea819b345f7ce1f1d9a612"} Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.576430 4958 scope.go:117] "RemoveContainer" containerID="22a1481cad5673effea09fe07a23977ba04df21900492630cdd19579e95dd355" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.576624 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=4.709576749 podStartE2EDuration="10.576605662s" podCreationTimestamp="2025-10-08 08:24:30 +0000 UTC" firstStartedPulling="2025-10-08 08:24:31.077614304 +0000 UTC m=+6614.207306905" lastFinishedPulling="2025-10-08 08:24:36.944643217 +0000 UTC m=+6620.074335818" observedRunningTime="2025-10-08 08:24:37.52758681 +0000 UTC m=+6620.657279461" watchObservedRunningTime="2025-10-08 08:24:40.576605662 +0000 UTC m=+6623.706298263" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.576670 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.623147 4958 scope.go:117] "RemoveContainer" containerID="2307b896dead67e293f79a6f782524a4f08581353d2b3083a48afd5b63267525" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.640804 4958 scope.go:117] "RemoveContainer" containerID="9e91cd3978e16a74373f3ae1841fb2b8cdc006158158df7559e88fec2801352d" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.664186 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsw4g\" (UniqueName: \"kubernetes.io/projected/255ef5c5-c291-4404-a27f-65c24ac06f02-kube-api-access-jsw4g\") pod \"255ef5c5-c291-4404-a27f-65c24ac06f02\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.664244 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-ceilometer-tls-certs\") pod \"255ef5c5-c291-4404-a27f-65c24ac06f02\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.664314 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255ef5c5-c291-4404-a27f-65c24ac06f02-log-httpd\") pod \"255ef5c5-c291-4404-a27f-65c24ac06f02\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.664940 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/255ef5c5-c291-4404-a27f-65c24ac06f02-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "255ef5c5-c291-4404-a27f-65c24ac06f02" (UID: "255ef5c5-c291-4404-a27f-65c24ac06f02"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.665480 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-config-data\") pod \"255ef5c5-c291-4404-a27f-65c24ac06f02\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.665779 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-sg-core-conf-yaml\") pod \"255ef5c5-c291-4404-a27f-65c24ac06f02\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.665831 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-scripts\") pod \"255ef5c5-c291-4404-a27f-65c24ac06f02\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.665897 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255ef5c5-c291-4404-a27f-65c24ac06f02-run-httpd\") pod \"255ef5c5-c291-4404-a27f-65c24ac06f02\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.665933 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-combined-ca-bundle\") pod \"255ef5c5-c291-4404-a27f-65c24ac06f02\" (UID: \"255ef5c5-c291-4404-a27f-65c24ac06f02\") " Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.666504 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/255ef5c5-c291-4404-a27f-65c24ac06f02-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.666502 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/255ef5c5-c291-4404-a27f-65c24ac06f02-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "255ef5c5-c291-4404-a27f-65c24ac06f02" (UID: "255ef5c5-c291-4404-a27f-65c24ac06f02"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.670989 4958 scope.go:117] "RemoveContainer" containerID="c0259ed010ca24cba5c9237c2cdedf5e3cfc4f3f7e589b48fe84b5bc8c757965" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.684871 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/255ef5c5-c291-4404-a27f-65c24ac06f02-kube-api-access-jsw4g" (OuterVolumeSpecName: "kube-api-access-jsw4g") pod "255ef5c5-c291-4404-a27f-65c24ac06f02" (UID: "255ef5c5-c291-4404-a27f-65c24ac06f02"). InnerVolumeSpecName "kube-api-access-jsw4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.685227 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-scripts" (OuterVolumeSpecName: "scripts") pod "255ef5c5-c291-4404-a27f-65c24ac06f02" (UID: "255ef5c5-c291-4404-a27f-65c24ac06f02"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.698092 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "255ef5c5-c291-4404-a27f-65c24ac06f02" (UID: "255ef5c5-c291-4404-a27f-65c24ac06f02"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.729008 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "255ef5c5-c291-4404-a27f-65c24ac06f02" (UID: "255ef5c5-c291-4404-a27f-65c24ac06f02"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.755728 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "255ef5c5-c291-4404-a27f-65c24ac06f02" (UID: "255ef5c5-c291-4404-a27f-65c24ac06f02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.768297 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.768319 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/255ef5c5-c291-4404-a27f-65c24ac06f02-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.768330 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.768341 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsw4g\" (UniqueName: \"kubernetes.io/projected/255ef5c5-c291-4404-a27f-65c24ac06f02-kube-api-access-jsw4g\") on node 
\"crc\" DevicePath \"\"" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.768351 4958 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.768359 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.784867 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-config-data" (OuterVolumeSpecName: "config-data") pod "255ef5c5-c291-4404-a27f-65c24ac06f02" (UID: "255ef5c5-c291-4404-a27f-65c24ac06f02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.804701 4958 scope.go:117] "RemoveContainer" containerID="22a1481cad5673effea09fe07a23977ba04df21900492630cdd19579e95dd355" Oct 08 08:24:40 crc kubenswrapper[4958]: E1008 08:24:40.805185 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a1481cad5673effea09fe07a23977ba04df21900492630cdd19579e95dd355\": container with ID starting with 22a1481cad5673effea09fe07a23977ba04df21900492630cdd19579e95dd355 not found: ID does not exist" containerID="22a1481cad5673effea09fe07a23977ba04df21900492630cdd19579e95dd355" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.805235 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a1481cad5673effea09fe07a23977ba04df21900492630cdd19579e95dd355"} err="failed to get container status \"22a1481cad5673effea09fe07a23977ba04df21900492630cdd19579e95dd355\": rpc error: code = 
NotFound desc = could not find container \"22a1481cad5673effea09fe07a23977ba04df21900492630cdd19579e95dd355\": container with ID starting with 22a1481cad5673effea09fe07a23977ba04df21900492630cdd19579e95dd355 not found: ID does not exist" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.805261 4958 scope.go:117] "RemoveContainer" containerID="2307b896dead67e293f79a6f782524a4f08581353d2b3083a48afd5b63267525" Oct 08 08:24:40 crc kubenswrapper[4958]: E1008 08:24:40.805697 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2307b896dead67e293f79a6f782524a4f08581353d2b3083a48afd5b63267525\": container with ID starting with 2307b896dead67e293f79a6f782524a4f08581353d2b3083a48afd5b63267525 not found: ID does not exist" containerID="2307b896dead67e293f79a6f782524a4f08581353d2b3083a48afd5b63267525" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.805729 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2307b896dead67e293f79a6f782524a4f08581353d2b3083a48afd5b63267525"} err="failed to get container status \"2307b896dead67e293f79a6f782524a4f08581353d2b3083a48afd5b63267525\": rpc error: code = NotFound desc = could not find container \"2307b896dead67e293f79a6f782524a4f08581353d2b3083a48afd5b63267525\": container with ID starting with 2307b896dead67e293f79a6f782524a4f08581353d2b3083a48afd5b63267525 not found: ID does not exist" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.805752 4958 scope.go:117] "RemoveContainer" containerID="9e91cd3978e16a74373f3ae1841fb2b8cdc006158158df7559e88fec2801352d" Oct 08 08:24:40 crc kubenswrapper[4958]: E1008 08:24:40.805922 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e91cd3978e16a74373f3ae1841fb2b8cdc006158158df7559e88fec2801352d\": container with ID starting with 
9e91cd3978e16a74373f3ae1841fb2b8cdc006158158df7559e88fec2801352d not found: ID does not exist" containerID="9e91cd3978e16a74373f3ae1841fb2b8cdc006158158df7559e88fec2801352d" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.805942 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e91cd3978e16a74373f3ae1841fb2b8cdc006158158df7559e88fec2801352d"} err="failed to get container status \"9e91cd3978e16a74373f3ae1841fb2b8cdc006158158df7559e88fec2801352d\": rpc error: code = NotFound desc = could not find container \"9e91cd3978e16a74373f3ae1841fb2b8cdc006158158df7559e88fec2801352d\": container with ID starting with 9e91cd3978e16a74373f3ae1841fb2b8cdc006158158df7559e88fec2801352d not found: ID does not exist" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.805968 4958 scope.go:117] "RemoveContainer" containerID="c0259ed010ca24cba5c9237c2cdedf5e3cfc4f3f7e589b48fe84b5bc8c757965" Oct 08 08:24:40 crc kubenswrapper[4958]: E1008 08:24:40.806125 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0259ed010ca24cba5c9237c2cdedf5e3cfc4f3f7e589b48fe84b5bc8c757965\": container with ID starting with c0259ed010ca24cba5c9237c2cdedf5e3cfc4f3f7e589b48fe84b5bc8c757965 not found: ID does not exist" containerID="c0259ed010ca24cba5c9237c2cdedf5e3cfc4f3f7e589b48fe84b5bc8c757965" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.806139 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0259ed010ca24cba5c9237c2cdedf5e3cfc4f3f7e589b48fe84b5bc8c757965"} err="failed to get container status \"c0259ed010ca24cba5c9237c2cdedf5e3cfc4f3f7e589b48fe84b5bc8c757965\": rpc error: code = NotFound desc = could not find container \"c0259ed010ca24cba5c9237c2cdedf5e3cfc4f3f7e589b48fe84b5bc8c757965\": container with ID starting with c0259ed010ca24cba5c9237c2cdedf5e3cfc4f3f7e589b48fe84b5bc8c757965 not found: ID does not 
exist" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.870328 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255ef5c5-c291-4404-a27f-65c24ac06f02-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.920327 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.935662 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.950247 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 08 08:24:40 crc kubenswrapper[4958]: E1008 08:24:40.951097 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" containerName="registry-server" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.951136 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" containerName="registry-server" Oct 08 08:24:40 crc kubenswrapper[4958]: E1008 08:24:40.951178 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" containerName="extract-utilities" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.951192 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" containerName="extract-utilities" Oct 08 08:24:40 crc kubenswrapper[4958]: E1008 08:24:40.951218 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerName="proxy-httpd" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.951231 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerName="proxy-httpd" Oct 08 08:24:40 crc kubenswrapper[4958]: E1008 08:24:40.951260 4958 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerName="sg-core" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.951273 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerName="sg-core" Oct 08 08:24:40 crc kubenswrapper[4958]: E1008 08:24:40.951292 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" containerName="extract-content" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.951304 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" containerName="extract-content" Oct 08 08:24:40 crc kubenswrapper[4958]: E1008 08:24:40.951329 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerName="ceilometer-central-agent" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.951341 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerName="ceilometer-central-agent" Oct 08 08:24:40 crc kubenswrapper[4958]: E1008 08:24:40.951384 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerName="ceilometer-notification-agent" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.951399 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerName="ceilometer-notification-agent" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.951766 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerName="proxy-httpd" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.951796 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bf7ca23-42bc-4d3b-be18-c65ac4745ee9" containerName="registry-server" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.951824 4958 
memory_manager.go:354] "RemoveStaleState removing state" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerName="ceilometer-notification-agent" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.951848 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerName="sg-core" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.951872 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" containerName="ceilometer-central-agent" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.956804 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.958704 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.958934 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.960351 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 08:24:40 crc kubenswrapper[4958]: I1008 08:24:40.970364 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.074240 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.074461 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-scripts\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.075133 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27s42\" (UniqueName: \"kubernetes.io/projected/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-kube-api-access-27s42\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.075469 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.075627 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-log-httpd\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.075873 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.076175 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-config-data\") pod \"ceilometer-0\" (UID: 
\"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.076439 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-run-httpd\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.178162 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27s42\" (UniqueName: \"kubernetes.io/projected/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-kube-api-access-27s42\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.178543 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.178585 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-log-httpd\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.178642 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.178672 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-config-data\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.178699 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-run-httpd\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.178754 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.178787 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-scripts\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.179897 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-run-httpd\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.180347 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-log-httpd\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 
08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.182254 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.190684 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-config-data\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.191846 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.192234 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.192577 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-scripts\") pod \"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.210030 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27s42\" (UniqueName: \"kubernetes.io/projected/4d3f4216-471c-41b2-ae15-4b1be93e0d9e-kube-api-access-27s42\") pod 
\"ceilometer-0\" (UID: \"4d3f4216-471c-41b2-ae15-4b1be93e0d9e\") " pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.283854 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.593491 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="255ef5c5-c291-4404-a27f-65c24ac06f02" path="/var/lib/kubelet/pods/255ef5c5-c291-4404-a27f-65c24ac06f02/volumes" Oct 08 08:24:41 crc kubenswrapper[4958]: I1008 08:24:41.734590 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 08 08:24:41 crc kubenswrapper[4958]: W1008 08:24:41.747448 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d3f4216_471c_41b2_ae15_4b1be93e0d9e.slice/crio-9d3db9c7070be232ef32eb0984857b50c95e878084f8c3c5b987fa760c88b163 WatchSource:0}: Error finding container 9d3db9c7070be232ef32eb0984857b50c95e878084f8c3c5b987fa760c88b163: Status 404 returned error can't find the container with id 9d3db9c7070be232ef32eb0984857b50c95e878084f8c3c5b987fa760c88b163 Oct 08 08:24:42 crc kubenswrapper[4958]: I1008 08:24:42.619138 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d3f4216-471c-41b2-ae15-4b1be93e0d9e","Type":"ContainerStarted","Data":"9d3db9c7070be232ef32eb0984857b50c95e878084f8c3c5b987fa760c88b163"} Oct 08 08:24:43 crc kubenswrapper[4958]: I1008 08:24:43.638931 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d3f4216-471c-41b2-ae15-4b1be93e0d9e","Type":"ContainerStarted","Data":"a4acb1d8fd97480fc0bc21f6cbe18ceb588ec884a04e6faac3634652e1dfea9c"} Oct 08 08:24:44 crc kubenswrapper[4958]: I1008 08:24:44.651625 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4d3f4216-471c-41b2-ae15-4b1be93e0d9e","Type":"ContainerStarted","Data":"39167a65648aa644a405ba725ce8a407d224a93edbd767ed06bd0381e8150eb8"} Oct 08 08:24:45 crc kubenswrapper[4958]: I1008 08:24:45.494141 4958 scope.go:117] "RemoveContainer" containerID="bb295622c3cc249b100557163f1bc96534a129c27069f11ff8699be17efb8ac4" Oct 08 08:24:45 crc kubenswrapper[4958]: I1008 08:24:45.538003 4958 scope.go:117] "RemoveContainer" containerID="c0a7854764001634a4f783a5e28f996fd7ab7ad772016e0b964f80196ca5ced3" Oct 08 08:24:45 crc kubenswrapper[4958]: I1008 08:24:45.614374 4958 scope.go:117] "RemoveContainer" containerID="091d14067a0bf10ae12e0afdcf2678a428aa1ec0f4f033673ee8ceeb66d25343" Oct 08 08:24:45 crc kubenswrapper[4958]: I1008 08:24:45.675385 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d3f4216-471c-41b2-ae15-4b1be93e0d9e","Type":"ContainerStarted","Data":"fe7d06cd8fbedf8dc09d12a037bcb8faa4e3f0cda1e9cfccdc40c44fcf045739"} Oct 08 08:24:45 crc kubenswrapper[4958]: I1008 08:24:45.688274 4958 scope.go:117] "RemoveContainer" containerID="6df63c94af515efe1b133a31bd470dedb7637ceb741cf777158f608d33edc1e2" Oct 08 08:24:45 crc kubenswrapper[4958]: I1008 08:24:45.714529 4958 scope.go:117] "RemoveContainer" containerID="50f561a3bb3782d7dad8661353b16e498557817022b89f9a69f917e6c9a445a7" Oct 08 08:24:45 crc kubenswrapper[4958]: I1008 08:24:45.733833 4958 scope.go:117] "RemoveContainer" containerID="465f0d12f0d50aab4eada26f0aa7c4269a49dc69b63986ec81b72dd028c7fe06" Oct 08 08:24:45 crc kubenswrapper[4958]: I1008 08:24:45.774348 4958 scope.go:117] "RemoveContainer" containerID="f26ad18ce56ad72bda7c90168ba980f98fab3aa6be7be10d5f8b91fc1ee235ed" Oct 08 08:24:45 crc kubenswrapper[4958]: I1008 08:24:45.797183 4958 scope.go:117] "RemoveContainer" containerID="73f61f13c4b8b3fea109ae0994f246675e6c6e877a831c35356e341ed7b201d5" Oct 08 08:24:45 crc kubenswrapper[4958]: I1008 08:24:45.840353 4958 scope.go:117] "RemoveContainer" 
containerID="0505df006522a6a3a9ea2bf229b57de57076bc1fead80c3cde433e8c84c63aaf" Oct 08 08:24:46 crc kubenswrapper[4958]: I1008 08:24:46.059722 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-qqrbt"] Oct 08 08:24:46 crc kubenswrapper[4958]: I1008 08:24:46.066809 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-qqrbt"] Oct 08 08:24:46 crc kubenswrapper[4958]: I1008 08:24:46.696548 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d3f4216-471c-41b2-ae15-4b1be93e0d9e","Type":"ContainerStarted","Data":"0876cc46d6a40b75d8277f915a9e40bc17c5ff9fbac1ef85a5b69ab0933bbcc5"} Oct 08 08:24:46 crc kubenswrapper[4958]: I1008 08:24:46.698996 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 08 08:24:46 crc kubenswrapper[4958]: I1008 08:24:46.746593 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.253353565 podStartE2EDuration="6.746417273s" podCreationTimestamp="2025-10-08 08:24:40 +0000 UTC" firstStartedPulling="2025-10-08 08:24:41.758166353 +0000 UTC m=+6624.887858974" lastFinishedPulling="2025-10-08 08:24:46.251230081 +0000 UTC m=+6629.380922682" observedRunningTime="2025-10-08 08:24:46.738857808 +0000 UTC m=+6629.868550429" watchObservedRunningTime="2025-10-08 08:24:46.746417273 +0000 UTC m=+6629.876109894" Oct 08 08:24:47 crc kubenswrapper[4958]: I1008 08:24:47.599214 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ea6282-163f-464c-9371-4ed478535967" path="/var/lib/kubelet/pods/83ea6282-163f-464c-9371-4ed478535967/volumes" Oct 08 08:25:07 crc kubenswrapper[4958]: I1008 08:25:07.950574 4958 generic.go:334] "Generic (PLEG): container finished" podID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerID="29347ebe15d2c1f3a5a8a93571d2162ae2effb7eb489c8bc8ab09e8cb5695d14" exitCode=137 Oct 08 
08:25:07 crc kubenswrapper[4958]: I1008 08:25:07.950644 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1d34f3f-6806-488b-a32b-bb13f4d5976a","Type":"ContainerDied","Data":"29347ebe15d2c1f3a5a8a93571d2162ae2effb7eb489c8bc8ab09e8cb5695d14"} Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.107394 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.141936 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-combined-ca-bundle\") pod \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.142059 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-config-data\") pod \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.142153 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sczw4\" (UniqueName: \"kubernetes.io/projected/d1d34f3f-6806-488b-a32b-bb13f4d5976a-kube-api-access-sczw4\") pod \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.142179 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-scripts\") pod \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.158097 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-scripts" (OuterVolumeSpecName: "scripts") pod "d1d34f3f-6806-488b-a32b-bb13f4d5976a" (UID: "d1d34f3f-6806-488b-a32b-bb13f4d5976a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.169206 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d34f3f-6806-488b-a32b-bb13f4d5976a-kube-api-access-sczw4" (OuterVolumeSpecName: "kube-api-access-sczw4") pod "d1d34f3f-6806-488b-a32b-bb13f4d5976a" (UID: "d1d34f3f-6806-488b-a32b-bb13f4d5976a"). InnerVolumeSpecName "kube-api-access-sczw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.244163 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sczw4\" (UniqueName: \"kubernetes.io/projected/d1d34f3f-6806-488b-a32b-bb13f4d5976a-kube-api-access-sczw4\") on node \"crc\" DevicePath \"\"" Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.244313 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-scripts\") on node \"crc\" DevicePath \"\"" Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.293054 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-config-data" (OuterVolumeSpecName: "config-data") pod "d1d34f3f-6806-488b-a32b-bb13f4d5976a" (UID: "d1d34f3f-6806-488b-a32b-bb13f4d5976a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.344759 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1d34f3f-6806-488b-a32b-bb13f4d5976a" (UID: "d1d34f3f-6806-488b-a32b-bb13f4d5976a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.344833 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-combined-ca-bundle\") pod \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\" (UID: \"d1d34f3f-6806-488b-a32b-bb13f4d5976a\") " Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.345182 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 08:25:08 crc kubenswrapper[4958]: W1008 08:25:08.345257 4958 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d1d34f3f-6806-488b-a32b-bb13f4d5976a/volumes/kubernetes.io~secret/combined-ca-bundle Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.345272 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1d34f3f-6806-488b-a32b-bb13f4d5976a" (UID: "d1d34f3f-6806-488b-a32b-bb13f4d5976a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.446270 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d34f3f-6806-488b-a32b-bb13f4d5976a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.966152 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d1d34f3f-6806-488b-a32b-bb13f4d5976a","Type":"ContainerDied","Data":"51857a0c206eecccbe3b800e56e45da9f3ef4eff2185c5bf50cd80c40f5f8e0a"} Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.966285 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.966651 4958 scope.go:117] "RemoveContainer" containerID="29347ebe15d2c1f3a5a8a93571d2162ae2effb7eb489c8bc8ab09e8cb5695d14" Oct 08 08:25:08 crc kubenswrapper[4958]: I1008 08:25:08.993652 4958 scope.go:117] "RemoveContainer" containerID="e5c2d5ac93c29bd8e0485cdf73542344d4795dd363b5938e9bfe3ca345c545bb" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.021316 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.026141 4958 scope.go:117] "RemoveContainer" containerID="e9e991cff19900f61f8ab44ca93fd0d937d98d8e7aeae2b6d5749f3c4bfc79a5" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.054192 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.068023 4958 scope.go:117] "RemoveContainer" containerID="3b39b2809c2001fb6bfa6be9616e87350c643024f7a793f85b7199085aa97309" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.072408 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Oct 08 08:25:09 crc kubenswrapper[4958]: E1008 08:25:09.073008 4958 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-listener" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.073033 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-listener" Oct 08 08:25:09 crc kubenswrapper[4958]: E1008 08:25:09.073071 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-api" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.073101 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-api" Oct 08 08:25:09 crc kubenswrapper[4958]: E1008 08:25:09.073130 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-evaluator" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.073140 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-evaluator" Oct 08 08:25:09 crc kubenswrapper[4958]: E1008 08:25:09.073164 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-notifier" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.073172 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-notifier" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.073427 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-notifier" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.073458 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-evaluator" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.073470 4958 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-listener" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.073490 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" containerName="aodh-api" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.075812 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.083029 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.083456 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-7qbsb" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.084691 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.085004 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.085277 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.097257 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.264739 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94046b33-1212-42f7-a71e-c26cfbcf3815-config-data\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.264851 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdzlx\" (UniqueName: 
\"kubernetes.io/projected/94046b33-1212-42f7-a71e-c26cfbcf3815-kube-api-access-rdzlx\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.264884 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94046b33-1212-42f7-a71e-c26cfbcf3815-internal-tls-certs\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.264959 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94046b33-1212-42f7-a71e-c26cfbcf3815-combined-ca-bundle\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.264995 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94046b33-1212-42f7-a71e-c26cfbcf3815-scripts\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.265030 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94046b33-1212-42f7-a71e-c26cfbcf3815-public-tls-certs\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.366489 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94046b33-1212-42f7-a71e-c26cfbcf3815-combined-ca-bundle\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc 
kubenswrapper[4958]: I1008 08:25:09.366547 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94046b33-1212-42f7-a71e-c26cfbcf3815-scripts\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.366590 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94046b33-1212-42f7-a71e-c26cfbcf3815-public-tls-certs\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.366624 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94046b33-1212-42f7-a71e-c26cfbcf3815-config-data\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.366697 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdzlx\" (UniqueName: \"kubernetes.io/projected/94046b33-1212-42f7-a71e-c26cfbcf3815-kube-api-access-rdzlx\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.366728 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94046b33-1212-42f7-a71e-c26cfbcf3815-internal-tls-certs\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.371860 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94046b33-1212-42f7-a71e-c26cfbcf3815-scripts\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " 
pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.373771 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94046b33-1212-42f7-a71e-c26cfbcf3815-internal-tls-certs\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.374073 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94046b33-1212-42f7-a71e-c26cfbcf3815-config-data\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.377588 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94046b33-1212-42f7-a71e-c26cfbcf3815-combined-ca-bundle\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.382802 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94046b33-1212-42f7-a71e-c26cfbcf3815-public-tls-certs\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.394730 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdzlx\" (UniqueName: \"kubernetes.io/projected/94046b33-1212-42f7-a71e-c26cfbcf3815-kube-api-access-rdzlx\") pod \"aodh-0\" (UID: \"94046b33-1212-42f7-a71e-c26cfbcf3815\") " pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.403619 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.593463 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1d34f3f-6806-488b-a32b-bb13f4d5976a" path="/var/lib/kubelet/pods/d1d34f3f-6806-488b-a32b-bb13f4d5976a/volumes" Oct 08 08:25:09 crc kubenswrapper[4958]: I1008 08:25:09.974231 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 08 08:25:11 crc kubenswrapper[4958]: I1008 08:25:11.006397 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"94046b33-1212-42f7-a71e-c26cfbcf3815","Type":"ContainerStarted","Data":"c41c289e970c1a3708b71775edd6204d6ea6dd9a4a62413110427f77d57d0d19"} Oct 08 08:25:11 crc kubenswrapper[4958]: I1008 08:25:11.006649 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"94046b33-1212-42f7-a71e-c26cfbcf3815","Type":"ContainerStarted","Data":"e957168dd51e7ad6f31d84450ed2a700d132fbaad73711f66f28375270c50d60"} Oct 08 08:25:11 crc kubenswrapper[4958]: I1008 08:25:11.308958 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 08 08:25:12 crc kubenswrapper[4958]: I1008 08:25:12.040198 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"94046b33-1212-42f7-a71e-c26cfbcf3815","Type":"ContainerStarted","Data":"eb5d1800a486cd70fb5b98949479795f8364a498520cadb3f369bccc8eb62bc9"} Oct 08 08:25:13 crc kubenswrapper[4958]: I1008 08:25:13.054607 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"94046b33-1212-42f7-a71e-c26cfbcf3815","Type":"ContainerStarted","Data":"28368b1f5eebd7b7b34c270dc700433c3da93e9d394825402b54d2305c65c358"} Oct 08 08:25:13 crc kubenswrapper[4958]: I1008 08:25:13.055167 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"94046b33-1212-42f7-a71e-c26cfbcf3815","Type":"ContainerStarted","Data":"77981c959069d411b8af9a68cb8f346a2e23dc94e07a5ba910f6a0861e316fdb"} Oct 08 08:25:13 crc kubenswrapper[4958]: I1008 08:25:13.085683 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.338922518 podStartE2EDuration="4.085667525s" podCreationTimestamp="2025-10-08 08:25:09 +0000 UTC" firstStartedPulling="2025-10-08 08:25:09.984815286 +0000 UTC m=+6653.114507887" lastFinishedPulling="2025-10-08 08:25:12.731560293 +0000 UTC m=+6655.861252894" observedRunningTime="2025-10-08 08:25:13.081294046 +0000 UTC m=+6656.210986657" watchObservedRunningTime="2025-10-08 08:25:13.085667525 +0000 UTC m=+6656.215360126" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.025713 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-784cb9b9cc-j8xzf"] Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.030417 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.038728 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784cb9b9cc-j8xzf"] Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.043429 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.104928 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-config\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.105078 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-dns-svc\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.105136 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-ovsdbserver-sb\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.105182 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7gbk\" (UniqueName: \"kubernetes.io/projected/010eab6f-32d7-4e16-aa44-197ef320a63f-kube-api-access-r7gbk\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " 
pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.105323 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-ovsdbserver-nb\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.105383 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-openstack-cell1\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.208050 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-ovsdbserver-nb\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.208210 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-openstack-cell1\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.208426 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-config\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " 
pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.208535 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-dns-svc\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.208635 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-ovsdbserver-sb\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.208720 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7gbk\" (UniqueName: \"kubernetes.io/projected/010eab6f-32d7-4e16-aa44-197ef320a63f-kube-api-access-r7gbk\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.208922 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-openstack-cell1\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.208921 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-ovsdbserver-nb\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc 
kubenswrapper[4958]: I1008 08:25:20.209434 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-dns-svc\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.209533 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-config\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.209982 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-ovsdbserver-sb\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.232639 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7gbk\" (UniqueName: \"kubernetes.io/projected/010eab6f-32d7-4e16-aa44-197ef320a63f-kube-api-access-r7gbk\") pod \"dnsmasq-dns-784cb9b9cc-j8xzf\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.349559 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:20 crc kubenswrapper[4958]: I1008 08:25:20.843258 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784cb9b9cc-j8xzf"] Oct 08 08:25:21 crc kubenswrapper[4958]: I1008 08:25:21.146108 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" event={"ID":"010eab6f-32d7-4e16-aa44-197ef320a63f","Type":"ContainerStarted","Data":"d8d4cf3588b43e1a8fa30466c5355babf5e63af3aacbd27a1422013f725e2b69"} Oct 08 08:25:21 crc kubenswrapper[4958]: I1008 08:25:21.146218 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" event={"ID":"010eab6f-32d7-4e16-aa44-197ef320a63f","Type":"ContainerStarted","Data":"9bf6c952dd2e4b774e7f4125a76ebc318db1717b4b46810d30187a006bcd14ad"} Oct 08 08:25:22 crc kubenswrapper[4958]: I1008 08:25:22.157868 4958 generic.go:334] "Generic (PLEG): container finished" podID="010eab6f-32d7-4e16-aa44-197ef320a63f" containerID="d8d4cf3588b43e1a8fa30466c5355babf5e63af3aacbd27a1422013f725e2b69" exitCode=0 Oct 08 08:25:22 crc kubenswrapper[4958]: I1008 08:25:22.157978 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" event={"ID":"010eab6f-32d7-4e16-aa44-197ef320a63f","Type":"ContainerDied","Data":"d8d4cf3588b43e1a8fa30466c5355babf5e63af3aacbd27a1422013f725e2b69"} Oct 08 08:25:22 crc kubenswrapper[4958]: I1008 08:25:22.158212 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" event={"ID":"010eab6f-32d7-4e16-aa44-197ef320a63f","Type":"ContainerStarted","Data":"eb5019af0825e1b36707710b56c67a5a56ab0007f03f198a46b26f54fca20037"} Oct 08 08:25:22 crc kubenswrapper[4958]: I1008 08:25:22.158406 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:22 crc kubenswrapper[4958]: I1008 08:25:22.193447 4958 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" podStartSLOduration=3.193431262 podStartE2EDuration="3.193431262s" podCreationTimestamp="2025-10-08 08:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:25:22.18637324 +0000 UTC m=+6665.316065841" watchObservedRunningTime="2025-10-08 08:25:22.193431262 +0000 UTC m=+6665.323123863" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.351283 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.454584 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78f6dfd77-6hg6s"] Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.454796 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" podUID="c47e8a4d-fe22-4a8f-9959-750046c2dbff" containerName="dnsmasq-dns" containerID="cri-o://c655b62ff67368ed6a10b1f2951965e9441fc519ab2f4a368e1fcedc59abde6d" gracePeriod=10 Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.627005 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-599688d56f-pv87c"] Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.644023 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.649070 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-599688d56f-pv87c"] Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.747977 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e1183c9-43a6-4761-96ee-e8edd06a074b-dns-svc\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.748116 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/8e1183c9-43a6-4761-96ee-e8edd06a074b-openstack-cell1\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.748156 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e1183c9-43a6-4761-96ee-e8edd06a074b-config\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.749173 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e1183c9-43a6-4761-96ee-e8edd06a074b-ovsdbserver-nb\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.749550 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-9q8p9\" (UniqueName: \"kubernetes.io/projected/8e1183c9-43a6-4761-96ee-e8edd06a074b-kube-api-access-9q8p9\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.750111 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e1183c9-43a6-4761-96ee-e8edd06a074b-ovsdbserver-sb\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.852830 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e1183c9-43a6-4761-96ee-e8edd06a074b-dns-svc\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.852902 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/8e1183c9-43a6-4761-96ee-e8edd06a074b-openstack-cell1\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.853673 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e1183c9-43a6-4761-96ee-e8edd06a074b-dns-svc\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.853849 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: 
\"kubernetes.io/configmap/8e1183c9-43a6-4761-96ee-e8edd06a074b-openstack-cell1\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.853992 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e1183c9-43a6-4761-96ee-e8edd06a074b-config\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.854026 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e1183c9-43a6-4761-96ee-e8edd06a074b-ovsdbserver-nb\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.854118 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q8p9\" (UniqueName: \"kubernetes.io/projected/8e1183c9-43a6-4761-96ee-e8edd06a074b-kube-api-access-9q8p9\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.854282 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e1183c9-43a6-4761-96ee-e8edd06a074b-ovsdbserver-sb\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.854553 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8e1183c9-43a6-4761-96ee-e8edd06a074b-config\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.855053 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e1183c9-43a6-4761-96ee-e8edd06a074b-ovsdbserver-sb\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.855128 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e1183c9-43a6-4761-96ee-e8edd06a074b-ovsdbserver-nb\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.882189 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q8p9\" (UniqueName: \"kubernetes.io/projected/8e1183c9-43a6-4761-96ee-e8edd06a074b-kube-api-access-9q8p9\") pod \"dnsmasq-dns-599688d56f-pv87c\" (UID: \"8e1183c9-43a6-4761-96ee-e8edd06a074b\") " pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:30 crc kubenswrapper[4958]: I1008 08:25:30.978905 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.088778 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.165424 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdrvw\" (UniqueName: \"kubernetes.io/projected/c47e8a4d-fe22-4a8f-9959-750046c2dbff-kube-api-access-qdrvw\") pod \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.165524 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-config\") pod \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.165605 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-dns-svc\") pod \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.165632 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-ovsdbserver-nb\") pod \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.165672 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-ovsdbserver-sb\") pod \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\" (UID: \"c47e8a4d-fe22-4a8f-9959-750046c2dbff\") " Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.175577 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c47e8a4d-fe22-4a8f-9959-750046c2dbff-kube-api-access-qdrvw" (OuterVolumeSpecName: "kube-api-access-qdrvw") pod "c47e8a4d-fe22-4a8f-9959-750046c2dbff" (UID: "c47e8a4d-fe22-4a8f-9959-750046c2dbff"). InnerVolumeSpecName "kube-api-access-qdrvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.225362 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-config" (OuterVolumeSpecName: "config") pod "c47e8a4d-fe22-4a8f-9959-750046c2dbff" (UID: "c47e8a4d-fe22-4a8f-9959-750046c2dbff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.230688 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c47e8a4d-fe22-4a8f-9959-750046c2dbff" (UID: "c47e8a4d-fe22-4a8f-9959-750046c2dbff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.239073 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c47e8a4d-fe22-4a8f-9959-750046c2dbff" (UID: "c47e8a4d-fe22-4a8f-9959-750046c2dbff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.259325 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c47e8a4d-fe22-4a8f-9959-750046c2dbff" (UID: "c47e8a4d-fe22-4a8f-9959-750046c2dbff"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.266975 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdrvw\" (UniqueName: \"kubernetes.io/projected/c47e8a4d-fe22-4a8f-9959-750046c2dbff-kube-api-access-qdrvw\") on node \"crc\" DevicePath \"\"" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.267002 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.267011 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.267020 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.267029 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c47e8a4d-fe22-4a8f-9959-750046c2dbff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.273748 4958 generic.go:334] "Generic (PLEG): container finished" podID="c47e8a4d-fe22-4a8f-9959-750046c2dbff" containerID="c655b62ff67368ed6a10b1f2951965e9441fc519ab2f4a368e1fcedc59abde6d" exitCode=0 Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.273794 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" event={"ID":"c47e8a4d-fe22-4a8f-9959-750046c2dbff","Type":"ContainerDied","Data":"c655b62ff67368ed6a10b1f2951965e9441fc519ab2f4a368e1fcedc59abde6d"} Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 
08:25:31.273819 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" event={"ID":"c47e8a4d-fe22-4a8f-9959-750046c2dbff","Type":"ContainerDied","Data":"1c9b77b7ca9a8d996a4e8a9055e6bba0a4f1934e35b736bfeb9c67aaa252c835"} Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.273835 4958 scope.go:117] "RemoveContainer" containerID="c655b62ff67368ed6a10b1f2951965e9441fc519ab2f4a368e1fcedc59abde6d" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.273984 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78f6dfd77-6hg6s" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.309183 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78f6dfd77-6hg6s"] Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.312415 4958 scope.go:117] "RemoveContainer" containerID="49929e8edbaf7bf8ecd2efb44c81eeaf18bc5d8ff7e6025bca12570d9dfd5f3f" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.316944 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78f6dfd77-6hg6s"] Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.334497 4958 scope.go:117] "RemoveContainer" containerID="c655b62ff67368ed6a10b1f2951965e9441fc519ab2f4a368e1fcedc59abde6d" Oct 08 08:25:31 crc kubenswrapper[4958]: E1008 08:25:31.335292 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c655b62ff67368ed6a10b1f2951965e9441fc519ab2f4a368e1fcedc59abde6d\": container with ID starting with c655b62ff67368ed6a10b1f2951965e9441fc519ab2f4a368e1fcedc59abde6d not found: ID does not exist" containerID="c655b62ff67368ed6a10b1f2951965e9441fc519ab2f4a368e1fcedc59abde6d" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.335337 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c655b62ff67368ed6a10b1f2951965e9441fc519ab2f4a368e1fcedc59abde6d"} err="failed to get container status \"c655b62ff67368ed6a10b1f2951965e9441fc519ab2f4a368e1fcedc59abde6d\": rpc error: code = NotFound desc = could not find container \"c655b62ff67368ed6a10b1f2951965e9441fc519ab2f4a368e1fcedc59abde6d\": container with ID starting with c655b62ff67368ed6a10b1f2951965e9441fc519ab2f4a368e1fcedc59abde6d not found: ID does not exist" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.335367 4958 scope.go:117] "RemoveContainer" containerID="49929e8edbaf7bf8ecd2efb44c81eeaf18bc5d8ff7e6025bca12570d9dfd5f3f" Oct 08 08:25:31 crc kubenswrapper[4958]: E1008 08:25:31.337300 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49929e8edbaf7bf8ecd2efb44c81eeaf18bc5d8ff7e6025bca12570d9dfd5f3f\": container with ID starting with 49929e8edbaf7bf8ecd2efb44c81eeaf18bc5d8ff7e6025bca12570d9dfd5f3f not found: ID does not exist" containerID="49929e8edbaf7bf8ecd2efb44c81eeaf18bc5d8ff7e6025bca12570d9dfd5f3f" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.337328 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49929e8edbaf7bf8ecd2efb44c81eeaf18bc5d8ff7e6025bca12570d9dfd5f3f"} err="failed to get container status \"49929e8edbaf7bf8ecd2efb44c81eeaf18bc5d8ff7e6025bca12570d9dfd5f3f\": rpc error: code = NotFound desc = could not find container \"49929e8edbaf7bf8ecd2efb44c81eeaf18bc5d8ff7e6025bca12570d9dfd5f3f\": container with ID starting with 49929e8edbaf7bf8ecd2efb44c81eeaf18bc5d8ff7e6025bca12570d9dfd5f3f not found: ID does not exist" Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.490842 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-599688d56f-pv87c"] Oct 08 08:25:31 crc kubenswrapper[4958]: I1008 08:25:31.604577 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c47e8a4d-fe22-4a8f-9959-750046c2dbff" path="/var/lib/kubelet/pods/c47e8a4d-fe22-4a8f-9959-750046c2dbff/volumes" Oct 08 08:25:32 crc kubenswrapper[4958]: I1008 08:25:32.283342 4958 generic.go:334] "Generic (PLEG): container finished" podID="8e1183c9-43a6-4761-96ee-e8edd06a074b" containerID="1ccc0a91e341fc146a922506b4a22ec0e9caca19f33236f5fe1d5dfbf2877605" exitCode=0 Oct 08 08:25:32 crc kubenswrapper[4958]: I1008 08:25:32.283530 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-599688d56f-pv87c" event={"ID":"8e1183c9-43a6-4761-96ee-e8edd06a074b","Type":"ContainerDied","Data":"1ccc0a91e341fc146a922506b4a22ec0e9caca19f33236f5fe1d5dfbf2877605"} Oct 08 08:25:32 crc kubenswrapper[4958]: I1008 08:25:32.283580 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-599688d56f-pv87c" event={"ID":"8e1183c9-43a6-4761-96ee-e8edd06a074b","Type":"ContainerStarted","Data":"d0caf39e15606f9ba81eb6556f122c79c7d1432492286b5451670f3a2e97b4df"} Oct 08 08:25:33 crc kubenswrapper[4958]: I1008 08:25:33.313259 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-599688d56f-pv87c" event={"ID":"8e1183c9-43a6-4761-96ee-e8edd06a074b","Type":"ContainerStarted","Data":"e1ddc0ccc4ad47a62ede21f2c3f951e1e6b11b9b318fde517039776783d55d8e"} Oct 08 08:25:33 crc kubenswrapper[4958]: I1008 08:25:33.336985 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-599688d56f-pv87c" podStartSLOduration=3.336969207 podStartE2EDuration="3.336969207s" podCreationTimestamp="2025-10-08 08:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:25:33.335411305 +0000 UTC m=+6676.465103966" watchObservedRunningTime="2025-10-08 08:25:33.336969207 +0000 UTC m=+6676.466661808" Oct 08 08:25:34 crc kubenswrapper[4958]: I1008 08:25:34.323183 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:36 crc kubenswrapper[4958]: I1008 08:25:36.844923 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:25:36 crc kubenswrapper[4958]: I1008 08:25:36.845524 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:25:40 crc kubenswrapper[4958]: I1008 08:25:40.981333 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-599688d56f-pv87c" Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.075435 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784cb9b9cc-j8xzf"] Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.075921 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" podUID="010eab6f-32d7-4e16-aa44-197ef320a63f" containerName="dnsmasq-dns" containerID="cri-o://eb5019af0825e1b36707710b56c67a5a56ab0007f03f198a46b26f54fca20037" gracePeriod=10 Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.416127 4958 generic.go:334] "Generic (PLEG): container finished" podID="010eab6f-32d7-4e16-aa44-197ef320a63f" containerID="eb5019af0825e1b36707710b56c67a5a56ab0007f03f198a46b26f54fca20037" exitCode=0 Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.416415 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" 
event={"ID":"010eab6f-32d7-4e16-aa44-197ef320a63f","Type":"ContainerDied","Data":"eb5019af0825e1b36707710b56c67a5a56ab0007f03f198a46b26f54fca20037"} Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.629389 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.760311 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-openstack-cell1\") pod \"010eab6f-32d7-4e16-aa44-197ef320a63f\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.760437 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-config\") pod \"010eab6f-32d7-4e16-aa44-197ef320a63f\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.760503 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-dns-svc\") pod \"010eab6f-32d7-4e16-aa44-197ef320a63f\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.760534 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-ovsdbserver-sb\") pod \"010eab6f-32d7-4e16-aa44-197ef320a63f\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.760573 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7gbk\" (UniqueName: \"kubernetes.io/projected/010eab6f-32d7-4e16-aa44-197ef320a63f-kube-api-access-r7gbk\") 
pod \"010eab6f-32d7-4e16-aa44-197ef320a63f\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.760594 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-ovsdbserver-nb\") pod \"010eab6f-32d7-4e16-aa44-197ef320a63f\" (UID: \"010eab6f-32d7-4e16-aa44-197ef320a63f\") " Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.784991 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/010eab6f-32d7-4e16-aa44-197ef320a63f-kube-api-access-r7gbk" (OuterVolumeSpecName: "kube-api-access-r7gbk") pod "010eab6f-32d7-4e16-aa44-197ef320a63f" (UID: "010eab6f-32d7-4e16-aa44-197ef320a63f"). InnerVolumeSpecName "kube-api-access-r7gbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.815742 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "010eab6f-32d7-4e16-aa44-197ef320a63f" (UID: "010eab6f-32d7-4e16-aa44-197ef320a63f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.821156 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "010eab6f-32d7-4e16-aa44-197ef320a63f" (UID: "010eab6f-32d7-4e16-aa44-197ef320a63f"). InnerVolumeSpecName "openstack-cell1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.828021 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "010eab6f-32d7-4e16-aa44-197ef320a63f" (UID: "010eab6f-32d7-4e16-aa44-197ef320a63f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.834153 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-config" (OuterVolumeSpecName: "config") pod "010eab6f-32d7-4e16-aa44-197ef320a63f" (UID: "010eab6f-32d7-4e16-aa44-197ef320a63f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.840891 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "010eab6f-32d7-4e16-aa44-197ef320a63f" (UID: "010eab6f-32d7-4e16-aa44-197ef320a63f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.864020 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-config\") on node \"crc\" DevicePath \"\"" Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.864389 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.864403 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.864423 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7gbk\" (UniqueName: \"kubernetes.io/projected/010eab6f-32d7-4e16-aa44-197ef320a63f-kube-api-access-r7gbk\") on node \"crc\" DevicePath \"\"" Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.864436 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 08 08:25:41 crc kubenswrapper[4958]: I1008 08:25:41.864450 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/010eab6f-32d7-4e16-aa44-197ef320a63f-openstack-cell1\") on node \"crc\" DevicePath \"\"" Oct 08 08:25:42 crc kubenswrapper[4958]: I1008 08:25:42.428696 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" event={"ID":"010eab6f-32d7-4e16-aa44-197ef320a63f","Type":"ContainerDied","Data":"9bf6c952dd2e4b774e7f4125a76ebc318db1717b4b46810d30187a006bcd14ad"} Oct 08 08:25:42 crc 
kubenswrapper[4958]: I1008 08:25:42.428759 4958 scope.go:117] "RemoveContainer" containerID="eb5019af0825e1b36707710b56c67a5a56ab0007f03f198a46b26f54fca20037" Oct 08 08:25:42 crc kubenswrapper[4958]: I1008 08:25:42.428969 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784cb9b9cc-j8xzf" Oct 08 08:25:42 crc kubenswrapper[4958]: I1008 08:25:42.460758 4958 scope.go:117] "RemoveContainer" containerID="d8d4cf3588b43e1a8fa30466c5355babf5e63af3aacbd27a1422013f725e2b69" Oct 08 08:25:42 crc kubenswrapper[4958]: I1008 08:25:42.483718 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784cb9b9cc-j8xzf"] Oct 08 08:25:42 crc kubenswrapper[4958]: I1008 08:25:42.522658 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-784cb9b9cc-j8xzf"] Oct 08 08:25:43 crc kubenswrapper[4958]: I1008 08:25:43.599906 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="010eab6f-32d7-4e16-aa44-197ef320a63f" path="/var/lib/kubelet/pods/010eab6f-32d7-4e16-aa44-197ef320a63f/volumes" Oct 08 08:25:46 crc kubenswrapper[4958]: I1008 08:25:46.225632 4958 scope.go:117] "RemoveContainer" containerID="9b7e97aa01d8c1a92c538a6c98dfaf8ff151b4c1e66f278e23ffd09849b72a32" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.046992 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp"] Oct 08 08:25:52 crc kubenswrapper[4958]: E1008 08:25:52.048026 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47e8a4d-fe22-4a8f-9959-750046c2dbff" containerName="dnsmasq-dns" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.048042 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47e8a4d-fe22-4a8f-9959-750046c2dbff" containerName="dnsmasq-dns" Oct 08 08:25:52 crc kubenswrapper[4958]: E1008 08:25:52.048069 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c47e8a4d-fe22-4a8f-9959-750046c2dbff" containerName="init" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.048076 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47e8a4d-fe22-4a8f-9959-750046c2dbff" containerName="init" Oct 08 08:25:52 crc kubenswrapper[4958]: E1008 08:25:52.048089 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="010eab6f-32d7-4e16-aa44-197ef320a63f" containerName="init" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.048098 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="010eab6f-32d7-4e16-aa44-197ef320a63f" containerName="init" Oct 08 08:25:52 crc kubenswrapper[4958]: E1008 08:25:52.048108 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="010eab6f-32d7-4e16-aa44-197ef320a63f" containerName="dnsmasq-dns" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.048115 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="010eab6f-32d7-4e16-aa44-197ef320a63f" containerName="dnsmasq-dns" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.048383 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47e8a4d-fe22-4a8f-9959-750046c2dbff" containerName="dnsmasq-dns" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.048406 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="010eab6f-32d7-4e16-aa44-197ef320a63f" containerName="dnsmasq-dns" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.050516 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.053046 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.053533 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.054265 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.056238 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.066168 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp"] Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.229380 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqd22\" (UniqueName: \"kubernetes.io/projected/c58f658e-42ed-4353-9aad-b3042bfaf65f-kube-api-access-kqd22\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.229473 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:25:52 crc kubenswrapper[4958]: 
I1008 08:25:52.229986 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.230067 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.333088 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.333153 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.333384 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kqd22\" (UniqueName: \"kubernetes.io/projected/c58f658e-42ed-4353-9aad-b3042bfaf65f-kube-api-access-kqd22\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.333449 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.341853 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.341986 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.343332 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-pre-adoption-validation-combined-ca-bundle\") 
pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.386813 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqd22\" (UniqueName: \"kubernetes.io/projected/c58f658e-42ed-4353-9aad-b3042bfaf65f-kube-api-access-kqd22\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:25:52 crc kubenswrapper[4958]: I1008 08:25:52.387393 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:25:53 crc kubenswrapper[4958]: I1008 08:25:53.045301 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp"] Oct 08 08:25:53 crc kubenswrapper[4958]: W1008 08:25:53.057897 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc58f658e_42ed_4353_9aad_b3042bfaf65f.slice/crio-a25444ed4d6e5a3c59023d03040b40e2a909d9b1b0c2a5faf366a0aa4d151533 WatchSource:0}: Error finding container a25444ed4d6e5a3c59023d03040b40e2a909d9b1b0c2a5faf366a0aa4d151533: Status 404 returned error can't find the container with id a25444ed4d6e5a3c59023d03040b40e2a909d9b1b0c2a5faf366a0aa4d151533 Oct 08 08:25:53 crc kubenswrapper[4958]: I1008 08:25:53.574646 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" event={"ID":"c58f658e-42ed-4353-9aad-b3042bfaf65f","Type":"ContainerStarted","Data":"a25444ed4d6e5a3c59023d03040b40e2a909d9b1b0c2a5faf366a0aa4d151533"} Oct 08 08:26:03 crc 
kubenswrapper[4958]: I1008 08:26:03.720772 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" event={"ID":"c58f658e-42ed-4353-9aad-b3042bfaf65f","Type":"ContainerStarted","Data":"f356108ece92c6014319e50fb8cb393fd1ba31122f62c3a2508192f63096833b"} Oct 08 08:26:03 crc kubenswrapper[4958]: I1008 08:26:03.746172 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" podStartSLOduration=1.4345339959999999 podStartE2EDuration="11.746154591s" podCreationTimestamp="2025-10-08 08:25:52 +0000 UTC" firstStartedPulling="2025-10-08 08:25:53.061418989 +0000 UTC m=+6696.191111600" lastFinishedPulling="2025-10-08 08:26:03.373039564 +0000 UTC m=+6706.502732195" observedRunningTime="2025-10-08 08:26:03.745205116 +0000 UTC m=+6706.874897717" watchObservedRunningTime="2025-10-08 08:26:03.746154591 +0000 UTC m=+6706.875847192" Oct 08 08:26:06 crc kubenswrapper[4958]: I1008 08:26:06.844927 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:26:06 crc kubenswrapper[4958]: I1008 08:26:06.846278 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:26:18 crc kubenswrapper[4958]: I1008 08:26:18.047268 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-84wfd"] Oct 08 08:26:18 crc kubenswrapper[4958]: I1008 08:26:18.055001 4958 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/octavia-db-create-84wfd"] Oct 08 08:26:18 crc kubenswrapper[4958]: I1008 08:26:18.917987 4958 generic.go:334] "Generic (PLEG): container finished" podID="c58f658e-42ed-4353-9aad-b3042bfaf65f" containerID="f356108ece92c6014319e50fb8cb393fd1ba31122f62c3a2508192f63096833b" exitCode=0 Oct 08 08:26:18 crc kubenswrapper[4958]: I1008 08:26:18.918041 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" event={"ID":"c58f658e-42ed-4353-9aad-b3042bfaf65f","Type":"ContainerDied","Data":"f356108ece92c6014319e50fb8cb393fd1ba31122f62c3a2508192f63096833b"} Oct 08 08:26:19 crc kubenswrapper[4958]: I1008 08:26:19.622539 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dcfd6f7-4693-4af1-891f-f51707e488d3" path="/var/lib/kubelet/pods/9dcfd6f7-4693-4af1-891f-f51707e488d3/volumes" Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.554013 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.650270 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-inventory\") pod \"c58f658e-42ed-4353-9aad-b3042bfaf65f\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.650392 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-pre-adoption-validation-combined-ca-bundle\") pod \"c58f658e-42ed-4353-9aad-b3042bfaf65f\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.650463 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-ssh-key\") pod \"c58f658e-42ed-4353-9aad-b3042bfaf65f\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.650664 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqd22\" (UniqueName: \"kubernetes.io/projected/c58f658e-42ed-4353-9aad-b3042bfaf65f-kube-api-access-kqd22\") pod \"c58f658e-42ed-4353-9aad-b3042bfaf65f\" (UID: \"c58f658e-42ed-4353-9aad-b3042bfaf65f\") " Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.657543 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "c58f658e-42ed-4353-9aad-b3042bfaf65f" (UID: "c58f658e-42ed-4353-9aad-b3042bfaf65f"). 
InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.660205 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c58f658e-42ed-4353-9aad-b3042bfaf65f-kube-api-access-kqd22" (OuterVolumeSpecName: "kube-api-access-kqd22") pod "c58f658e-42ed-4353-9aad-b3042bfaf65f" (UID: "c58f658e-42ed-4353-9aad-b3042bfaf65f"). InnerVolumeSpecName "kube-api-access-kqd22". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.695120 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c58f658e-42ed-4353-9aad-b3042bfaf65f" (UID: "c58f658e-42ed-4353-9aad-b3042bfaf65f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.701245 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-inventory" (OuterVolumeSpecName: "inventory") pod "c58f658e-42ed-4353-9aad-b3042bfaf65f" (UID: "c58f658e-42ed-4353-9aad-b3042bfaf65f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.763792 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.763822 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqd22\" (UniqueName: \"kubernetes.io/projected/c58f658e-42ed-4353-9aad-b3042bfaf65f-kube-api-access-kqd22\") on node \"crc\" DevicePath \"\"" Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.763831 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.763840 4958 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58f658e-42ed-4353-9aad-b3042bfaf65f-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.951075 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.951070 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp" event={"ID":"c58f658e-42ed-4353-9aad-b3042bfaf65f","Type":"ContainerDied","Data":"a25444ed4d6e5a3c59023d03040b40e2a909d9b1b0c2a5faf366a0aa4d151533"} Oct 08 08:26:20 crc kubenswrapper[4958]: I1008 08:26:20.951244 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a25444ed4d6e5a3c59023d03040b40e2a909d9b1b0c2a5faf366a0aa4d151533" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.741052 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v"] Oct 08 08:26:25 crc kubenswrapper[4958]: E1008 08:26:25.742110 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58f658e-42ed-4353-9aad-b3042bfaf65f" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.742128 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58f658e-42ed-4353-9aad-b3042bfaf65f" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.742391 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c58f658e-42ed-4353-9aad-b3042bfaf65f" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.743397 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.745224 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.745605 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.747387 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.747648 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.758097 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v"] Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.805432 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.805491 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.805630 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.805700 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqt5p\" (UniqueName: \"kubernetes.io/projected/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-kube-api-access-rqt5p\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.907917 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.908093 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.908156 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqt5p\" (UniqueName: \"kubernetes.io/projected/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-kube-api-access-rqt5p\") pod 
\"tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.908348 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.915896 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.916330 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:26:25 crc kubenswrapper[4958]: I1008 08:26:25.920632 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:26:25 crc kubenswrapper[4958]: 
I1008 08:26:25.932657 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqt5p\" (UniqueName: \"kubernetes.io/projected/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-kube-api-access-rqt5p\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:26:26 crc kubenswrapper[4958]: I1008 08:26:26.088565 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:26:26 crc kubenswrapper[4958]: I1008 08:26:26.687720 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v"] Oct 08 08:26:27 crc kubenswrapper[4958]: I1008 08:26:27.052342 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" event={"ID":"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b","Type":"ContainerStarted","Data":"1983eb61a58df7568132ab06570222a620d716fc12642210961234831294a30d"} Oct 08 08:26:28 crc kubenswrapper[4958]: I1008 08:26:28.070963 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" event={"ID":"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b","Type":"ContainerStarted","Data":"0a679e4b647d138299f4a9f7ba30dccbdaab4dd58a494512d9825a90db2d3e84"} Oct 08 08:26:28 crc kubenswrapper[4958]: I1008 08:26:28.099377 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" podStartSLOduration=2.65152128 podStartE2EDuration="3.099346536s" podCreationTimestamp="2025-10-08 08:26:25 +0000 UTC" firstStartedPulling="2025-10-08 08:26:26.677228234 +0000 UTC m=+6729.806920835" lastFinishedPulling="2025-10-08 08:26:27.12505347 +0000 UTC m=+6730.254746091" observedRunningTime="2025-10-08 
08:26:28.097743052 +0000 UTC m=+6731.227435683" watchObservedRunningTime="2025-10-08 08:26:28.099346536 +0000 UTC m=+6731.229039177" Oct 08 08:26:29 crc kubenswrapper[4958]: I1008 08:26:29.032153 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-d938-account-create-ddspb"] Oct 08 08:26:29 crc kubenswrapper[4958]: I1008 08:26:29.042449 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-d938-account-create-ddspb"] Oct 08 08:26:29 crc kubenswrapper[4958]: I1008 08:26:29.587744 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74895726-5db5-4a55-b25a-4c39bac9e3a9" path="/var/lib/kubelet/pods/74895726-5db5-4a55-b25a-4c39bac9e3a9/volumes" Oct 08 08:26:35 crc kubenswrapper[4958]: I1008 08:26:35.046575 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-ck6fd"] Oct 08 08:26:35 crc kubenswrapper[4958]: I1008 08:26:35.056426 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-ck6fd"] Oct 08 08:26:35 crc kubenswrapper[4958]: I1008 08:26:35.592063 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="383a0658-d336-4643-9db4-ee4d3643a7ad" path="/var/lib/kubelet/pods/383a0658-d336-4643-9db4-ee4d3643a7ad/volumes" Oct 08 08:26:36 crc kubenswrapper[4958]: I1008 08:26:36.845049 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:26:36 crc kubenswrapper[4958]: I1008 08:26:36.845482 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:26:36 crc kubenswrapper[4958]: I1008 08:26:36.845547 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 08:26:36 crc kubenswrapper[4958]: I1008 08:26:36.847000 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 08:26:36 crc kubenswrapper[4958]: I1008 08:26:36.847099 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" gracePeriod=600 Oct 08 08:26:36 crc kubenswrapper[4958]: E1008 08:26:36.988846 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:26:37 crc kubenswrapper[4958]: I1008 08:26:37.178278 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" exitCode=0 Oct 08 08:26:37 crc kubenswrapper[4958]: I1008 08:26:37.178328 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342"} Oct 08 08:26:37 crc kubenswrapper[4958]: I1008 08:26:37.178367 4958 scope.go:117] "RemoveContainer" containerID="871b2ddb043bf24b6fc601ecc25cc84c3454c68175ee8a91d681a6cd3aa935d5" Oct 08 08:26:37 crc kubenswrapper[4958]: I1008 08:26:37.179191 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:26:37 crc kubenswrapper[4958]: E1008 08:26:37.179835 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:26:46 crc kubenswrapper[4958]: I1008 08:26:46.049713 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-103b-account-create-7sv7f"] Oct 08 08:26:46 crc kubenswrapper[4958]: I1008 08:26:46.066089 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-103b-account-create-7sv7f"] Oct 08 08:26:46 crc kubenswrapper[4958]: I1008 08:26:46.384084 4958 scope.go:117] "RemoveContainer" containerID="130cb96bbc6c24007ce44038d93dfcb9ea898002054dba4d777f9898a5588b34" Oct 08 08:26:46 crc kubenswrapper[4958]: I1008 08:26:46.411207 4958 scope.go:117] "RemoveContainer" containerID="d2a883a8e5ac7f1c87d90f3e4ff4d3b6212715c255037c2c718e7c2707434a78" Oct 08 08:26:46 crc kubenswrapper[4958]: I1008 08:26:46.448337 4958 scope.go:117] "RemoveContainer" containerID="adf2542d2315cdcfa6c81dd90d20ac3384ba0823058d5654a578f384afcc5143" Oct 08 08:26:46 crc kubenswrapper[4958]: I1008 
08:26:46.489080 4958 scope.go:117] "RemoveContainer" containerID="cb160f56f4a5dd12c225a7a2b59059438df98258d1feb96f9520017912926893" Oct 08 08:26:46 crc kubenswrapper[4958]: I1008 08:26:46.563623 4958 scope.go:117] "RemoveContainer" containerID="65c0de218ff7e978339c38dafa13ebdfb843172596c16909d0e7af2e05901c00" Oct 08 08:26:46 crc kubenswrapper[4958]: I1008 08:26:46.633501 4958 scope.go:117] "RemoveContainer" containerID="f81842d6f59e71aebedcf8e7b0c143ca598ee441d414849255f53e2d8d287d2e" Oct 08 08:26:46 crc kubenswrapper[4958]: I1008 08:26:46.676094 4958 scope.go:117] "RemoveContainer" containerID="913670e3283ff1c65c2d593c133964ee959fa992d12f57df209c391786a07d9d" Oct 08 08:26:46 crc kubenswrapper[4958]: I1008 08:26:46.862254 4958 scope.go:117] "RemoveContainer" containerID="1f8903cdf8caae6b96fa0af0cf79a58ee2bb885130a02be6c37c9929a120d418" Oct 08 08:26:47 crc kubenswrapper[4958]: I1008 08:26:47.598417 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f" path="/var/lib/kubelet/pods/ddbec1e2-a129-4cb2-ad56-811cf7ed8b4f/volumes" Oct 08 08:26:51 crc kubenswrapper[4958]: I1008 08:26:51.576504 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:26:51 crc kubenswrapper[4958]: E1008 08:26:51.577186 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:27:05 crc kubenswrapper[4958]: I1008 08:27:05.576919 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:27:05 crc kubenswrapper[4958]: E1008 
08:27:05.578074 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:27:19 crc kubenswrapper[4958]: I1008 08:27:19.578026 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:27:19 crc kubenswrapper[4958]: E1008 08:27:19.579007 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:27:32 crc kubenswrapper[4958]: I1008 08:27:32.577204 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:27:32 crc kubenswrapper[4958]: E1008 08:27:32.578120 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:27:36 crc kubenswrapper[4958]: I1008 08:27:36.070289 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-fgmk8"] Oct 08 08:27:36 crc kubenswrapper[4958]: I1008 
08:27:36.081541 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-fgmk8"] Oct 08 08:27:37 crc kubenswrapper[4958]: I1008 08:27:37.614096 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a" path="/var/lib/kubelet/pods/a1e77cb5-6ac8-4eb6-ba57-1b08eae3155a/volumes" Oct 08 08:27:46 crc kubenswrapper[4958]: I1008 08:27:46.577319 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:27:46 crc kubenswrapper[4958]: E1008 08:27:46.579756 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:27:46 crc kubenswrapper[4958]: I1008 08:27:46.999179 4958 scope.go:117] "RemoveContainer" containerID="136a6aa5c8849a905ee3a27a0fccb2669ec6b8b243d3cfa61b5a9fc7ca18d260" Oct 08 08:27:47 crc kubenswrapper[4958]: I1008 08:27:47.046019 4958 scope.go:117] "RemoveContainer" containerID="6e00baa3972dcae121923021a972c4bb324f7738dede3a511a018bb232300b57" Oct 08 08:27:47 crc kubenswrapper[4958]: I1008 08:27:47.133119 4958 scope.go:117] "RemoveContainer" containerID="de545f2c479786b3f9ba809f4d1b22dd187dfae5a251fbdfa2cd26baca57991f" Oct 08 08:27:57 crc kubenswrapper[4958]: I1008 08:27:57.599225 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:27:57 crc kubenswrapper[4958]: E1008 08:27:57.600094 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:28:11 crc kubenswrapper[4958]: I1008 08:28:11.576592 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:28:11 crc kubenswrapper[4958]: E1008 08:28:11.577840 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:28:24 crc kubenswrapper[4958]: I1008 08:28:24.577292 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:28:24 crc kubenswrapper[4958]: E1008 08:28:24.578562 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:28:36 crc kubenswrapper[4958]: I1008 08:28:36.576811 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:28:36 crc kubenswrapper[4958]: E1008 08:28:36.577758 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:28:47 crc kubenswrapper[4958]: I1008 08:28:47.231821 4958 scope.go:117] "RemoveContainer" containerID="78f467a30cf792e928e5d33e97fdaee624cdbd1faead519dc9d4887db37a6cf7" Oct 08 08:28:47 crc kubenswrapper[4958]: I1008 08:28:47.291636 4958 scope.go:117] "RemoveContainer" containerID="2c7a7da73538fba3ddffa07e3d7af934bf9b95e372939faebbd3660ee8f9eac3" Oct 08 08:28:47 crc kubenswrapper[4958]: I1008 08:28:47.360040 4958 scope.go:117] "RemoveContainer" containerID="a7d32d52829fade81506a7c6c73ee9d8e22af8b01de4848be487677a45fd8915" Oct 08 08:28:51 crc kubenswrapper[4958]: I1008 08:28:51.578024 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:28:51 crc kubenswrapper[4958]: E1008 08:28:51.579611 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:29:03 crc kubenswrapper[4958]: I1008 08:29:03.576344 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:29:03 crc kubenswrapper[4958]: E1008 08:29:03.577571 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:29:16 crc kubenswrapper[4958]: I1008 08:29:16.578391 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:29:16 crc kubenswrapper[4958]: E1008 08:29:16.579613 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:29:31 crc kubenswrapper[4958]: I1008 08:29:31.577013 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:29:31 crc kubenswrapper[4958]: E1008 08:29:31.578479 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:29:43 crc kubenswrapper[4958]: I1008 08:29:43.577716 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:29:43 crc kubenswrapper[4958]: E1008 08:29:43.578589 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:29:47 crc kubenswrapper[4958]: I1008 08:29:47.481670 4958 scope.go:117] "RemoveContainer" containerID="914e822c17b2b1e40dcfe8f3fc13b4f753db32389d03475a3a80d09ccfffd5ab" Oct 08 08:29:57 crc kubenswrapper[4958]: I1008 08:29:57.590940 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:29:57 crc kubenswrapper[4958]: E1008 08:29:57.592279 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:30:00 crc kubenswrapper[4958]: I1008 08:30:00.193238 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk"] Oct 08 08:30:00 crc kubenswrapper[4958]: I1008 08:30:00.196603 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" Oct 08 08:30:00 crc kubenswrapper[4958]: I1008 08:30:00.200291 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 08:30:00 crc kubenswrapper[4958]: I1008 08:30:00.204504 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk"] Oct 08 08:30:00 crc kubenswrapper[4958]: I1008 08:30:00.211803 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 08:30:00 crc kubenswrapper[4958]: I1008 08:30:00.357836 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/324381e9-d522-445c-a944-9a69916fb000-secret-volume\") pod \"collect-profiles-29331870-c2gtk\" (UID: \"324381e9-d522-445c-a944-9a69916fb000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" Oct 08 08:30:00 crc kubenswrapper[4958]: I1008 08:30:00.358243 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vnzg\" (UniqueName: \"kubernetes.io/projected/324381e9-d522-445c-a944-9a69916fb000-kube-api-access-6vnzg\") pod \"collect-profiles-29331870-c2gtk\" (UID: \"324381e9-d522-445c-a944-9a69916fb000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" Oct 08 08:30:00 crc kubenswrapper[4958]: I1008 08:30:00.358455 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/324381e9-d522-445c-a944-9a69916fb000-config-volume\") pod \"collect-profiles-29331870-c2gtk\" (UID: \"324381e9-d522-445c-a944-9a69916fb000\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" Oct 08 08:30:00 crc kubenswrapper[4958]: I1008 08:30:00.461452 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/324381e9-d522-445c-a944-9a69916fb000-secret-volume\") pod \"collect-profiles-29331870-c2gtk\" (UID: \"324381e9-d522-445c-a944-9a69916fb000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" Oct 08 08:30:00 crc kubenswrapper[4958]: I1008 08:30:00.461620 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vnzg\" (UniqueName: \"kubernetes.io/projected/324381e9-d522-445c-a944-9a69916fb000-kube-api-access-6vnzg\") pod \"collect-profiles-29331870-c2gtk\" (UID: \"324381e9-d522-445c-a944-9a69916fb000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" Oct 08 08:30:00 crc kubenswrapper[4958]: I1008 08:30:00.461698 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/324381e9-d522-445c-a944-9a69916fb000-config-volume\") pod \"collect-profiles-29331870-c2gtk\" (UID: \"324381e9-d522-445c-a944-9a69916fb000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" Oct 08 08:30:00 crc kubenswrapper[4958]: I1008 08:30:00.463413 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/324381e9-d522-445c-a944-9a69916fb000-config-volume\") pod \"collect-profiles-29331870-c2gtk\" (UID: \"324381e9-d522-445c-a944-9a69916fb000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" Oct 08 08:30:00 crc kubenswrapper[4958]: I1008 08:30:00.470746 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/324381e9-d522-445c-a944-9a69916fb000-secret-volume\") pod \"collect-profiles-29331870-c2gtk\" (UID: \"324381e9-d522-445c-a944-9a69916fb000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" Oct 08 08:30:00 crc kubenswrapper[4958]: I1008 08:30:00.497839 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vnzg\" (UniqueName: \"kubernetes.io/projected/324381e9-d522-445c-a944-9a69916fb000-kube-api-access-6vnzg\") pod \"collect-profiles-29331870-c2gtk\" (UID: \"324381e9-d522-445c-a944-9a69916fb000\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" Oct 08 08:30:00 crc kubenswrapper[4958]: I1008 08:30:00.536632 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" Oct 08 08:30:01 crc kubenswrapper[4958]: I1008 08:30:01.061641 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk"] Oct 08 08:30:01 crc kubenswrapper[4958]: I1008 08:30:01.853403 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" event={"ID":"324381e9-d522-445c-a944-9a69916fb000","Type":"ContainerStarted","Data":"5a737a9e373621adecae6251c5639882b1040324a9390aa11a37240a510146c3"} Oct 08 08:30:01 crc kubenswrapper[4958]: I1008 08:30:01.853689 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" event={"ID":"324381e9-d522-445c-a944-9a69916fb000","Type":"ContainerStarted","Data":"a8801df36ddc24c1723725341655f95dcb32bb5ec00dd9980b815f526d0da2ea"} Oct 08 08:30:01 crc kubenswrapper[4958]: I1008 08:30:01.881269 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" 
podStartSLOduration=1.8812509849999999 podStartE2EDuration="1.881250985s" podCreationTimestamp="2025-10-08 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:30:01.877461892 +0000 UTC m=+6945.007154503" watchObservedRunningTime="2025-10-08 08:30:01.881250985 +0000 UTC m=+6945.010943606" Oct 08 08:30:02 crc kubenswrapper[4958]: I1008 08:30:02.869682 4958 generic.go:334] "Generic (PLEG): container finished" podID="324381e9-d522-445c-a944-9a69916fb000" containerID="5a737a9e373621adecae6251c5639882b1040324a9390aa11a37240a510146c3" exitCode=0 Oct 08 08:30:02 crc kubenswrapper[4958]: I1008 08:30:02.869734 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" event={"ID":"324381e9-d522-445c-a944-9a69916fb000","Type":"ContainerDied","Data":"5a737a9e373621adecae6251c5639882b1040324a9390aa11a37240a510146c3"} Oct 08 08:30:04 crc kubenswrapper[4958]: I1008 08:30:04.347593 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" Oct 08 08:30:04 crc kubenswrapper[4958]: I1008 08:30:04.459580 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/324381e9-d522-445c-a944-9a69916fb000-config-volume\") pod \"324381e9-d522-445c-a944-9a69916fb000\" (UID: \"324381e9-d522-445c-a944-9a69916fb000\") " Oct 08 08:30:04 crc kubenswrapper[4958]: I1008 08:30:04.460024 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/324381e9-d522-445c-a944-9a69916fb000-secret-volume\") pod \"324381e9-d522-445c-a944-9a69916fb000\" (UID: \"324381e9-d522-445c-a944-9a69916fb000\") " Oct 08 08:30:04 crc kubenswrapper[4958]: I1008 08:30:04.460149 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vnzg\" (UniqueName: \"kubernetes.io/projected/324381e9-d522-445c-a944-9a69916fb000-kube-api-access-6vnzg\") pod \"324381e9-d522-445c-a944-9a69916fb000\" (UID: \"324381e9-d522-445c-a944-9a69916fb000\") " Oct 08 08:30:04 crc kubenswrapper[4958]: I1008 08:30:04.460541 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/324381e9-d522-445c-a944-9a69916fb000-config-volume" (OuterVolumeSpecName: "config-volume") pod "324381e9-d522-445c-a944-9a69916fb000" (UID: "324381e9-d522-445c-a944-9a69916fb000"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:30:04 crc kubenswrapper[4958]: I1008 08:30:04.461224 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/324381e9-d522-445c-a944-9a69916fb000-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 08:30:04 crc kubenswrapper[4958]: I1008 08:30:04.468352 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/324381e9-d522-445c-a944-9a69916fb000-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "324381e9-d522-445c-a944-9a69916fb000" (UID: "324381e9-d522-445c-a944-9a69916fb000"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:30:04 crc kubenswrapper[4958]: I1008 08:30:04.476295 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/324381e9-d522-445c-a944-9a69916fb000-kube-api-access-6vnzg" (OuterVolumeSpecName: "kube-api-access-6vnzg") pod "324381e9-d522-445c-a944-9a69916fb000" (UID: "324381e9-d522-445c-a944-9a69916fb000"). InnerVolumeSpecName "kube-api-access-6vnzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:30:04 crc kubenswrapper[4958]: I1008 08:30:04.563597 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/324381e9-d522-445c-a944-9a69916fb000-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 08:30:04 crc kubenswrapper[4958]: I1008 08:30:04.563637 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vnzg\" (UniqueName: \"kubernetes.io/projected/324381e9-d522-445c-a944-9a69916fb000-kube-api-access-6vnzg\") on node \"crc\" DevicePath \"\"" Oct 08 08:30:04 crc kubenswrapper[4958]: I1008 08:30:04.893527 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" event={"ID":"324381e9-d522-445c-a944-9a69916fb000","Type":"ContainerDied","Data":"a8801df36ddc24c1723725341655f95dcb32bb5ec00dd9980b815f526d0da2ea"} Oct 08 08:30:04 crc kubenswrapper[4958]: I1008 08:30:04.893886 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8801df36ddc24c1723725341655f95dcb32bb5ec00dd9980b815f526d0da2ea" Oct 08 08:30:04 crc kubenswrapper[4958]: I1008 08:30:04.893593 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk" Oct 08 08:30:04 crc kubenswrapper[4958]: I1008 08:30:04.980423 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p"] Oct 08 08:30:04 crc kubenswrapper[4958]: I1008 08:30:04.991135 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331825-vxz7p"] Oct 08 08:30:05 crc kubenswrapper[4958]: I1008 08:30:05.602529 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc2b48cb-ed9c-4fff-91e3-7485b2b591d6" path="/var/lib/kubelet/pods/dc2b48cb-ed9c-4fff-91e3-7485b2b591d6/volumes" Oct 08 08:30:08 crc kubenswrapper[4958]: I1008 08:30:08.577017 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:30:08 crc kubenswrapper[4958]: E1008 08:30:08.577778 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:30:22 crc kubenswrapper[4958]: I1008 08:30:22.577234 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:30:22 crc kubenswrapper[4958]: E1008 08:30:22.578595 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:30:36 crc kubenswrapper[4958]: I1008 08:30:36.576604 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:30:36 crc kubenswrapper[4958]: E1008 08:30:36.577487 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:30:47 crc kubenswrapper[4958]: I1008 08:30:47.582939 4958 scope.go:117] "RemoveContainer" containerID="55588398582d75047ea9c3f933e2be654e92ff94c9dd8bf384870134b08f655f" Oct 08 08:30:47 crc kubenswrapper[4958]: I1008 08:30:47.586327 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:30:47 crc kubenswrapper[4958]: E1008 08:30:47.586770 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:30:48 crc kubenswrapper[4958]: I1008 08:30:48.566178 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7slbn"] Oct 08 08:30:48 crc kubenswrapper[4958]: E1008 08:30:48.567315 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324381e9-d522-445c-a944-9a69916fb000" 
containerName="collect-profiles" Oct 08 08:30:48 crc kubenswrapper[4958]: I1008 08:30:48.567349 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="324381e9-d522-445c-a944-9a69916fb000" containerName="collect-profiles" Oct 08 08:30:48 crc kubenswrapper[4958]: I1008 08:30:48.567760 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="324381e9-d522-445c-a944-9a69916fb000" containerName="collect-profiles" Oct 08 08:30:48 crc kubenswrapper[4958]: I1008 08:30:48.570648 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:30:48 crc kubenswrapper[4958]: I1008 08:30:48.597557 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7slbn"] Oct 08 08:30:48 crc kubenswrapper[4958]: I1008 08:30:48.704717 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw2zk\" (UniqueName: \"kubernetes.io/projected/e98e93c7-813e-4a09-bfa7-132a54e13130-kube-api-access-zw2zk\") pod \"redhat-operators-7slbn\" (UID: \"e98e93c7-813e-4a09-bfa7-132a54e13130\") " pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:30:48 crc kubenswrapper[4958]: I1008 08:30:48.705119 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98e93c7-813e-4a09-bfa7-132a54e13130-catalog-content\") pod \"redhat-operators-7slbn\" (UID: \"e98e93c7-813e-4a09-bfa7-132a54e13130\") " pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:30:48 crc kubenswrapper[4958]: I1008 08:30:48.706346 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98e93c7-813e-4a09-bfa7-132a54e13130-utilities\") pod \"redhat-operators-7slbn\" (UID: \"e98e93c7-813e-4a09-bfa7-132a54e13130\") " 
pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:30:48 crc kubenswrapper[4958]: I1008 08:30:48.808902 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw2zk\" (UniqueName: \"kubernetes.io/projected/e98e93c7-813e-4a09-bfa7-132a54e13130-kube-api-access-zw2zk\") pod \"redhat-operators-7slbn\" (UID: \"e98e93c7-813e-4a09-bfa7-132a54e13130\") " pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:30:48 crc kubenswrapper[4958]: I1008 08:30:48.809038 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98e93c7-813e-4a09-bfa7-132a54e13130-catalog-content\") pod \"redhat-operators-7slbn\" (UID: \"e98e93c7-813e-4a09-bfa7-132a54e13130\") " pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:30:48 crc kubenswrapper[4958]: I1008 08:30:48.809087 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98e93c7-813e-4a09-bfa7-132a54e13130-utilities\") pod \"redhat-operators-7slbn\" (UID: \"e98e93c7-813e-4a09-bfa7-132a54e13130\") " pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:30:48 crc kubenswrapper[4958]: I1008 08:30:48.809604 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98e93c7-813e-4a09-bfa7-132a54e13130-utilities\") pod \"redhat-operators-7slbn\" (UID: \"e98e93c7-813e-4a09-bfa7-132a54e13130\") " pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:30:48 crc kubenswrapper[4958]: I1008 08:30:48.809869 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98e93c7-813e-4a09-bfa7-132a54e13130-catalog-content\") pod \"redhat-operators-7slbn\" (UID: \"e98e93c7-813e-4a09-bfa7-132a54e13130\") " pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:30:48 crc 
kubenswrapper[4958]: I1008 08:30:48.830061 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw2zk\" (UniqueName: \"kubernetes.io/projected/e98e93c7-813e-4a09-bfa7-132a54e13130-kube-api-access-zw2zk\") pod \"redhat-operators-7slbn\" (UID: \"e98e93c7-813e-4a09-bfa7-132a54e13130\") " pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:30:48 crc kubenswrapper[4958]: I1008 08:30:48.918255 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:30:49 crc kubenswrapper[4958]: I1008 08:30:49.181547 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d5hgq"] Oct 08 08:30:49 crc kubenswrapper[4958]: I1008 08:30:49.186839 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:30:49 crc kubenswrapper[4958]: I1008 08:30:49.195477 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5hgq"] Oct 08 08:30:49 crc kubenswrapper[4958]: I1008 08:30:49.233789 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8eef66-4d62-495f-9170-afb85dc7ccc7-catalog-content\") pod \"community-operators-d5hgq\" (UID: \"9d8eef66-4d62-495f-9170-afb85dc7ccc7\") " pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:30:49 crc kubenswrapper[4958]: I1008 08:30:49.233903 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8eef66-4d62-495f-9170-afb85dc7ccc7-utilities\") pod \"community-operators-d5hgq\" (UID: \"9d8eef66-4d62-495f-9170-afb85dc7ccc7\") " pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:30:49 crc kubenswrapper[4958]: I1008 08:30:49.234133 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfmrw\" (UniqueName: \"kubernetes.io/projected/9d8eef66-4d62-495f-9170-afb85dc7ccc7-kube-api-access-zfmrw\") pod \"community-operators-d5hgq\" (UID: \"9d8eef66-4d62-495f-9170-afb85dc7ccc7\") " pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:30:49 crc kubenswrapper[4958]: I1008 08:30:49.336598 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8eef66-4d62-495f-9170-afb85dc7ccc7-catalog-content\") pod \"community-operators-d5hgq\" (UID: \"9d8eef66-4d62-495f-9170-afb85dc7ccc7\") " pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:30:49 crc kubenswrapper[4958]: I1008 08:30:49.336674 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8eef66-4d62-495f-9170-afb85dc7ccc7-utilities\") pod \"community-operators-d5hgq\" (UID: \"9d8eef66-4d62-495f-9170-afb85dc7ccc7\") " pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:30:49 crc kubenswrapper[4958]: I1008 08:30:49.336776 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfmrw\" (UniqueName: \"kubernetes.io/projected/9d8eef66-4d62-495f-9170-afb85dc7ccc7-kube-api-access-zfmrw\") pod \"community-operators-d5hgq\" (UID: \"9d8eef66-4d62-495f-9170-afb85dc7ccc7\") " pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:30:49 crc kubenswrapper[4958]: I1008 08:30:49.337426 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8eef66-4d62-495f-9170-afb85dc7ccc7-catalog-content\") pod \"community-operators-d5hgq\" (UID: \"9d8eef66-4d62-495f-9170-afb85dc7ccc7\") " pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:30:49 crc kubenswrapper[4958]: I1008 
08:30:49.337570 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8eef66-4d62-495f-9170-afb85dc7ccc7-utilities\") pod \"community-operators-d5hgq\" (UID: \"9d8eef66-4d62-495f-9170-afb85dc7ccc7\") " pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:30:49 crc kubenswrapper[4958]: I1008 08:30:49.377364 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfmrw\" (UniqueName: \"kubernetes.io/projected/9d8eef66-4d62-495f-9170-afb85dc7ccc7-kube-api-access-zfmrw\") pod \"community-operators-d5hgq\" (UID: \"9d8eef66-4d62-495f-9170-afb85dc7ccc7\") " pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:30:49 crc kubenswrapper[4958]: I1008 08:30:49.567668 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7slbn"] Oct 08 08:30:49 crc kubenswrapper[4958]: I1008 08:30:49.568511 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:30:50 crc kubenswrapper[4958]: W1008 08:30:50.130437 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d8eef66_4d62_495f_9170_afb85dc7ccc7.slice/crio-7aba51a6226cd91829a58b44c8755aba7fc2180b90f77ddd35340a64741d27a9 WatchSource:0}: Error finding container 7aba51a6226cd91829a58b44c8755aba7fc2180b90f77ddd35340a64741d27a9: Status 404 returned error can't find the container with id 7aba51a6226cd91829a58b44c8755aba7fc2180b90f77ddd35340a64741d27a9 Oct 08 08:30:50 crc kubenswrapper[4958]: I1008 08:30:50.135979 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5hgq"] Oct 08 08:30:50 crc kubenswrapper[4958]: I1008 08:30:50.436883 4958 generic.go:334] "Generic (PLEG): container finished" podID="e98e93c7-813e-4a09-bfa7-132a54e13130" containerID="9ac3a08b960eda78c7b9d7e596cc7b06cbbac331445bfaf64e665109f72e53b8" exitCode=0 Oct 08 08:30:50 crc kubenswrapper[4958]: I1008 08:30:50.436957 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7slbn" event={"ID":"e98e93c7-813e-4a09-bfa7-132a54e13130","Type":"ContainerDied","Data":"9ac3a08b960eda78c7b9d7e596cc7b06cbbac331445bfaf64e665109f72e53b8"} Oct 08 08:30:50 crc kubenswrapper[4958]: I1008 08:30:50.437204 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7slbn" event={"ID":"e98e93c7-813e-4a09-bfa7-132a54e13130","Type":"ContainerStarted","Data":"55f1a70cb0a72afb7a45687bc51b2d73d0f78026903c36d28f415b8e54a78959"} Oct 08 08:30:50 crc kubenswrapper[4958]: I1008 08:30:50.438971 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 08:30:50 crc kubenswrapper[4958]: I1008 08:30:50.441444 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="9d8eef66-4d62-495f-9170-afb85dc7ccc7" containerID="94e9181c87bebe3ac9bc977fe47b3644b7dbd99999e619857fcada23684540b2" exitCode=0 Oct 08 08:30:50 crc kubenswrapper[4958]: I1008 08:30:50.441476 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5hgq" event={"ID":"9d8eef66-4d62-495f-9170-afb85dc7ccc7","Type":"ContainerDied","Data":"94e9181c87bebe3ac9bc977fe47b3644b7dbd99999e619857fcada23684540b2"} Oct 08 08:30:50 crc kubenswrapper[4958]: I1008 08:30:50.441498 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5hgq" event={"ID":"9d8eef66-4d62-495f-9170-afb85dc7ccc7","Type":"ContainerStarted","Data":"7aba51a6226cd91829a58b44c8755aba7fc2180b90f77ddd35340a64741d27a9"} Oct 08 08:30:52 crc kubenswrapper[4958]: I1008 08:30:52.465481 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5hgq" event={"ID":"9d8eef66-4d62-495f-9170-afb85dc7ccc7","Type":"ContainerStarted","Data":"bc9ad3dccf26310d85721d252a4e20479eabd0179cfa835d4101b3811b9ad78e"} Oct 08 08:30:52 crc kubenswrapper[4958]: I1008 08:30:52.470855 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7slbn" event={"ID":"e98e93c7-813e-4a09-bfa7-132a54e13130","Type":"ContainerStarted","Data":"d53aaa0166d8bd891dbe81ba4df94e5170a76727f65f2cc1dca2af25ee2ac4e7"} Oct 08 08:30:53 crc kubenswrapper[4958]: I1008 08:30:53.488660 4958 generic.go:334] "Generic (PLEG): container finished" podID="9d8eef66-4d62-495f-9170-afb85dc7ccc7" containerID="bc9ad3dccf26310d85721d252a4e20479eabd0179cfa835d4101b3811b9ad78e" exitCode=0 Oct 08 08:30:53 crc kubenswrapper[4958]: I1008 08:30:53.489137 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5hgq" 
event={"ID":"9d8eef66-4d62-495f-9170-afb85dc7ccc7","Type":"ContainerDied","Data":"bc9ad3dccf26310d85721d252a4e20479eabd0179cfa835d4101b3811b9ad78e"} Oct 08 08:30:54 crc kubenswrapper[4958]: I1008 08:30:54.511460 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5hgq" event={"ID":"9d8eef66-4d62-495f-9170-afb85dc7ccc7","Type":"ContainerStarted","Data":"90a5d9b22bdd4b03f06ba2a741b36856869b409230dba283d952896efeea0741"} Oct 08 08:30:54 crc kubenswrapper[4958]: I1008 08:30:54.543321 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d5hgq" podStartSLOduration=1.93167971 podStartE2EDuration="5.543305192s" podCreationTimestamp="2025-10-08 08:30:49 +0000 UTC" firstStartedPulling="2025-10-08 08:30:50.443491419 +0000 UTC m=+6993.573184020" lastFinishedPulling="2025-10-08 08:30:54.055116891 +0000 UTC m=+6997.184809502" observedRunningTime="2025-10-08 08:30:54.533201428 +0000 UTC m=+6997.662894069" watchObservedRunningTime="2025-10-08 08:30:54.543305192 +0000 UTC m=+6997.672997793" Oct 08 08:30:56 crc kubenswrapper[4958]: I1008 08:30:56.531016 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7slbn" event={"ID":"e98e93c7-813e-4a09-bfa7-132a54e13130","Type":"ContainerDied","Data":"d53aaa0166d8bd891dbe81ba4df94e5170a76727f65f2cc1dca2af25ee2ac4e7"} Oct 08 08:30:56 crc kubenswrapper[4958]: I1008 08:30:56.531021 4958 generic.go:334] "Generic (PLEG): container finished" podID="e98e93c7-813e-4a09-bfa7-132a54e13130" containerID="d53aaa0166d8bd891dbe81ba4df94e5170a76727f65f2cc1dca2af25ee2ac4e7" exitCode=0 Oct 08 08:30:57 crc kubenswrapper[4958]: I1008 08:30:57.591894 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7slbn" 
event={"ID":"e98e93c7-813e-4a09-bfa7-132a54e13130","Type":"ContainerStarted","Data":"8dfe945eacb61a6c7fb4f3dc41c1cf6fabfadbb73d05b9a66675694188a6596f"} Oct 08 08:30:57 crc kubenswrapper[4958]: I1008 08:30:57.615704 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7slbn" podStartSLOduration=2.888337023 podStartE2EDuration="9.615683278s" podCreationTimestamp="2025-10-08 08:30:48 +0000 UTC" firstStartedPulling="2025-10-08 08:30:50.438513163 +0000 UTC m=+6993.568205764" lastFinishedPulling="2025-10-08 08:30:57.165859388 +0000 UTC m=+7000.295552019" observedRunningTime="2025-10-08 08:30:57.610074265 +0000 UTC m=+7000.739766866" watchObservedRunningTime="2025-10-08 08:30:57.615683278 +0000 UTC m=+7000.745375889" Oct 08 08:30:58 crc kubenswrapper[4958]: I1008 08:30:58.919282 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:30:58 crc kubenswrapper[4958]: I1008 08:30:58.919594 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:30:59 crc kubenswrapper[4958]: I1008 08:30:59.569824 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:30:59 crc kubenswrapper[4958]: I1008 08:30:59.569878 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:30:59 crc kubenswrapper[4958]: I1008 08:30:59.576843 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:30:59 crc kubenswrapper[4958]: E1008 08:30:59.577294 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:30:59 crc kubenswrapper[4958]: I1008 08:30:59.963404 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7slbn" podUID="e98e93c7-813e-4a09-bfa7-132a54e13130" containerName="registry-server" probeResult="failure" output=< Oct 08 08:30:59 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 08:30:59 crc kubenswrapper[4958]: > Oct 08 08:31:00 crc kubenswrapper[4958]: I1008 08:31:00.643168 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-d5hgq" podUID="9d8eef66-4d62-495f-9170-afb85dc7ccc7" containerName="registry-server" probeResult="failure" output=< Oct 08 08:31:00 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 08:31:00 crc kubenswrapper[4958]: > Oct 08 08:31:09 crc kubenswrapper[4958]: I1008 08:31:09.632538 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:31:09 crc kubenswrapper[4958]: I1008 08:31:09.701720 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:31:09 crc kubenswrapper[4958]: I1008 08:31:09.876237 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5hgq"] Oct 08 08:31:09 crc kubenswrapper[4958]: I1008 08:31:09.980616 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7slbn" podUID="e98e93c7-813e-4a09-bfa7-132a54e13130" containerName="registry-server" probeResult="failure" output=< Oct 08 08:31:09 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 08:31:09 
crc kubenswrapper[4958]: > Oct 08 08:31:10 crc kubenswrapper[4958]: I1008 08:31:10.750199 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d5hgq" podUID="9d8eef66-4d62-495f-9170-afb85dc7ccc7" containerName="registry-server" containerID="cri-o://90a5d9b22bdd4b03f06ba2a741b36856869b409230dba283d952896efeea0741" gracePeriod=2 Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.238090 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.371567 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfmrw\" (UniqueName: \"kubernetes.io/projected/9d8eef66-4d62-495f-9170-afb85dc7ccc7-kube-api-access-zfmrw\") pod \"9d8eef66-4d62-495f-9170-afb85dc7ccc7\" (UID: \"9d8eef66-4d62-495f-9170-afb85dc7ccc7\") " Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.371652 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8eef66-4d62-495f-9170-afb85dc7ccc7-catalog-content\") pod \"9d8eef66-4d62-495f-9170-afb85dc7ccc7\" (UID: \"9d8eef66-4d62-495f-9170-afb85dc7ccc7\") " Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.371718 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8eef66-4d62-495f-9170-afb85dc7ccc7-utilities\") pod \"9d8eef66-4d62-495f-9170-afb85dc7ccc7\" (UID: \"9d8eef66-4d62-495f-9170-afb85dc7ccc7\") " Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.372423 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d8eef66-4d62-495f-9170-afb85dc7ccc7-utilities" (OuterVolumeSpecName: "utilities") pod "9d8eef66-4d62-495f-9170-afb85dc7ccc7" (UID: 
"9d8eef66-4d62-495f-9170-afb85dc7ccc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.378113 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d8eef66-4d62-495f-9170-afb85dc7ccc7-kube-api-access-zfmrw" (OuterVolumeSpecName: "kube-api-access-zfmrw") pod "9d8eef66-4d62-495f-9170-afb85dc7ccc7" (UID: "9d8eef66-4d62-495f-9170-afb85dc7ccc7"). InnerVolumeSpecName "kube-api-access-zfmrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.417077 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d8eef66-4d62-495f-9170-afb85dc7ccc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d8eef66-4d62-495f-9170-afb85dc7ccc7" (UID: "9d8eef66-4d62-495f-9170-afb85dc7ccc7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.474385 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfmrw\" (UniqueName: \"kubernetes.io/projected/9d8eef66-4d62-495f-9170-afb85dc7ccc7-kube-api-access-zfmrw\") on node \"crc\" DevicePath \"\"" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.474417 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8eef66-4d62-495f-9170-afb85dc7ccc7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.474429 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8eef66-4d62-495f-9170-afb85dc7ccc7-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.760486 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="9d8eef66-4d62-495f-9170-afb85dc7ccc7" containerID="90a5d9b22bdd4b03f06ba2a741b36856869b409230dba283d952896efeea0741" exitCode=0 Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.760532 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5hgq" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.760555 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5hgq" event={"ID":"9d8eef66-4d62-495f-9170-afb85dc7ccc7","Type":"ContainerDied","Data":"90a5d9b22bdd4b03f06ba2a741b36856869b409230dba283d952896efeea0741"} Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.761381 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5hgq" event={"ID":"9d8eef66-4d62-495f-9170-afb85dc7ccc7","Type":"ContainerDied","Data":"7aba51a6226cd91829a58b44c8755aba7fc2180b90f77ddd35340a64741d27a9"} Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.761436 4958 scope.go:117] "RemoveContainer" containerID="90a5d9b22bdd4b03f06ba2a741b36856869b409230dba283d952896efeea0741" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.790898 4958 scope.go:117] "RemoveContainer" containerID="bc9ad3dccf26310d85721d252a4e20479eabd0179cfa835d4101b3811b9ad78e" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.803719 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5hgq"] Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.814294 4958 scope.go:117] "RemoveContainer" containerID="94e9181c87bebe3ac9bc977fe47b3644b7dbd99999e619857fcada23684540b2" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.816897 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d5hgq"] Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.868885 4958 scope.go:117] "RemoveContainer" 
containerID="90a5d9b22bdd4b03f06ba2a741b36856869b409230dba283d952896efeea0741" Oct 08 08:31:11 crc kubenswrapper[4958]: E1008 08:31:11.874110 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90a5d9b22bdd4b03f06ba2a741b36856869b409230dba283d952896efeea0741\": container with ID starting with 90a5d9b22bdd4b03f06ba2a741b36856869b409230dba283d952896efeea0741 not found: ID does not exist" containerID="90a5d9b22bdd4b03f06ba2a741b36856869b409230dba283d952896efeea0741" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.875725 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90a5d9b22bdd4b03f06ba2a741b36856869b409230dba283d952896efeea0741"} err="failed to get container status \"90a5d9b22bdd4b03f06ba2a741b36856869b409230dba283d952896efeea0741\": rpc error: code = NotFound desc = could not find container \"90a5d9b22bdd4b03f06ba2a741b36856869b409230dba283d952896efeea0741\": container with ID starting with 90a5d9b22bdd4b03f06ba2a741b36856869b409230dba283d952896efeea0741 not found: ID does not exist" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.876380 4958 scope.go:117] "RemoveContainer" containerID="bc9ad3dccf26310d85721d252a4e20479eabd0179cfa835d4101b3811b9ad78e" Oct 08 08:31:11 crc kubenswrapper[4958]: E1008 08:31:11.877158 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc9ad3dccf26310d85721d252a4e20479eabd0179cfa835d4101b3811b9ad78e\": container with ID starting with bc9ad3dccf26310d85721d252a4e20479eabd0179cfa835d4101b3811b9ad78e not found: ID does not exist" containerID="bc9ad3dccf26310d85721d252a4e20479eabd0179cfa835d4101b3811b9ad78e" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.877187 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bc9ad3dccf26310d85721d252a4e20479eabd0179cfa835d4101b3811b9ad78e"} err="failed to get container status \"bc9ad3dccf26310d85721d252a4e20479eabd0179cfa835d4101b3811b9ad78e\": rpc error: code = NotFound desc = could not find container \"bc9ad3dccf26310d85721d252a4e20479eabd0179cfa835d4101b3811b9ad78e\": container with ID starting with bc9ad3dccf26310d85721d252a4e20479eabd0179cfa835d4101b3811b9ad78e not found: ID does not exist" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.877202 4958 scope.go:117] "RemoveContainer" containerID="94e9181c87bebe3ac9bc977fe47b3644b7dbd99999e619857fcada23684540b2" Oct 08 08:31:11 crc kubenswrapper[4958]: E1008 08:31:11.877613 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94e9181c87bebe3ac9bc977fe47b3644b7dbd99999e619857fcada23684540b2\": container with ID starting with 94e9181c87bebe3ac9bc977fe47b3644b7dbd99999e619857fcada23684540b2 not found: ID does not exist" containerID="94e9181c87bebe3ac9bc977fe47b3644b7dbd99999e619857fcada23684540b2" Oct 08 08:31:11 crc kubenswrapper[4958]: I1008 08:31:11.877660 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e9181c87bebe3ac9bc977fe47b3644b7dbd99999e619857fcada23684540b2"} err="failed to get container status \"94e9181c87bebe3ac9bc977fe47b3644b7dbd99999e619857fcada23684540b2\": rpc error: code = NotFound desc = could not find container \"94e9181c87bebe3ac9bc977fe47b3644b7dbd99999e619857fcada23684540b2\": container with ID starting with 94e9181c87bebe3ac9bc977fe47b3644b7dbd99999e619857fcada23684540b2 not found: ID does not exist" Oct 08 08:31:13 crc kubenswrapper[4958]: I1008 08:31:13.590396 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d8eef66-4d62-495f-9170-afb85dc7ccc7" path="/var/lib/kubelet/pods/9d8eef66-4d62-495f-9170-afb85dc7ccc7/volumes" Oct 08 08:31:14 crc kubenswrapper[4958]: I1008 
08:31:14.577033 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:31:14 crc kubenswrapper[4958]: E1008 08:31:14.577525 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:31:18 crc kubenswrapper[4958]: I1008 08:31:18.985751 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:31:19 crc kubenswrapper[4958]: I1008 08:31:19.069175 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:31:19 crc kubenswrapper[4958]: I1008 08:31:19.236417 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7slbn"] Oct 08 08:31:20 crc kubenswrapper[4958]: I1008 08:31:20.870308 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7slbn" podUID="e98e93c7-813e-4a09-bfa7-132a54e13130" containerName="registry-server" containerID="cri-o://8dfe945eacb61a6c7fb4f3dc41c1cf6fabfadbb73d05b9a66675694188a6596f" gracePeriod=2 Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.435308 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.537852 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98e93c7-813e-4a09-bfa7-132a54e13130-catalog-content\") pod \"e98e93c7-813e-4a09-bfa7-132a54e13130\" (UID: \"e98e93c7-813e-4a09-bfa7-132a54e13130\") " Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.542195 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98e93c7-813e-4a09-bfa7-132a54e13130-utilities\") pod \"e98e93c7-813e-4a09-bfa7-132a54e13130\" (UID: \"e98e93c7-813e-4a09-bfa7-132a54e13130\") " Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.542343 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw2zk\" (UniqueName: \"kubernetes.io/projected/e98e93c7-813e-4a09-bfa7-132a54e13130-kube-api-access-zw2zk\") pod \"e98e93c7-813e-4a09-bfa7-132a54e13130\" (UID: \"e98e93c7-813e-4a09-bfa7-132a54e13130\") " Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.542844 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e98e93c7-813e-4a09-bfa7-132a54e13130-utilities" (OuterVolumeSpecName: "utilities") pod "e98e93c7-813e-4a09-bfa7-132a54e13130" (UID: "e98e93c7-813e-4a09-bfa7-132a54e13130"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.543488 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98e93c7-813e-4a09-bfa7-132a54e13130-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.551573 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e98e93c7-813e-4a09-bfa7-132a54e13130-kube-api-access-zw2zk" (OuterVolumeSpecName: "kube-api-access-zw2zk") pod "e98e93c7-813e-4a09-bfa7-132a54e13130" (UID: "e98e93c7-813e-4a09-bfa7-132a54e13130"). InnerVolumeSpecName "kube-api-access-zw2zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.629479 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e98e93c7-813e-4a09-bfa7-132a54e13130-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e98e93c7-813e-4a09-bfa7-132a54e13130" (UID: "e98e93c7-813e-4a09-bfa7-132a54e13130"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.645551 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98e93c7-813e-4a09-bfa7-132a54e13130-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.645755 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw2zk\" (UniqueName: \"kubernetes.io/projected/e98e93c7-813e-4a09-bfa7-132a54e13130-kube-api-access-zw2zk\") on node \"crc\" DevicePath \"\"" Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.884263 4958 generic.go:334] "Generic (PLEG): container finished" podID="e98e93c7-813e-4a09-bfa7-132a54e13130" containerID="8dfe945eacb61a6c7fb4f3dc41c1cf6fabfadbb73d05b9a66675694188a6596f" exitCode=0 Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.884357 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7slbn" event={"ID":"e98e93c7-813e-4a09-bfa7-132a54e13130","Type":"ContainerDied","Data":"8dfe945eacb61a6c7fb4f3dc41c1cf6fabfadbb73d05b9a66675694188a6596f"} Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.884369 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7slbn" Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.884723 4958 scope.go:117] "RemoveContainer" containerID="8dfe945eacb61a6c7fb4f3dc41c1cf6fabfadbb73d05b9a66675694188a6596f" Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.884705 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7slbn" event={"ID":"e98e93c7-813e-4a09-bfa7-132a54e13130","Type":"ContainerDied","Data":"55f1a70cb0a72afb7a45687bc51b2d73d0f78026903c36d28f415b8e54a78959"} Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.950820 4958 scope.go:117] "RemoveContainer" containerID="d53aaa0166d8bd891dbe81ba4df94e5170a76727f65f2cc1dca2af25ee2ac4e7" Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.979018 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7slbn"] Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.983725 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7slbn"] Oct 08 08:31:21 crc kubenswrapper[4958]: I1008 08:31:21.991150 4958 scope.go:117] "RemoveContainer" containerID="9ac3a08b960eda78c7b9d7e596cc7b06cbbac331445bfaf64e665109f72e53b8" Oct 08 08:31:22 crc kubenswrapper[4958]: I1008 08:31:22.057725 4958 scope.go:117] "RemoveContainer" containerID="8dfe945eacb61a6c7fb4f3dc41c1cf6fabfadbb73d05b9a66675694188a6596f" Oct 08 08:31:22 crc kubenswrapper[4958]: E1008 08:31:22.063368 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dfe945eacb61a6c7fb4f3dc41c1cf6fabfadbb73d05b9a66675694188a6596f\": container with ID starting with 8dfe945eacb61a6c7fb4f3dc41c1cf6fabfadbb73d05b9a66675694188a6596f not found: ID does not exist" containerID="8dfe945eacb61a6c7fb4f3dc41c1cf6fabfadbb73d05b9a66675694188a6596f" Oct 08 08:31:22 crc kubenswrapper[4958]: I1008 08:31:22.063484 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dfe945eacb61a6c7fb4f3dc41c1cf6fabfadbb73d05b9a66675694188a6596f"} err="failed to get container status \"8dfe945eacb61a6c7fb4f3dc41c1cf6fabfadbb73d05b9a66675694188a6596f\": rpc error: code = NotFound desc = could not find container \"8dfe945eacb61a6c7fb4f3dc41c1cf6fabfadbb73d05b9a66675694188a6596f\": container with ID starting with 8dfe945eacb61a6c7fb4f3dc41c1cf6fabfadbb73d05b9a66675694188a6596f not found: ID does not exist" Oct 08 08:31:22 crc kubenswrapper[4958]: I1008 08:31:22.063542 4958 scope.go:117] "RemoveContainer" containerID="d53aaa0166d8bd891dbe81ba4df94e5170a76727f65f2cc1dca2af25ee2ac4e7" Oct 08 08:31:22 crc kubenswrapper[4958]: E1008 08:31:22.079700 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d53aaa0166d8bd891dbe81ba4df94e5170a76727f65f2cc1dca2af25ee2ac4e7\": container with ID starting with d53aaa0166d8bd891dbe81ba4df94e5170a76727f65f2cc1dca2af25ee2ac4e7 not found: ID does not exist" containerID="d53aaa0166d8bd891dbe81ba4df94e5170a76727f65f2cc1dca2af25ee2ac4e7" Oct 08 08:31:22 crc kubenswrapper[4958]: I1008 08:31:22.080055 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d53aaa0166d8bd891dbe81ba4df94e5170a76727f65f2cc1dca2af25ee2ac4e7"} err="failed to get container status \"d53aaa0166d8bd891dbe81ba4df94e5170a76727f65f2cc1dca2af25ee2ac4e7\": rpc error: code = NotFound desc = could not find container \"d53aaa0166d8bd891dbe81ba4df94e5170a76727f65f2cc1dca2af25ee2ac4e7\": container with ID starting with d53aaa0166d8bd891dbe81ba4df94e5170a76727f65f2cc1dca2af25ee2ac4e7 not found: ID does not exist" Oct 08 08:31:22 crc kubenswrapper[4958]: I1008 08:31:22.080202 4958 scope.go:117] "RemoveContainer" containerID="9ac3a08b960eda78c7b9d7e596cc7b06cbbac331445bfaf64e665109f72e53b8" Oct 08 08:31:22 crc kubenswrapper[4958]: E1008 
08:31:22.081005 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ac3a08b960eda78c7b9d7e596cc7b06cbbac331445bfaf64e665109f72e53b8\": container with ID starting with 9ac3a08b960eda78c7b9d7e596cc7b06cbbac331445bfaf64e665109f72e53b8 not found: ID does not exist" containerID="9ac3a08b960eda78c7b9d7e596cc7b06cbbac331445bfaf64e665109f72e53b8" Oct 08 08:31:22 crc kubenswrapper[4958]: I1008 08:31:22.081047 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac3a08b960eda78c7b9d7e596cc7b06cbbac331445bfaf64e665109f72e53b8"} err="failed to get container status \"9ac3a08b960eda78c7b9d7e596cc7b06cbbac331445bfaf64e665109f72e53b8\": rpc error: code = NotFound desc = could not find container \"9ac3a08b960eda78c7b9d7e596cc7b06cbbac331445bfaf64e665109f72e53b8\": container with ID starting with 9ac3a08b960eda78c7b9d7e596cc7b06cbbac331445bfaf64e665109f72e53b8 not found: ID does not exist" Oct 08 08:31:23 crc kubenswrapper[4958]: I1008 08:31:23.598519 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e98e93c7-813e-4a09-bfa7-132a54e13130" path="/var/lib/kubelet/pods/e98e93c7-813e-4a09-bfa7-132a54e13130/volumes" Oct 08 08:31:24 crc kubenswrapper[4958]: I1008 08:31:24.064507 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-6g59r"] Oct 08 08:31:24 crc kubenswrapper[4958]: I1008 08:31:24.071333 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-6g59r"] Oct 08 08:31:25 crc kubenswrapper[4958]: I1008 08:31:25.603615 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b801665-c4dc-4c61-8223-da6d1f82885d" path="/var/lib/kubelet/pods/4b801665-c4dc-4c61-8223-da6d1f82885d/volumes" Oct 08 08:31:28 crc kubenswrapper[4958]: I1008 08:31:28.577871 4958 scope.go:117] "RemoveContainer" 
containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:31:28 crc kubenswrapper[4958]: E1008 08:31:28.578756 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:31:34 crc kubenswrapper[4958]: I1008 08:31:34.047408 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-7e9b-account-create-6mx4r"] Oct 08 08:31:34 crc kubenswrapper[4958]: I1008 08:31:34.065846 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-7e9b-account-create-6mx4r"] Oct 08 08:31:35 crc kubenswrapper[4958]: I1008 08:31:35.598861 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3102efb5-c30c-441d-bad2-0822dbedad36" path="/var/lib/kubelet/pods/3102efb5-c30c-441d-bad2-0822dbedad36/volumes" Oct 08 08:31:41 crc kubenswrapper[4958]: I1008 08:31:41.577280 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:31:42 crc kubenswrapper[4958]: I1008 08:31:42.159039 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"192038b921bc86bc768a67cd8ac64f4570d5383fbee44461fb89308cbf0ee3f2"} Oct 08 08:31:47 crc kubenswrapper[4958]: I1008 08:31:47.662016 4958 scope.go:117] "RemoveContainer" containerID="d4f132119f42015ad30417916fa9bcfa4961b4c66e9e312785837c3bfa3fa46f" Oct 08 08:31:47 crc kubenswrapper[4958]: I1008 08:31:47.699967 4958 scope.go:117] "RemoveContainer" 
containerID="070c65c827915557ff4cabe953ccaf0aaebd19cc15ed8e920bce37cb6c91105c" Oct 08 08:31:48 crc kubenswrapper[4958]: I1008 08:31:48.059072 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-t4nvc"] Oct 08 08:31:48 crc kubenswrapper[4958]: I1008 08:31:48.073660 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-t4nvc"] Oct 08 08:31:49 crc kubenswrapper[4958]: I1008 08:31:49.610038 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1771a4ba-784c-41a1-b1e3-38ffc2e28ddd" path="/var/lib/kubelet/pods/1771a4ba-784c-41a1-b1e3-38ffc2e28ddd/volumes" Oct 08 08:32:47 crc kubenswrapper[4958]: I1008 08:32:47.881073 4958 scope.go:117] "RemoveContainer" containerID="c26276d0fcda0eefc38418a6b22a1872f9163230b3e475a460b82ee0c1e90a9e" Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.050784 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-hqsx7"] Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.063385 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-hqsx7"] Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.813608 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fdrqb"] Oct 08 08:34:04 crc kubenswrapper[4958]: E1008 08:34:04.814253 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98e93c7-813e-4a09-bfa7-132a54e13130" containerName="registry-server" Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.814281 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98e93c7-813e-4a09-bfa7-132a54e13130" containerName="registry-server" Oct 08 08:34:04 crc kubenswrapper[4958]: E1008 08:34:04.814312 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8eef66-4d62-495f-9170-afb85dc7ccc7" containerName="extract-content" Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.814327 4958 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="9d8eef66-4d62-495f-9170-afb85dc7ccc7" containerName="extract-content" Oct 08 08:34:04 crc kubenswrapper[4958]: E1008 08:34:04.814354 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8eef66-4d62-495f-9170-afb85dc7ccc7" containerName="registry-server" Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.814362 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8eef66-4d62-495f-9170-afb85dc7ccc7" containerName="registry-server" Oct 08 08:34:04 crc kubenswrapper[4958]: E1008 08:34:04.814396 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8eef66-4d62-495f-9170-afb85dc7ccc7" containerName="extract-utilities" Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.814404 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8eef66-4d62-495f-9170-afb85dc7ccc7" containerName="extract-utilities" Oct 08 08:34:04 crc kubenswrapper[4958]: E1008 08:34:04.814414 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98e93c7-813e-4a09-bfa7-132a54e13130" containerName="extract-content" Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.814421 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98e93c7-813e-4a09-bfa7-132a54e13130" containerName="extract-content" Oct 08 08:34:04 crc kubenswrapper[4958]: E1008 08:34:04.814443 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98e93c7-813e-4a09-bfa7-132a54e13130" containerName="extract-utilities" Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.814451 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98e93c7-813e-4a09-bfa7-132a54e13130" containerName="extract-utilities" Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.814722 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e98e93c7-813e-4a09-bfa7-132a54e13130" containerName="registry-server" Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.814776 4958 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9d8eef66-4d62-495f-9170-afb85dc7ccc7" containerName="registry-server" Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.818519 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.844370 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdrqb"] Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.920474 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7h8v\" (UniqueName: \"kubernetes.io/projected/5bee7cc0-febf-4dbc-b666-28dded1685e3-kube-api-access-m7h8v\") pod \"redhat-marketplace-fdrqb\" (UID: \"5bee7cc0-febf-4dbc-b666-28dded1685e3\") " pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.920743 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bee7cc0-febf-4dbc-b666-28dded1685e3-utilities\") pod \"redhat-marketplace-fdrqb\" (UID: \"5bee7cc0-febf-4dbc-b666-28dded1685e3\") " pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:04 crc kubenswrapper[4958]: I1008 08:34:04.920922 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bee7cc0-febf-4dbc-b666-28dded1685e3-catalog-content\") pod \"redhat-marketplace-fdrqb\" (UID: \"5bee7cc0-febf-4dbc-b666-28dded1685e3\") " pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:05 crc kubenswrapper[4958]: I1008 08:34:05.022986 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bee7cc0-febf-4dbc-b666-28dded1685e3-catalog-content\") pod 
\"redhat-marketplace-fdrqb\" (UID: \"5bee7cc0-febf-4dbc-b666-28dded1685e3\") " pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:05 crc kubenswrapper[4958]: I1008 08:34:05.023132 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7h8v\" (UniqueName: \"kubernetes.io/projected/5bee7cc0-febf-4dbc-b666-28dded1685e3-kube-api-access-m7h8v\") pod \"redhat-marketplace-fdrqb\" (UID: \"5bee7cc0-febf-4dbc-b666-28dded1685e3\") " pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:05 crc kubenswrapper[4958]: I1008 08:34:05.023301 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bee7cc0-febf-4dbc-b666-28dded1685e3-utilities\") pod \"redhat-marketplace-fdrqb\" (UID: \"5bee7cc0-febf-4dbc-b666-28dded1685e3\") " pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:05 crc kubenswrapper[4958]: I1008 08:34:05.024201 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bee7cc0-febf-4dbc-b666-28dded1685e3-utilities\") pod \"redhat-marketplace-fdrqb\" (UID: \"5bee7cc0-febf-4dbc-b666-28dded1685e3\") " pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:05 crc kubenswrapper[4958]: I1008 08:34:05.024663 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bee7cc0-febf-4dbc-b666-28dded1685e3-catalog-content\") pod \"redhat-marketplace-fdrqb\" (UID: \"5bee7cc0-febf-4dbc-b666-28dded1685e3\") " pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:05 crc kubenswrapper[4958]: I1008 08:34:05.049468 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7h8v\" (UniqueName: \"kubernetes.io/projected/5bee7cc0-febf-4dbc-b666-28dded1685e3-kube-api-access-m7h8v\") pod \"redhat-marketplace-fdrqb\" (UID: 
\"5bee7cc0-febf-4dbc-b666-28dded1685e3\") " pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:05 crc kubenswrapper[4958]: I1008 08:34:05.149395 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:05 crc kubenswrapper[4958]: I1008 08:34:05.601744 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80e9bdef-307a-4761-953f-16b2db2a4496" path="/var/lib/kubelet/pods/80e9bdef-307a-4761-953f-16b2db2a4496/volumes" Oct 08 08:34:05 crc kubenswrapper[4958]: I1008 08:34:05.628064 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdrqb"] Oct 08 08:34:06 crc kubenswrapper[4958]: I1008 08:34:06.028414 4958 generic.go:334] "Generic (PLEG): container finished" podID="5bee7cc0-febf-4dbc-b666-28dded1685e3" containerID="10fa4108fb2b13697730c2fd9189d2afe98423bad5c5c3df009f75c98a6e4823" exitCode=0 Oct 08 08:34:06 crc kubenswrapper[4958]: I1008 08:34:06.028676 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdrqb" event={"ID":"5bee7cc0-febf-4dbc-b666-28dded1685e3","Type":"ContainerDied","Data":"10fa4108fb2b13697730c2fd9189d2afe98423bad5c5c3df009f75c98a6e4823"} Oct 08 08:34:06 crc kubenswrapper[4958]: I1008 08:34:06.028701 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdrqb" event={"ID":"5bee7cc0-febf-4dbc-b666-28dded1685e3","Type":"ContainerStarted","Data":"d3039cafc6cd60c9d0ef9c8f9940432e73ae5a57f78b2414889836966e2d6d79"} Oct 08 08:34:06 crc kubenswrapper[4958]: I1008 08:34:06.845836 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:34:06 crc 
kubenswrapper[4958]: I1008 08:34:06.846237 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:34:07 crc kubenswrapper[4958]: I1008 08:34:07.039827 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdrqb" event={"ID":"5bee7cc0-febf-4dbc-b666-28dded1685e3","Type":"ContainerStarted","Data":"fa9c7de00a44a6b39ee9b45664008719de274bb4b65e474e62c61de3bdb32371"} Oct 08 08:34:08 crc kubenswrapper[4958]: I1008 08:34:08.053027 4958 generic.go:334] "Generic (PLEG): container finished" podID="5bee7cc0-febf-4dbc-b666-28dded1685e3" containerID="fa9c7de00a44a6b39ee9b45664008719de274bb4b65e474e62c61de3bdb32371" exitCode=0 Oct 08 08:34:08 crc kubenswrapper[4958]: I1008 08:34:08.053100 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdrqb" event={"ID":"5bee7cc0-febf-4dbc-b666-28dded1685e3","Type":"ContainerDied","Data":"fa9c7de00a44a6b39ee9b45664008719de274bb4b65e474e62c61de3bdb32371"} Oct 08 08:34:09 crc kubenswrapper[4958]: I1008 08:34:09.082332 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdrqb" event={"ID":"5bee7cc0-febf-4dbc-b666-28dded1685e3","Type":"ContainerStarted","Data":"5ba4d2681409713178dab1ba324cfd477c51d30f522a457001273a6bce156df4"} Oct 08 08:34:09 crc kubenswrapper[4958]: I1008 08:34:09.103591 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fdrqb" podStartSLOduration=2.659028141 podStartE2EDuration="5.103566934s" podCreationTimestamp="2025-10-08 08:34:04 +0000 UTC" firstStartedPulling="2025-10-08 08:34:06.030509621 +0000 UTC m=+7189.160202222" 
lastFinishedPulling="2025-10-08 08:34:08.475048384 +0000 UTC m=+7191.604741015" observedRunningTime="2025-10-08 08:34:09.103206495 +0000 UTC m=+7192.232899096" watchObservedRunningTime="2025-10-08 08:34:09.103566934 +0000 UTC m=+7192.233259535" Oct 08 08:34:14 crc kubenswrapper[4958]: I1008 08:34:14.046057 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-fe9d-account-create-g6jr9"] Oct 08 08:34:14 crc kubenswrapper[4958]: I1008 08:34:14.059167 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-fe9d-account-create-g6jr9"] Oct 08 08:34:15 crc kubenswrapper[4958]: I1008 08:34:15.151109 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:15 crc kubenswrapper[4958]: I1008 08:34:15.151535 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:15 crc kubenswrapper[4958]: I1008 08:34:15.222192 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:15 crc kubenswrapper[4958]: I1008 08:34:15.609674 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ad967a9-d897-4751-abbd-28a1c396efea" path="/var/lib/kubelet/pods/3ad967a9-d897-4751-abbd-28a1c396efea/volumes" Oct 08 08:34:16 crc kubenswrapper[4958]: I1008 08:34:16.242302 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:16 crc kubenswrapper[4958]: I1008 08:34:16.323786 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdrqb"] Oct 08 08:34:18 crc kubenswrapper[4958]: I1008 08:34:18.184323 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fdrqb" podUID="5bee7cc0-febf-4dbc-b666-28dded1685e3" 
containerName="registry-server" containerID="cri-o://5ba4d2681409713178dab1ba324cfd477c51d30f522a457001273a6bce156df4" gracePeriod=2 Oct 08 08:34:18 crc kubenswrapper[4958]: I1008 08:34:18.795829 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:18 crc kubenswrapper[4958]: I1008 08:34:18.858031 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7h8v\" (UniqueName: \"kubernetes.io/projected/5bee7cc0-febf-4dbc-b666-28dded1685e3-kube-api-access-m7h8v\") pod \"5bee7cc0-febf-4dbc-b666-28dded1685e3\" (UID: \"5bee7cc0-febf-4dbc-b666-28dded1685e3\") " Oct 08 08:34:18 crc kubenswrapper[4958]: I1008 08:34:18.858211 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bee7cc0-febf-4dbc-b666-28dded1685e3-utilities\") pod \"5bee7cc0-febf-4dbc-b666-28dded1685e3\" (UID: \"5bee7cc0-febf-4dbc-b666-28dded1685e3\") " Oct 08 08:34:18 crc kubenswrapper[4958]: I1008 08:34:18.858485 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bee7cc0-febf-4dbc-b666-28dded1685e3-catalog-content\") pod \"5bee7cc0-febf-4dbc-b666-28dded1685e3\" (UID: \"5bee7cc0-febf-4dbc-b666-28dded1685e3\") " Oct 08 08:34:18 crc kubenswrapper[4958]: I1008 08:34:18.859471 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bee7cc0-febf-4dbc-b666-28dded1685e3-utilities" (OuterVolumeSpecName: "utilities") pod "5bee7cc0-febf-4dbc-b666-28dded1685e3" (UID: "5bee7cc0-febf-4dbc-b666-28dded1685e3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:34:18 crc kubenswrapper[4958]: I1008 08:34:18.861209 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bee7cc0-febf-4dbc-b666-28dded1685e3-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:34:18 crc kubenswrapper[4958]: I1008 08:34:18.864168 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bee7cc0-febf-4dbc-b666-28dded1685e3-kube-api-access-m7h8v" (OuterVolumeSpecName: "kube-api-access-m7h8v") pod "5bee7cc0-febf-4dbc-b666-28dded1685e3" (UID: "5bee7cc0-febf-4dbc-b666-28dded1685e3"). InnerVolumeSpecName "kube-api-access-m7h8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:34:18 crc kubenswrapper[4958]: I1008 08:34:18.884674 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bee7cc0-febf-4dbc-b666-28dded1685e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bee7cc0-febf-4dbc-b666-28dded1685e3" (UID: "5bee7cc0-febf-4dbc-b666-28dded1685e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:34:18 crc kubenswrapper[4958]: I1008 08:34:18.963070 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7h8v\" (UniqueName: \"kubernetes.io/projected/5bee7cc0-febf-4dbc-b666-28dded1685e3-kube-api-access-m7h8v\") on node \"crc\" DevicePath \"\"" Oct 08 08:34:18 crc kubenswrapper[4958]: I1008 08:34:18.963109 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bee7cc0-febf-4dbc-b666-28dded1685e3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.196307 4958 generic.go:334] "Generic (PLEG): container finished" podID="5bee7cc0-febf-4dbc-b666-28dded1685e3" containerID="5ba4d2681409713178dab1ba324cfd477c51d30f522a457001273a6bce156df4" exitCode=0 Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.196372 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdrqb" Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.196368 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdrqb" event={"ID":"5bee7cc0-febf-4dbc-b666-28dded1685e3","Type":"ContainerDied","Data":"5ba4d2681409713178dab1ba324cfd477c51d30f522a457001273a6bce156df4"} Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.197397 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdrqb" event={"ID":"5bee7cc0-febf-4dbc-b666-28dded1685e3","Type":"ContainerDied","Data":"d3039cafc6cd60c9d0ef9c8f9940432e73ae5a57f78b2414889836966e2d6d79"} Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.197424 4958 scope.go:117] "RemoveContainer" containerID="5ba4d2681409713178dab1ba324cfd477c51d30f522a457001273a6bce156df4" Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.236929 4958 scope.go:117] "RemoveContainer" 
containerID="fa9c7de00a44a6b39ee9b45664008719de274bb4b65e474e62c61de3bdb32371" Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.240758 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdrqb"] Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.254570 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdrqb"] Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.270405 4958 scope.go:117] "RemoveContainer" containerID="10fa4108fb2b13697730c2fd9189d2afe98423bad5c5c3df009f75c98a6e4823" Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.313406 4958 scope.go:117] "RemoveContainer" containerID="5ba4d2681409713178dab1ba324cfd477c51d30f522a457001273a6bce156df4" Oct 08 08:34:19 crc kubenswrapper[4958]: E1008 08:34:19.314668 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ba4d2681409713178dab1ba324cfd477c51d30f522a457001273a6bce156df4\": container with ID starting with 5ba4d2681409713178dab1ba324cfd477c51d30f522a457001273a6bce156df4 not found: ID does not exist" containerID="5ba4d2681409713178dab1ba324cfd477c51d30f522a457001273a6bce156df4" Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.314708 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ba4d2681409713178dab1ba324cfd477c51d30f522a457001273a6bce156df4"} err="failed to get container status \"5ba4d2681409713178dab1ba324cfd477c51d30f522a457001273a6bce156df4\": rpc error: code = NotFound desc = could not find container \"5ba4d2681409713178dab1ba324cfd477c51d30f522a457001273a6bce156df4\": container with ID starting with 5ba4d2681409713178dab1ba324cfd477c51d30f522a457001273a6bce156df4 not found: ID does not exist" Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.314734 4958 scope.go:117] "RemoveContainer" 
containerID="fa9c7de00a44a6b39ee9b45664008719de274bb4b65e474e62c61de3bdb32371" Oct 08 08:34:19 crc kubenswrapper[4958]: E1008 08:34:19.315054 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa9c7de00a44a6b39ee9b45664008719de274bb4b65e474e62c61de3bdb32371\": container with ID starting with fa9c7de00a44a6b39ee9b45664008719de274bb4b65e474e62c61de3bdb32371 not found: ID does not exist" containerID="fa9c7de00a44a6b39ee9b45664008719de274bb4b65e474e62c61de3bdb32371" Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.315080 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa9c7de00a44a6b39ee9b45664008719de274bb4b65e474e62c61de3bdb32371"} err="failed to get container status \"fa9c7de00a44a6b39ee9b45664008719de274bb4b65e474e62c61de3bdb32371\": rpc error: code = NotFound desc = could not find container \"fa9c7de00a44a6b39ee9b45664008719de274bb4b65e474e62c61de3bdb32371\": container with ID starting with fa9c7de00a44a6b39ee9b45664008719de274bb4b65e474e62c61de3bdb32371 not found: ID does not exist" Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.315096 4958 scope.go:117] "RemoveContainer" containerID="10fa4108fb2b13697730c2fd9189d2afe98423bad5c5c3df009f75c98a6e4823" Oct 08 08:34:19 crc kubenswrapper[4958]: E1008 08:34:19.315344 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10fa4108fb2b13697730c2fd9189d2afe98423bad5c5c3df009f75c98a6e4823\": container with ID starting with 10fa4108fb2b13697730c2fd9189d2afe98423bad5c5c3df009f75c98a6e4823 not found: ID does not exist" containerID="10fa4108fb2b13697730c2fd9189d2afe98423bad5c5c3df009f75c98a6e4823" Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.315368 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"10fa4108fb2b13697730c2fd9189d2afe98423bad5c5c3df009f75c98a6e4823"} err="failed to get container status \"10fa4108fb2b13697730c2fd9189d2afe98423bad5c5c3df009f75c98a6e4823\": rpc error: code = NotFound desc = could not find container \"10fa4108fb2b13697730c2fd9189d2afe98423bad5c5c3df009f75c98a6e4823\": container with ID starting with 10fa4108fb2b13697730c2fd9189d2afe98423bad5c5c3df009f75c98a6e4823 not found: ID does not exist" Oct 08 08:34:19 crc kubenswrapper[4958]: E1008 08:34:19.406830 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bee7cc0_febf_4dbc_b666_28dded1685e3.slice\": RecentStats: unable to find data in memory cache]" Oct 08 08:34:19 crc kubenswrapper[4958]: I1008 08:34:19.587838 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bee7cc0-febf-4dbc-b666-28dded1685e3" path="/var/lib/kubelet/pods/5bee7cc0-febf-4dbc-b666-28dded1685e3/volumes" Oct 08 08:34:26 crc kubenswrapper[4958]: I1008 08:34:26.042348 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-s5dh6"] Oct 08 08:34:26 crc kubenswrapper[4958]: I1008 08:34:26.055322 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-s5dh6"] Oct 08 08:34:27 crc kubenswrapper[4958]: I1008 08:34:27.594668 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85" path="/var/lib/kubelet/pods/8f2a7258-4c60-4ec5-ab90-5bfbfbe2bc85/volumes" Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.258891 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bxnrb"] Oct 08 08:34:29 crc kubenswrapper[4958]: E1008 08:34:29.259863 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bee7cc0-febf-4dbc-b666-28dded1685e3" containerName="extract-utilities" Oct 08 
08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.259883 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bee7cc0-febf-4dbc-b666-28dded1685e3" containerName="extract-utilities" Oct 08 08:34:29 crc kubenswrapper[4958]: E1008 08:34:29.259938 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bee7cc0-febf-4dbc-b666-28dded1685e3" containerName="registry-server" Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.259956 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bee7cc0-febf-4dbc-b666-28dded1685e3" containerName="registry-server" Oct 08 08:34:29 crc kubenswrapper[4958]: E1008 08:34:29.260002 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bee7cc0-febf-4dbc-b666-28dded1685e3" containerName="extract-content" Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.260015 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bee7cc0-febf-4dbc-b666-28dded1685e3" containerName="extract-content" Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.260408 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bee7cc0-febf-4dbc-b666-28dded1685e3" containerName="registry-server" Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.263062 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.291926 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bxnrb"] Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.416066 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d65739-e6c9-4c20-87dc-f088eee195a2-catalog-content\") pod \"certified-operators-bxnrb\" (UID: \"85d65739-e6c9-4c20-87dc-f088eee195a2\") " pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.416141 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kndkl\" (UniqueName: \"kubernetes.io/projected/85d65739-e6c9-4c20-87dc-f088eee195a2-kube-api-access-kndkl\") pod \"certified-operators-bxnrb\" (UID: \"85d65739-e6c9-4c20-87dc-f088eee195a2\") " pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.416312 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d65739-e6c9-4c20-87dc-f088eee195a2-utilities\") pod \"certified-operators-bxnrb\" (UID: \"85d65739-e6c9-4c20-87dc-f088eee195a2\") " pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.517879 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d65739-e6c9-4c20-87dc-f088eee195a2-catalog-content\") pod \"certified-operators-bxnrb\" (UID: \"85d65739-e6c9-4c20-87dc-f088eee195a2\") " pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.518278 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kndkl\" (UniqueName: \"kubernetes.io/projected/85d65739-e6c9-4c20-87dc-f088eee195a2-kube-api-access-kndkl\") pod \"certified-operators-bxnrb\" (UID: \"85d65739-e6c9-4c20-87dc-f088eee195a2\") " pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.518357 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d65739-e6c9-4c20-87dc-f088eee195a2-utilities\") pod \"certified-operators-bxnrb\" (UID: \"85d65739-e6c9-4c20-87dc-f088eee195a2\") " pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.518490 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d65739-e6c9-4c20-87dc-f088eee195a2-catalog-content\") pod \"certified-operators-bxnrb\" (UID: \"85d65739-e6c9-4c20-87dc-f088eee195a2\") " pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.518893 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d65739-e6c9-4c20-87dc-f088eee195a2-utilities\") pod \"certified-operators-bxnrb\" (UID: \"85d65739-e6c9-4c20-87dc-f088eee195a2\") " pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.544959 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kndkl\" (UniqueName: \"kubernetes.io/projected/85d65739-e6c9-4c20-87dc-f088eee195a2-kube-api-access-kndkl\") pod \"certified-operators-bxnrb\" (UID: \"85d65739-e6c9-4c20-87dc-f088eee195a2\") " pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:29 crc kubenswrapper[4958]: I1008 08:34:29.611508 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:30 crc kubenswrapper[4958]: I1008 08:34:30.139948 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bxnrb"] Oct 08 08:34:30 crc kubenswrapper[4958]: I1008 08:34:30.337403 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxnrb" event={"ID":"85d65739-e6c9-4c20-87dc-f088eee195a2","Type":"ContainerStarted","Data":"4a93904fc20a96e260b9692b5ecec0d9f8c4522c8e61a94315f03b310dbb3053"} Oct 08 08:34:31 crc kubenswrapper[4958]: I1008 08:34:31.350791 4958 generic.go:334] "Generic (PLEG): container finished" podID="85d65739-e6c9-4c20-87dc-f088eee195a2" containerID="3aae8d49a5ef5a2f44893fd4f250852fed448b311f30b9bc242f78af93107c4b" exitCode=0 Oct 08 08:34:31 crc kubenswrapper[4958]: I1008 08:34:31.350882 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxnrb" event={"ID":"85d65739-e6c9-4c20-87dc-f088eee195a2","Type":"ContainerDied","Data":"3aae8d49a5ef5a2f44893fd4f250852fed448b311f30b9bc242f78af93107c4b"} Oct 08 08:34:32 crc kubenswrapper[4958]: I1008 08:34:32.402222 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxnrb" event={"ID":"85d65739-e6c9-4c20-87dc-f088eee195a2","Type":"ContainerStarted","Data":"2e4c13ab1c2d3ffa148ed5555d16fbc3bd2f92f8c971a6b29cc6370742b55ef0"} Oct 08 08:34:34 crc kubenswrapper[4958]: I1008 08:34:34.424684 4958 generic.go:334] "Generic (PLEG): container finished" podID="85d65739-e6c9-4c20-87dc-f088eee195a2" containerID="2e4c13ab1c2d3ffa148ed5555d16fbc3bd2f92f8c971a6b29cc6370742b55ef0" exitCode=0 Oct 08 08:34:34 crc kubenswrapper[4958]: I1008 08:34:34.424749 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxnrb" 
event={"ID":"85d65739-e6c9-4c20-87dc-f088eee195a2","Type":"ContainerDied","Data":"2e4c13ab1c2d3ffa148ed5555d16fbc3bd2f92f8c971a6b29cc6370742b55ef0"} Oct 08 08:34:35 crc kubenswrapper[4958]: I1008 08:34:35.439925 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxnrb" event={"ID":"85d65739-e6c9-4c20-87dc-f088eee195a2","Type":"ContainerStarted","Data":"ccaf9a7fd271b71c1c30ae2ebd266bb071ac8cb6376e3a65feff0da1ecb0c62f"} Oct 08 08:34:35 crc kubenswrapper[4958]: I1008 08:34:35.469776 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bxnrb" podStartSLOduration=2.991440513 podStartE2EDuration="6.469755477s" podCreationTimestamp="2025-10-08 08:34:29 +0000 UTC" firstStartedPulling="2025-10-08 08:34:31.353718494 +0000 UTC m=+7214.483411095" lastFinishedPulling="2025-10-08 08:34:34.832033448 +0000 UTC m=+7217.961726059" observedRunningTime="2025-10-08 08:34:35.459015716 +0000 UTC m=+7218.588708337" watchObservedRunningTime="2025-10-08 08:34:35.469755477 +0000 UTC m=+7218.599448078" Oct 08 08:34:36 crc kubenswrapper[4958]: I1008 08:34:36.844509 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:34:36 crc kubenswrapper[4958]: I1008 08:34:36.844884 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:34:39 crc kubenswrapper[4958]: I1008 08:34:39.617491 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:39 crc kubenswrapper[4958]: I1008 08:34:39.618730 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:39 crc kubenswrapper[4958]: I1008 08:34:39.680697 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:40 crc kubenswrapper[4958]: I1008 08:34:40.561051 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:41 crc kubenswrapper[4958]: I1008 08:34:41.244799 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bxnrb"] Oct 08 08:34:42 crc kubenswrapper[4958]: I1008 08:34:42.526428 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bxnrb" podUID="85d65739-e6c9-4c20-87dc-f088eee195a2" containerName="registry-server" containerID="cri-o://ccaf9a7fd271b71c1c30ae2ebd266bb071ac8cb6376e3a65feff0da1ecb0c62f" gracePeriod=2 Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.053548 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.182662 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d65739-e6c9-4c20-87dc-f088eee195a2-utilities\") pod \"85d65739-e6c9-4c20-87dc-f088eee195a2\" (UID: \"85d65739-e6c9-4c20-87dc-f088eee195a2\") " Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.182746 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kndkl\" (UniqueName: \"kubernetes.io/projected/85d65739-e6c9-4c20-87dc-f088eee195a2-kube-api-access-kndkl\") pod \"85d65739-e6c9-4c20-87dc-f088eee195a2\" (UID: \"85d65739-e6c9-4c20-87dc-f088eee195a2\") " Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.183026 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d65739-e6c9-4c20-87dc-f088eee195a2-catalog-content\") pod \"85d65739-e6c9-4c20-87dc-f088eee195a2\" (UID: \"85d65739-e6c9-4c20-87dc-f088eee195a2\") " Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.183587 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85d65739-e6c9-4c20-87dc-f088eee195a2-utilities" (OuterVolumeSpecName: "utilities") pod "85d65739-e6c9-4c20-87dc-f088eee195a2" (UID: "85d65739-e6c9-4c20-87dc-f088eee195a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.198364 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85d65739-e6c9-4c20-87dc-f088eee195a2-kube-api-access-kndkl" (OuterVolumeSpecName: "kube-api-access-kndkl") pod "85d65739-e6c9-4c20-87dc-f088eee195a2" (UID: "85d65739-e6c9-4c20-87dc-f088eee195a2"). InnerVolumeSpecName "kube-api-access-kndkl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.238079 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85d65739-e6c9-4c20-87dc-f088eee195a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85d65739-e6c9-4c20-87dc-f088eee195a2" (UID: "85d65739-e6c9-4c20-87dc-f088eee195a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.285229 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d65739-e6c9-4c20-87dc-f088eee195a2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.285264 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d65739-e6c9-4c20-87dc-f088eee195a2-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.285274 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kndkl\" (UniqueName: \"kubernetes.io/projected/85d65739-e6c9-4c20-87dc-f088eee195a2-kube-api-access-kndkl\") on node \"crc\" DevicePath \"\"" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.539813 4958 generic.go:334] "Generic (PLEG): container finished" podID="85d65739-e6c9-4c20-87dc-f088eee195a2" containerID="ccaf9a7fd271b71c1c30ae2ebd266bb071ac8cb6376e3a65feff0da1ecb0c62f" exitCode=0 Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.539858 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxnrb" event={"ID":"85d65739-e6c9-4c20-87dc-f088eee195a2","Type":"ContainerDied","Data":"ccaf9a7fd271b71c1c30ae2ebd266bb071ac8cb6376e3a65feff0da1ecb0c62f"} Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.539891 4958 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-bxnrb" event={"ID":"85d65739-e6c9-4c20-87dc-f088eee195a2","Type":"ContainerDied","Data":"4a93904fc20a96e260b9692b5ecec0d9f8c4522c8e61a94315f03b310dbb3053"} Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.539913 4958 scope.go:117] "RemoveContainer" containerID="ccaf9a7fd271b71c1c30ae2ebd266bb071ac8cb6376e3a65feff0da1ecb0c62f" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.539980 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bxnrb" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.581690 4958 scope.go:117] "RemoveContainer" containerID="2e4c13ab1c2d3ffa148ed5555d16fbc3bd2f92f8c971a6b29cc6370742b55ef0" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.595378 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bxnrb"] Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.596098 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bxnrb"] Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.609739 4958 scope.go:117] "RemoveContainer" containerID="3aae8d49a5ef5a2f44893fd4f250852fed448b311f30b9bc242f78af93107c4b" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.681898 4958 scope.go:117] "RemoveContainer" containerID="ccaf9a7fd271b71c1c30ae2ebd266bb071ac8cb6376e3a65feff0da1ecb0c62f" Oct 08 08:34:43 crc kubenswrapper[4958]: E1008 08:34:43.682364 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccaf9a7fd271b71c1c30ae2ebd266bb071ac8cb6376e3a65feff0da1ecb0c62f\": container with ID starting with ccaf9a7fd271b71c1c30ae2ebd266bb071ac8cb6376e3a65feff0da1ecb0c62f not found: ID does not exist" containerID="ccaf9a7fd271b71c1c30ae2ebd266bb071ac8cb6376e3a65feff0da1ecb0c62f" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 
08:34:43.682409 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccaf9a7fd271b71c1c30ae2ebd266bb071ac8cb6376e3a65feff0da1ecb0c62f"} err="failed to get container status \"ccaf9a7fd271b71c1c30ae2ebd266bb071ac8cb6376e3a65feff0da1ecb0c62f\": rpc error: code = NotFound desc = could not find container \"ccaf9a7fd271b71c1c30ae2ebd266bb071ac8cb6376e3a65feff0da1ecb0c62f\": container with ID starting with ccaf9a7fd271b71c1c30ae2ebd266bb071ac8cb6376e3a65feff0da1ecb0c62f not found: ID does not exist" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.682455 4958 scope.go:117] "RemoveContainer" containerID="2e4c13ab1c2d3ffa148ed5555d16fbc3bd2f92f8c971a6b29cc6370742b55ef0" Oct 08 08:34:43 crc kubenswrapper[4958]: E1008 08:34:43.682734 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e4c13ab1c2d3ffa148ed5555d16fbc3bd2f92f8c971a6b29cc6370742b55ef0\": container with ID starting with 2e4c13ab1c2d3ffa148ed5555d16fbc3bd2f92f8c971a6b29cc6370742b55ef0 not found: ID does not exist" containerID="2e4c13ab1c2d3ffa148ed5555d16fbc3bd2f92f8c971a6b29cc6370742b55ef0" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.682762 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4c13ab1c2d3ffa148ed5555d16fbc3bd2f92f8c971a6b29cc6370742b55ef0"} err="failed to get container status \"2e4c13ab1c2d3ffa148ed5555d16fbc3bd2f92f8c971a6b29cc6370742b55ef0\": rpc error: code = NotFound desc = could not find container \"2e4c13ab1c2d3ffa148ed5555d16fbc3bd2f92f8c971a6b29cc6370742b55ef0\": container with ID starting with 2e4c13ab1c2d3ffa148ed5555d16fbc3bd2f92f8c971a6b29cc6370742b55ef0 not found: ID does not exist" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.682800 4958 scope.go:117] "RemoveContainer" containerID="3aae8d49a5ef5a2f44893fd4f250852fed448b311f30b9bc242f78af93107c4b" Oct 08 08:34:43 crc 
kubenswrapper[4958]: E1008 08:34:43.683032 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aae8d49a5ef5a2f44893fd4f250852fed448b311f30b9bc242f78af93107c4b\": container with ID starting with 3aae8d49a5ef5a2f44893fd4f250852fed448b311f30b9bc242f78af93107c4b not found: ID does not exist" containerID="3aae8d49a5ef5a2f44893fd4f250852fed448b311f30b9bc242f78af93107c4b" Oct 08 08:34:43 crc kubenswrapper[4958]: I1008 08:34:43.683078 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aae8d49a5ef5a2f44893fd4f250852fed448b311f30b9bc242f78af93107c4b"} err="failed to get container status \"3aae8d49a5ef5a2f44893fd4f250852fed448b311f30b9bc242f78af93107c4b\": rpc error: code = NotFound desc = could not find container \"3aae8d49a5ef5a2f44893fd4f250852fed448b311f30b9bc242f78af93107c4b\": container with ID starting with 3aae8d49a5ef5a2f44893fd4f250852fed448b311f30b9bc242f78af93107c4b not found: ID does not exist" Oct 08 08:34:45 crc kubenswrapper[4958]: I1008 08:34:45.594777 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85d65739-e6c9-4c20-87dc-f088eee195a2" path="/var/lib/kubelet/pods/85d65739-e6c9-4c20-87dc-f088eee195a2/volumes" Oct 08 08:34:48 crc kubenswrapper[4958]: I1008 08:34:48.012008 4958 scope.go:117] "RemoveContainer" containerID="47c147e6b9954b488dbf4cecb978a53db3fe9fbda61df673e5dcd3713d4ab2eb" Oct 08 08:34:48 crc kubenswrapper[4958]: I1008 08:34:48.064757 4958 scope.go:117] "RemoveContainer" containerID="c484b4f80e35e503661d5dbaffd41bcdb3db95a9f73d2169a8e74860449a266b" Oct 08 08:34:48 crc kubenswrapper[4958]: I1008 08:34:48.132194 4958 scope.go:117] "RemoveContainer" containerID="e16b874155330c9cc82399413d1abd65123a2ab23ed87ab7fcd42dfb9067da3a" Oct 08 08:35:06 crc kubenswrapper[4958]: I1008 08:35:06.845375 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:35:06 crc kubenswrapper[4958]: I1008 08:35:06.846006 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:35:06 crc kubenswrapper[4958]: I1008 08:35:06.846057 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 08:35:06 crc kubenswrapper[4958]: I1008 08:35:06.846956 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"192038b921bc86bc768a67cd8ac64f4570d5383fbee44461fb89308cbf0ee3f2"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 08:35:06 crc kubenswrapper[4958]: I1008 08:35:06.847034 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://192038b921bc86bc768a67cd8ac64f4570d5383fbee44461fb89308cbf0ee3f2" gracePeriod=600 Oct 08 08:35:07 crc kubenswrapper[4958]: I1008 08:35:07.795413 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="192038b921bc86bc768a67cd8ac64f4570d5383fbee44461fb89308cbf0ee3f2" exitCode=0 Oct 08 08:35:07 crc kubenswrapper[4958]: I1008 08:35:07.795466 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"192038b921bc86bc768a67cd8ac64f4570d5383fbee44461fb89308cbf0ee3f2"} Oct 08 08:35:07 crc kubenswrapper[4958]: I1008 08:35:07.796113 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94"} Oct 08 08:35:07 crc kubenswrapper[4958]: I1008 08:35:07.796141 4958 scope.go:117] "RemoveContainer" containerID="3b255e8d4feeec06e5c0a9b98fd8f0e6cfebde99fb29d992a4ee84d0a6e5b342" Oct 08 08:37:36 crc kubenswrapper[4958]: I1008 08:37:36.845545 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:37:36 crc kubenswrapper[4958]: I1008 08:37:36.848436 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:38:05 crc kubenswrapper[4958]: I1008 08:38:05.986135 4958 generic.go:334] "Generic (PLEG): container finished" podID="5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b" containerID="0a679e4b647d138299f4a9f7ba30dccbdaab4dd58a494512d9825a90db2d3e84" exitCode=0 Oct 08 08:38:05 crc kubenswrapper[4958]: I1008 08:38:05.986259 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" 
event={"ID":"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b","Type":"ContainerDied","Data":"0a679e4b647d138299f4a9f7ba30dccbdaab4dd58a494512d9825a90db2d3e84"} Oct 08 08:38:06 crc kubenswrapper[4958]: I1008 08:38:06.845339 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:38:06 crc kubenswrapper[4958]: I1008 08:38:06.845745 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:38:07 crc kubenswrapper[4958]: I1008 08:38:07.466397 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:38:07 crc kubenswrapper[4958]: I1008 08:38:07.648702 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqt5p\" (UniqueName: \"kubernetes.io/projected/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-kube-api-access-rqt5p\") pod \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " Oct 08 08:38:07 crc kubenswrapper[4958]: I1008 08:38:07.649244 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-ssh-key\") pod \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " Oct 08 08:38:07 crc kubenswrapper[4958]: I1008 08:38:07.649320 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-inventory\") pod \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " Oct 08 08:38:07 crc kubenswrapper[4958]: I1008 08:38:07.649428 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-tripleo-cleanup-combined-ca-bundle\") pod \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\" (UID: \"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b\") " Oct 08 08:38:07 crc kubenswrapper[4958]: I1008 08:38:07.657358 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-kube-api-access-rqt5p" (OuterVolumeSpecName: "kube-api-access-rqt5p") pod "5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b" (UID: "5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b"). InnerVolumeSpecName "kube-api-access-rqt5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:38:07 crc kubenswrapper[4958]: I1008 08:38:07.658286 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b" (UID: "5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:38:07 crc kubenswrapper[4958]: I1008 08:38:07.684343 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-inventory" (OuterVolumeSpecName: "inventory") pod "5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b" (UID: "5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:38:07 crc kubenswrapper[4958]: I1008 08:38:07.695943 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b" (UID: "5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:38:07 crc kubenswrapper[4958]: I1008 08:38:07.753424 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqt5p\" (UniqueName: \"kubernetes.io/projected/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-kube-api-access-rqt5p\") on node \"crc\" DevicePath \"\"" Oct 08 08:38:07 crc kubenswrapper[4958]: I1008 08:38:07.754150 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:38:07 crc kubenswrapper[4958]: I1008 08:38:07.754276 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 08:38:07 crc kubenswrapper[4958]: I1008 08:38:07.754371 4958 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:38:08 crc kubenswrapper[4958]: I1008 08:38:08.011822 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" event={"ID":"5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b","Type":"ContainerDied","Data":"1983eb61a58df7568132ab06570222a620d716fc12642210961234831294a30d"} Oct 08 08:38:08 crc kubenswrapper[4958]: I1008 08:38:08.012196 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1983eb61a58df7568132ab06570222a620d716fc12642210961234831294a30d" Oct 08 08:38:08 crc kubenswrapper[4958]: I1008 08:38:08.011931 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.642274 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-8bxd8"] Oct 08 08:38:16 crc kubenswrapper[4958]: E1008 08:38:16.643318 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.643332 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Oct 08 08:38:16 crc kubenswrapper[4958]: E1008 08:38:16.643344 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d65739-e6c9-4c20-87dc-f088eee195a2" containerName="extract-content" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.643350 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d65739-e6c9-4c20-87dc-f088eee195a2" containerName="extract-content" Oct 08 08:38:16 crc kubenswrapper[4958]: E1008 08:38:16.643393 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d65739-e6c9-4c20-87dc-f088eee195a2" containerName="registry-server" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.643399 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d65739-e6c9-4c20-87dc-f088eee195a2" containerName="registry-server" Oct 08 08:38:16 crc kubenswrapper[4958]: E1008 08:38:16.643416 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d65739-e6c9-4c20-87dc-f088eee195a2" containerName="extract-utilities" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.643422 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d65739-e6c9-4c20-87dc-f088eee195a2" containerName="extract-utilities" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.643614 4958 
memory_manager.go:354] "RemoveStaleState removing state" podUID="85d65739-e6c9-4c20-87dc-f088eee195a2" containerName="registry-server" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.643645 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.644359 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.648181 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.648291 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.648181 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.649556 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-8bxd8"] Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.651756 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.789930 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-8bxd8\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.790346 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-inventory\") pod \"bootstrap-openstack-openstack-cell1-8bxd8\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.790555 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjc6t\" (UniqueName: \"kubernetes.io/projected/3d9330f6-7990-462e-b85f-946c1255f76c-kube-api-access-bjc6t\") pod \"bootstrap-openstack-openstack-cell1-8bxd8\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.790657 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-8bxd8\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.892264 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjc6t\" (UniqueName: \"kubernetes.io/projected/3d9330f6-7990-462e-b85f-946c1255f76c-kube-api-access-bjc6t\") pod \"bootstrap-openstack-openstack-cell1-8bxd8\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.892319 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-8bxd8\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " 
pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.892380 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-8bxd8\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.892414 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-inventory\") pod \"bootstrap-openstack-openstack-cell1-8bxd8\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.901433 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-inventory\") pod \"bootstrap-openstack-openstack-cell1-8bxd8\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.906501 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-8bxd8\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.910538 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-openstack-openstack-cell1-8bxd8\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.912912 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjc6t\" (UniqueName: \"kubernetes.io/projected/3d9330f6-7990-462e-b85f-946c1255f76c-kube-api-access-bjc6t\") pod \"bootstrap-openstack-openstack-cell1-8bxd8\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:38:16 crc kubenswrapper[4958]: I1008 08:38:16.982371 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:38:18 crc kubenswrapper[4958]: I1008 08:38:18.191399 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 08:38:18 crc kubenswrapper[4958]: I1008 08:38:18.198483 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-8bxd8"] Oct 08 08:38:18 crc kubenswrapper[4958]: I1008 08:38:18.947473 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:38:19 crc kubenswrapper[4958]: I1008 08:38:19.152338 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" event={"ID":"3d9330f6-7990-462e-b85f-946c1255f76c","Type":"ContainerStarted","Data":"f709091881c282863bfb37167044b575149087fc3124007a9dff06422eee4d9f"} Oct 08 08:38:20 crc kubenswrapper[4958]: I1008 08:38:20.162449 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" event={"ID":"3d9330f6-7990-462e-b85f-946c1255f76c","Type":"ContainerStarted","Data":"cb4952b22f40c09f1bea9a7c706d1ec4541a5785ca525850bc96619dbfa00630"} Oct 08 08:38:20 crc kubenswrapper[4958]: I1008 
08:38:20.190496 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" podStartSLOduration=3.43921549 podStartE2EDuration="4.190468932s" podCreationTimestamp="2025-10-08 08:38:16 +0000 UTC" firstStartedPulling="2025-10-08 08:38:18.191153393 +0000 UTC m=+7441.320845994" lastFinishedPulling="2025-10-08 08:38:18.942406795 +0000 UTC m=+7442.072099436" observedRunningTime="2025-10-08 08:38:20.181425546 +0000 UTC m=+7443.311118167" watchObservedRunningTime="2025-10-08 08:38:20.190468932 +0000 UTC m=+7443.320161573" Oct 08 08:38:36 crc kubenswrapper[4958]: I1008 08:38:36.845542 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:38:36 crc kubenswrapper[4958]: I1008 08:38:36.846243 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:38:36 crc kubenswrapper[4958]: I1008 08:38:36.846294 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 08:38:36 crc kubenswrapper[4958]: I1008 08:38:36.846914 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 08:38:36 crc 
kubenswrapper[4958]: I1008 08:38:36.847072 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" gracePeriod=600 Oct 08 08:38:36 crc kubenswrapper[4958]: E1008 08:38:36.981342 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:38:37 crc kubenswrapper[4958]: I1008 08:38:37.400402 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" exitCode=0 Oct 08 08:38:37 crc kubenswrapper[4958]: I1008 08:38:37.400457 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94"} Oct 08 08:38:37 crc kubenswrapper[4958]: I1008 08:38:37.400500 4958 scope.go:117] "RemoveContainer" containerID="192038b921bc86bc768a67cd8ac64f4570d5383fbee44461fb89308cbf0ee3f2" Oct 08 08:38:37 crc kubenswrapper[4958]: I1008 08:38:37.402934 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:38:37 crc kubenswrapper[4958]: E1008 08:38:37.406190 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:38:48 crc kubenswrapper[4958]: I1008 08:38:48.577127 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:38:48 crc kubenswrapper[4958]: E1008 08:38:48.578035 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:39:00 crc kubenswrapper[4958]: I1008 08:39:00.576885 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:39:00 crc kubenswrapper[4958]: E1008 08:39:00.578407 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:39:15 crc kubenswrapper[4958]: I1008 08:39:15.579596 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:39:15 crc kubenswrapper[4958]: E1008 08:39:15.582387 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:39:28 crc kubenswrapper[4958]: I1008 08:39:28.577227 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:39:28 crc kubenswrapper[4958]: E1008 08:39:28.578246 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:39:41 crc kubenswrapper[4958]: I1008 08:39:41.577704 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:39:41 crc kubenswrapper[4958]: E1008 08:39:41.579201 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:39:53 crc kubenswrapper[4958]: I1008 08:39:53.577152 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:39:53 crc kubenswrapper[4958]: E1008 08:39:53.578476 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:40:05 crc kubenswrapper[4958]: I1008 08:40:05.576812 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:40:05 crc kubenswrapper[4958]: E1008 08:40:05.577648 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:40:16 crc kubenswrapper[4958]: I1008 08:40:16.577149 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:40:16 crc kubenswrapper[4958]: E1008 08:40:16.577961 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:40:30 crc kubenswrapper[4958]: I1008 08:40:30.576900 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:40:30 crc kubenswrapper[4958]: E1008 08:40:30.578093 4958 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:40:41 crc kubenswrapper[4958]: I1008 08:40:41.578018 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:40:41 crc kubenswrapper[4958]: E1008 08:40:41.579030 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:40:52 crc kubenswrapper[4958]: I1008 08:40:52.579168 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:40:52 crc kubenswrapper[4958]: E1008 08:40:52.580011 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:41:06 crc kubenswrapper[4958]: I1008 08:41:06.577113 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:41:06 crc kubenswrapper[4958]: E1008 08:41:06.578248 4958 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:41:17 crc kubenswrapper[4958]: I1008 08:41:17.586467 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:41:17 crc kubenswrapper[4958]: E1008 08:41:17.587204 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:41:28 crc kubenswrapper[4958]: I1008 08:41:28.578647 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:41:28 crc kubenswrapper[4958]: E1008 08:41:28.583497 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:41:30 crc kubenswrapper[4958]: I1008 08:41:30.487423 4958 generic.go:334] "Generic (PLEG): container finished" podID="3d9330f6-7990-462e-b85f-946c1255f76c" containerID="cb4952b22f40c09f1bea9a7c706d1ec4541a5785ca525850bc96619dbfa00630" 
exitCode=0 Oct 08 08:41:30 crc kubenswrapper[4958]: I1008 08:41:30.487520 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" event={"ID":"3d9330f6-7990-462e-b85f-946c1255f76c","Type":"ContainerDied","Data":"cb4952b22f40c09f1bea9a7c706d1ec4541a5785ca525850bc96619dbfa00630"} Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.007629 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.038120 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjc6t\" (UniqueName: \"kubernetes.io/projected/3d9330f6-7990-462e-b85f-946c1255f76c-kube-api-access-bjc6t\") pod \"3d9330f6-7990-462e-b85f-946c1255f76c\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.038222 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-inventory\") pod \"3d9330f6-7990-462e-b85f-946c1255f76c\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.038518 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-ssh-key\") pod \"3d9330f6-7990-462e-b85f-946c1255f76c\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.038604 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-bootstrap-combined-ca-bundle\") pod \"3d9330f6-7990-462e-b85f-946c1255f76c\" (UID: \"3d9330f6-7990-462e-b85f-946c1255f76c\") " Oct 08 08:41:32 crc 
kubenswrapper[4958]: I1008 08:41:32.047110 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9330f6-7990-462e-b85f-946c1255f76c-kube-api-access-bjc6t" (OuterVolumeSpecName: "kube-api-access-bjc6t") pod "3d9330f6-7990-462e-b85f-946c1255f76c" (UID: "3d9330f6-7990-462e-b85f-946c1255f76c"). InnerVolumeSpecName "kube-api-access-bjc6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.058231 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3d9330f6-7990-462e-b85f-946c1255f76c" (UID: "3d9330f6-7990-462e-b85f-946c1255f76c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.088971 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3d9330f6-7990-462e-b85f-946c1255f76c" (UID: "3d9330f6-7990-462e-b85f-946c1255f76c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.092136 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-inventory" (OuterVolumeSpecName: "inventory") pod "3d9330f6-7990-462e-b85f-946c1255f76c" (UID: "3d9330f6-7990-462e-b85f-946c1255f76c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.141856 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjc6t\" (UniqueName: \"kubernetes.io/projected/3d9330f6-7990-462e-b85f-946c1255f76c-kube-api-access-bjc6t\") on node \"crc\" DevicePath \"\"" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.141900 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.141915 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.141929 4958 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9330f6-7990-462e-b85f-946c1255f76c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.539705 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" event={"ID":"3d9330f6-7990-462e-b85f-946c1255f76c","Type":"ContainerDied","Data":"f709091881c282863bfb37167044b575149087fc3124007a9dff06422eee4d9f"} Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.539769 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f709091881c282863bfb37167044b575149087fc3124007a9dff06422eee4d9f" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.539857 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-8bxd8" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.679651 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-fthbc"] Oct 08 08:41:32 crc kubenswrapper[4958]: E1008 08:41:32.680331 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9330f6-7990-462e-b85f-946c1255f76c" containerName="bootstrap-openstack-openstack-cell1" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.680351 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9330f6-7990-462e-b85f-946c1255f76c" containerName="bootstrap-openstack-openstack-cell1" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.680570 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9330f6-7990-462e-b85f-946c1255f76c" containerName="bootstrap-openstack-openstack-cell1" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.681465 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-fthbc" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.688119 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.688289 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.688287 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.688475 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.698685 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-fthbc"] Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.759748 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19cd175f-7673-40a6-96d9-c34c47158241-inventory\") pod \"download-cache-openstack-openstack-cell1-fthbc\" (UID: \"19cd175f-7673-40a6-96d9-c34c47158241\") " pod="openstack/download-cache-openstack-openstack-cell1-fthbc" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.759969 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/19cd175f-7673-40a6-96d9-c34c47158241-ssh-key\") pod \"download-cache-openstack-openstack-cell1-fthbc\" (UID: \"19cd175f-7673-40a6-96d9-c34c47158241\") " pod="openstack/download-cache-openstack-openstack-cell1-fthbc" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.760057 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hlgfc\" (UniqueName: \"kubernetes.io/projected/19cd175f-7673-40a6-96d9-c34c47158241-kube-api-access-hlgfc\") pod \"download-cache-openstack-openstack-cell1-fthbc\" (UID: \"19cd175f-7673-40a6-96d9-c34c47158241\") " pod="openstack/download-cache-openstack-openstack-cell1-fthbc" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.861021 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19cd175f-7673-40a6-96d9-c34c47158241-inventory\") pod \"download-cache-openstack-openstack-cell1-fthbc\" (UID: \"19cd175f-7673-40a6-96d9-c34c47158241\") " pod="openstack/download-cache-openstack-openstack-cell1-fthbc" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.861156 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/19cd175f-7673-40a6-96d9-c34c47158241-ssh-key\") pod \"download-cache-openstack-openstack-cell1-fthbc\" (UID: \"19cd175f-7673-40a6-96d9-c34c47158241\") " pod="openstack/download-cache-openstack-openstack-cell1-fthbc" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.861213 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlgfc\" (UniqueName: \"kubernetes.io/projected/19cd175f-7673-40a6-96d9-c34c47158241-kube-api-access-hlgfc\") pod \"download-cache-openstack-openstack-cell1-fthbc\" (UID: \"19cd175f-7673-40a6-96d9-c34c47158241\") " pod="openstack/download-cache-openstack-openstack-cell1-fthbc" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.872450 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19cd175f-7673-40a6-96d9-c34c47158241-inventory\") pod \"download-cache-openstack-openstack-cell1-fthbc\" (UID: \"19cd175f-7673-40a6-96d9-c34c47158241\") " pod="openstack/download-cache-openstack-openstack-cell1-fthbc" Oct 08 08:41:32 crc kubenswrapper[4958]: 
I1008 08:41:32.872891 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/19cd175f-7673-40a6-96d9-c34c47158241-ssh-key\") pod \"download-cache-openstack-openstack-cell1-fthbc\" (UID: \"19cd175f-7673-40a6-96d9-c34c47158241\") " pod="openstack/download-cache-openstack-openstack-cell1-fthbc" Oct 08 08:41:32 crc kubenswrapper[4958]: I1008 08:41:32.877847 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlgfc\" (UniqueName: \"kubernetes.io/projected/19cd175f-7673-40a6-96d9-c34c47158241-kube-api-access-hlgfc\") pod \"download-cache-openstack-openstack-cell1-fthbc\" (UID: \"19cd175f-7673-40a6-96d9-c34c47158241\") " pod="openstack/download-cache-openstack-openstack-cell1-fthbc" Oct 08 08:41:33 crc kubenswrapper[4958]: I1008 08:41:33.022779 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-fthbc" Oct 08 08:41:33 crc kubenswrapper[4958]: I1008 08:41:33.652675 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-fthbc"] Oct 08 08:41:34 crc kubenswrapper[4958]: I1008 08:41:34.561859 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-fthbc" event={"ID":"19cd175f-7673-40a6-96d9-c34c47158241","Type":"ContainerStarted","Data":"0aa57bf372d9f8e3c14454cc70358305fc8dbd6fd8a9f6b5a78681b741d979c5"} Oct 08 08:41:34 crc kubenswrapper[4958]: I1008 08:41:34.562425 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-fthbc" event={"ID":"19cd175f-7673-40a6-96d9-c34c47158241","Type":"ContainerStarted","Data":"f36cf1bd4e38972066428cde09f490060066276620d29675a61cee931de4c9f8"} Oct 08 08:41:34 crc kubenswrapper[4958]: I1008 08:41:34.590431 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/download-cache-openstack-openstack-cell1-fthbc" podStartSLOduration=2.087356338 podStartE2EDuration="2.590401017s" podCreationTimestamp="2025-10-08 08:41:32 +0000 UTC" firstStartedPulling="2025-10-08 08:41:33.65952736 +0000 UTC m=+7636.789219961" lastFinishedPulling="2025-10-08 08:41:34.162572029 +0000 UTC m=+7637.292264640" observedRunningTime="2025-10-08 08:41:34.576385815 +0000 UTC m=+7637.706078456" watchObservedRunningTime="2025-10-08 08:41:34.590401017 +0000 UTC m=+7637.720093658" Oct 08 08:41:42 crc kubenswrapper[4958]: I1008 08:41:42.576381 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:41:42 crc kubenswrapper[4958]: E1008 08:41:42.577351 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:41:54 crc kubenswrapper[4958]: I1008 08:41:54.578198 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:41:54 crc kubenswrapper[4958]: E1008 08:41:54.579543 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:42:02 crc kubenswrapper[4958]: I1008 08:42:02.211178 4958 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-rp5rv"] Oct 08 08:42:02 crc kubenswrapper[4958]: I1008 08:42:02.215194 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:02 crc kubenswrapper[4958]: I1008 08:42:02.226853 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rp5rv"] Oct 08 08:42:02 crc kubenswrapper[4958]: I1008 08:42:02.321726 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fznbc\" (UniqueName: \"kubernetes.io/projected/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-kube-api-access-fznbc\") pod \"community-operators-rp5rv\" (UID: \"4c91f54d-3cb0-439f-ac93-dfdbc98104fb\") " pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:02 crc kubenswrapper[4958]: I1008 08:42:02.321813 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-utilities\") pod \"community-operators-rp5rv\" (UID: \"4c91f54d-3cb0-439f-ac93-dfdbc98104fb\") " pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:02 crc kubenswrapper[4958]: I1008 08:42:02.321944 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-catalog-content\") pod \"community-operators-rp5rv\" (UID: \"4c91f54d-3cb0-439f-ac93-dfdbc98104fb\") " pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:02 crc kubenswrapper[4958]: I1008 08:42:02.423974 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fznbc\" (UniqueName: \"kubernetes.io/projected/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-kube-api-access-fznbc\") pod \"community-operators-rp5rv\" (UID: 
\"4c91f54d-3cb0-439f-ac93-dfdbc98104fb\") " pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:02 crc kubenswrapper[4958]: I1008 08:42:02.424039 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-utilities\") pod \"community-operators-rp5rv\" (UID: \"4c91f54d-3cb0-439f-ac93-dfdbc98104fb\") " pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:02 crc kubenswrapper[4958]: I1008 08:42:02.424129 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-catalog-content\") pod \"community-operators-rp5rv\" (UID: \"4c91f54d-3cb0-439f-ac93-dfdbc98104fb\") " pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:02 crc kubenswrapper[4958]: I1008 08:42:02.424628 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-catalog-content\") pod \"community-operators-rp5rv\" (UID: \"4c91f54d-3cb0-439f-ac93-dfdbc98104fb\") " pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:02 crc kubenswrapper[4958]: I1008 08:42:02.424814 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-utilities\") pod \"community-operators-rp5rv\" (UID: \"4c91f54d-3cb0-439f-ac93-dfdbc98104fb\") " pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:02 crc kubenswrapper[4958]: I1008 08:42:02.452531 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fznbc\" (UniqueName: \"kubernetes.io/projected/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-kube-api-access-fznbc\") pod \"community-operators-rp5rv\" (UID: \"4c91f54d-3cb0-439f-ac93-dfdbc98104fb\") " 
pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:02 crc kubenswrapper[4958]: I1008 08:42:02.559801 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:03 crc kubenswrapper[4958]: I1008 08:42:03.073784 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rp5rv"] Oct 08 08:42:03 crc kubenswrapper[4958]: I1008 08:42:03.937124 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c91f54d-3cb0-439f-ac93-dfdbc98104fb" containerID="c9fc91c8fb5a2e6fe771d2b4e5682cd04ab54ffe0f877dfab577456a07b67428" exitCode=0 Oct 08 08:42:03 crc kubenswrapper[4958]: I1008 08:42:03.937193 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rp5rv" event={"ID":"4c91f54d-3cb0-439f-ac93-dfdbc98104fb","Type":"ContainerDied","Data":"c9fc91c8fb5a2e6fe771d2b4e5682cd04ab54ffe0f877dfab577456a07b67428"} Oct 08 08:42:03 crc kubenswrapper[4958]: I1008 08:42:03.937233 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rp5rv" event={"ID":"4c91f54d-3cb0-439f-ac93-dfdbc98104fb","Type":"ContainerStarted","Data":"0ad27a1e3a9f85316415a541aba33b4113af3cb94fc50c4a2600d8fa75b58a7d"} Oct 08 08:42:04 crc kubenswrapper[4958]: I1008 08:42:04.957215 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rp5rv" event={"ID":"4c91f54d-3cb0-439f-ac93-dfdbc98104fb","Type":"ContainerStarted","Data":"d9fcf785844023bd43f1a6335225023cbb7c88a84c203cdc4b5e661b65eb6f93"} Oct 08 08:42:05 crc kubenswrapper[4958]: I1008 08:42:05.974400 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c91f54d-3cb0-439f-ac93-dfdbc98104fb" containerID="d9fcf785844023bd43f1a6335225023cbb7c88a84c203cdc4b5e661b65eb6f93" exitCode=0 Oct 08 08:42:05 crc kubenswrapper[4958]: I1008 08:42:05.974559 4958 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-rp5rv" event={"ID":"4c91f54d-3cb0-439f-ac93-dfdbc98104fb","Type":"ContainerDied","Data":"d9fcf785844023bd43f1a6335225023cbb7c88a84c203cdc4b5e661b65eb6f93"} Oct 08 08:42:06 crc kubenswrapper[4958]: I1008 08:42:06.992517 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rp5rv" event={"ID":"4c91f54d-3cb0-439f-ac93-dfdbc98104fb","Type":"ContainerStarted","Data":"e05db9ef8a2e357525b0c39c55ca5b32ac6a0a62a5d9fc529a484843fddf431a"} Oct 08 08:42:07 crc kubenswrapper[4958]: I1008 08:42:07.050793 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rp5rv" podStartSLOduration=2.504745003 podStartE2EDuration="5.050758277s" podCreationTimestamp="2025-10-08 08:42:02 +0000 UTC" firstStartedPulling="2025-10-08 08:42:03.940003998 +0000 UTC m=+7667.069696599" lastFinishedPulling="2025-10-08 08:42:06.486017242 +0000 UTC m=+7669.615709873" observedRunningTime="2025-10-08 08:42:07.021693563 +0000 UTC m=+7670.151386204" watchObservedRunningTime="2025-10-08 08:42:07.050758277 +0000 UTC m=+7670.180450878" Oct 08 08:42:08 crc kubenswrapper[4958]: I1008 08:42:08.577762 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:42:08 crc kubenswrapper[4958]: E1008 08:42:08.578428 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:42:12 crc kubenswrapper[4958]: I1008 08:42:12.560368 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:12 crc kubenswrapper[4958]: I1008 08:42:12.561024 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:12 crc kubenswrapper[4958]: I1008 08:42:12.616635 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:13 crc kubenswrapper[4958]: I1008 08:42:13.143542 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:13 crc kubenswrapper[4958]: I1008 08:42:13.200301 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rp5rv"] Oct 08 08:42:15 crc kubenswrapper[4958]: I1008 08:42:15.102832 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rp5rv" podUID="4c91f54d-3cb0-439f-ac93-dfdbc98104fb" containerName="registry-server" containerID="cri-o://e05db9ef8a2e357525b0c39c55ca5b32ac6a0a62a5d9fc529a484843fddf431a" gracePeriod=2 Oct 08 08:42:15 crc kubenswrapper[4958]: I1008 08:42:15.693281 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:15 crc kubenswrapper[4958]: I1008 08:42:15.875858 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fznbc\" (UniqueName: \"kubernetes.io/projected/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-kube-api-access-fznbc\") pod \"4c91f54d-3cb0-439f-ac93-dfdbc98104fb\" (UID: \"4c91f54d-3cb0-439f-ac93-dfdbc98104fb\") " Oct 08 08:42:15 crc kubenswrapper[4958]: I1008 08:42:15.876107 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-utilities\") pod \"4c91f54d-3cb0-439f-ac93-dfdbc98104fb\" (UID: \"4c91f54d-3cb0-439f-ac93-dfdbc98104fb\") " Oct 08 08:42:15 crc kubenswrapper[4958]: I1008 08:42:15.876523 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-catalog-content\") pod \"4c91f54d-3cb0-439f-ac93-dfdbc98104fb\" (UID: \"4c91f54d-3cb0-439f-ac93-dfdbc98104fb\") " Oct 08 08:42:15 crc kubenswrapper[4958]: I1008 08:42:15.878565 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-utilities" (OuterVolumeSpecName: "utilities") pod "4c91f54d-3cb0-439f-ac93-dfdbc98104fb" (UID: "4c91f54d-3cb0-439f-ac93-dfdbc98104fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:42:15 crc kubenswrapper[4958]: I1008 08:42:15.881765 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-kube-api-access-fznbc" (OuterVolumeSpecName: "kube-api-access-fznbc") pod "4c91f54d-3cb0-439f-ac93-dfdbc98104fb" (UID: "4c91f54d-3cb0-439f-ac93-dfdbc98104fb"). InnerVolumeSpecName "kube-api-access-fznbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:42:15 crc kubenswrapper[4958]: I1008 08:42:15.948238 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c91f54d-3cb0-439f-ac93-dfdbc98104fb" (UID: "4c91f54d-3cb0-439f-ac93-dfdbc98104fb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:42:15 crc kubenswrapper[4958]: I1008 08:42:15.978686 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fznbc\" (UniqueName: \"kubernetes.io/projected/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-kube-api-access-fznbc\") on node \"crc\" DevicePath \"\"" Oct 08 08:42:15 crc kubenswrapper[4958]: I1008 08:42:15.978848 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:42:15 crc kubenswrapper[4958]: I1008 08:42:15.978903 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c91f54d-3cb0-439f-ac93-dfdbc98104fb-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:42:16 crc kubenswrapper[4958]: I1008 08:42:16.118597 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c91f54d-3cb0-439f-ac93-dfdbc98104fb" containerID="e05db9ef8a2e357525b0c39c55ca5b32ac6a0a62a5d9fc529a484843fddf431a" exitCode=0 Oct 08 08:42:16 crc kubenswrapper[4958]: I1008 08:42:16.118688 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rp5rv" Oct 08 08:42:16 crc kubenswrapper[4958]: I1008 08:42:16.118676 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rp5rv" event={"ID":"4c91f54d-3cb0-439f-ac93-dfdbc98104fb","Type":"ContainerDied","Data":"e05db9ef8a2e357525b0c39c55ca5b32ac6a0a62a5d9fc529a484843fddf431a"} Oct 08 08:42:16 crc kubenswrapper[4958]: I1008 08:42:16.120000 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rp5rv" event={"ID":"4c91f54d-3cb0-439f-ac93-dfdbc98104fb","Type":"ContainerDied","Data":"0ad27a1e3a9f85316415a541aba33b4113af3cb94fc50c4a2600d8fa75b58a7d"} Oct 08 08:42:16 crc kubenswrapper[4958]: I1008 08:42:16.120041 4958 scope.go:117] "RemoveContainer" containerID="e05db9ef8a2e357525b0c39c55ca5b32ac6a0a62a5d9fc529a484843fddf431a" Oct 08 08:42:16 crc kubenswrapper[4958]: I1008 08:42:16.156846 4958 scope.go:117] "RemoveContainer" containerID="d9fcf785844023bd43f1a6335225023cbb7c88a84c203cdc4b5e661b65eb6f93" Oct 08 08:42:16 crc kubenswrapper[4958]: I1008 08:42:16.163039 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rp5rv"] Oct 08 08:42:16 crc kubenswrapper[4958]: I1008 08:42:16.175613 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rp5rv"] Oct 08 08:42:16 crc kubenswrapper[4958]: I1008 08:42:16.187919 4958 scope.go:117] "RemoveContainer" containerID="c9fc91c8fb5a2e6fe771d2b4e5682cd04ab54ffe0f877dfab577456a07b67428" Oct 08 08:42:16 crc kubenswrapper[4958]: I1008 08:42:16.260335 4958 scope.go:117] "RemoveContainer" containerID="e05db9ef8a2e357525b0c39c55ca5b32ac6a0a62a5d9fc529a484843fddf431a" Oct 08 08:42:16 crc kubenswrapper[4958]: E1008 08:42:16.261265 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e05db9ef8a2e357525b0c39c55ca5b32ac6a0a62a5d9fc529a484843fddf431a\": container with ID starting with e05db9ef8a2e357525b0c39c55ca5b32ac6a0a62a5d9fc529a484843fddf431a not found: ID does not exist" containerID="e05db9ef8a2e357525b0c39c55ca5b32ac6a0a62a5d9fc529a484843fddf431a" Oct 08 08:42:16 crc kubenswrapper[4958]: I1008 08:42:16.261317 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05db9ef8a2e357525b0c39c55ca5b32ac6a0a62a5d9fc529a484843fddf431a"} err="failed to get container status \"e05db9ef8a2e357525b0c39c55ca5b32ac6a0a62a5d9fc529a484843fddf431a\": rpc error: code = NotFound desc = could not find container \"e05db9ef8a2e357525b0c39c55ca5b32ac6a0a62a5d9fc529a484843fddf431a\": container with ID starting with e05db9ef8a2e357525b0c39c55ca5b32ac6a0a62a5d9fc529a484843fddf431a not found: ID does not exist" Oct 08 08:42:16 crc kubenswrapper[4958]: I1008 08:42:16.261354 4958 scope.go:117] "RemoveContainer" containerID="d9fcf785844023bd43f1a6335225023cbb7c88a84c203cdc4b5e661b65eb6f93" Oct 08 08:42:16 crc kubenswrapper[4958]: E1008 08:42:16.264677 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9fcf785844023bd43f1a6335225023cbb7c88a84c203cdc4b5e661b65eb6f93\": container with ID starting with d9fcf785844023bd43f1a6335225023cbb7c88a84c203cdc4b5e661b65eb6f93 not found: ID does not exist" containerID="d9fcf785844023bd43f1a6335225023cbb7c88a84c203cdc4b5e661b65eb6f93" Oct 08 08:42:16 crc kubenswrapper[4958]: I1008 08:42:16.264737 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9fcf785844023bd43f1a6335225023cbb7c88a84c203cdc4b5e661b65eb6f93"} err="failed to get container status \"d9fcf785844023bd43f1a6335225023cbb7c88a84c203cdc4b5e661b65eb6f93\": rpc error: code = NotFound desc = could not find container \"d9fcf785844023bd43f1a6335225023cbb7c88a84c203cdc4b5e661b65eb6f93\": container with ID 
starting with d9fcf785844023bd43f1a6335225023cbb7c88a84c203cdc4b5e661b65eb6f93 not found: ID does not exist" Oct 08 08:42:16 crc kubenswrapper[4958]: I1008 08:42:16.264774 4958 scope.go:117] "RemoveContainer" containerID="c9fc91c8fb5a2e6fe771d2b4e5682cd04ab54ffe0f877dfab577456a07b67428" Oct 08 08:42:16 crc kubenswrapper[4958]: E1008 08:42:16.265213 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9fc91c8fb5a2e6fe771d2b4e5682cd04ab54ffe0f877dfab577456a07b67428\": container with ID starting with c9fc91c8fb5a2e6fe771d2b4e5682cd04ab54ffe0f877dfab577456a07b67428 not found: ID does not exist" containerID="c9fc91c8fb5a2e6fe771d2b4e5682cd04ab54ffe0f877dfab577456a07b67428" Oct 08 08:42:16 crc kubenswrapper[4958]: I1008 08:42:16.265251 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9fc91c8fb5a2e6fe771d2b4e5682cd04ab54ffe0f877dfab577456a07b67428"} err="failed to get container status \"c9fc91c8fb5a2e6fe771d2b4e5682cd04ab54ffe0f877dfab577456a07b67428\": rpc error: code = NotFound desc = could not find container \"c9fc91c8fb5a2e6fe771d2b4e5682cd04ab54ffe0f877dfab577456a07b67428\": container with ID starting with c9fc91c8fb5a2e6fe771d2b4e5682cd04ab54ffe0f877dfab577456a07b67428 not found: ID does not exist" Oct 08 08:42:17 crc kubenswrapper[4958]: I1008 08:42:17.596980 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c91f54d-3cb0-439f-ac93-dfdbc98104fb" path="/var/lib/kubelet/pods/4c91f54d-3cb0-439f-ac93-dfdbc98104fb/volumes" Oct 08 08:42:19 crc kubenswrapper[4958]: I1008 08:42:19.583328 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:42:19 crc kubenswrapper[4958]: E1008 08:42:19.584304 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:42:34 crc kubenswrapper[4958]: I1008 08:42:34.576666 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:42:34 crc kubenswrapper[4958]: E1008 08:42:34.578031 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:42:45 crc kubenswrapper[4958]: I1008 08:42:45.576158 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:42:45 crc kubenswrapper[4958]: E1008 08:42:45.576932 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:42:58 crc kubenswrapper[4958]: I1008 08:42:58.577100 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:42:58 crc kubenswrapper[4958]: E1008 08:42:58.578391 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:43:09 crc kubenswrapper[4958]: I1008 08:43:09.577316 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:43:09 crc kubenswrapper[4958]: E1008 08:43:09.579147 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:43:12 crc kubenswrapper[4958]: I1008 08:43:12.842889 4958 generic.go:334] "Generic (PLEG): container finished" podID="19cd175f-7673-40a6-96d9-c34c47158241" containerID="0aa57bf372d9f8e3c14454cc70358305fc8dbd6fd8a9f6b5a78681b741d979c5" exitCode=0 Oct 08 08:43:12 crc kubenswrapper[4958]: I1008 08:43:12.842994 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-fthbc" event={"ID":"19cd175f-7673-40a6-96d9-c34c47158241","Type":"ContainerDied","Data":"0aa57bf372d9f8e3c14454cc70358305fc8dbd6fd8a9f6b5a78681b741d979c5"} Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.418812 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-fthbc" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.528021 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlgfc\" (UniqueName: \"kubernetes.io/projected/19cd175f-7673-40a6-96d9-c34c47158241-kube-api-access-hlgfc\") pod \"19cd175f-7673-40a6-96d9-c34c47158241\" (UID: \"19cd175f-7673-40a6-96d9-c34c47158241\") " Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.528397 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/19cd175f-7673-40a6-96d9-c34c47158241-ssh-key\") pod \"19cd175f-7673-40a6-96d9-c34c47158241\" (UID: \"19cd175f-7673-40a6-96d9-c34c47158241\") " Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.528477 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19cd175f-7673-40a6-96d9-c34c47158241-inventory\") pod \"19cd175f-7673-40a6-96d9-c34c47158241\" (UID: \"19cd175f-7673-40a6-96d9-c34c47158241\") " Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.542874 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19cd175f-7673-40a6-96d9-c34c47158241-kube-api-access-hlgfc" (OuterVolumeSpecName: "kube-api-access-hlgfc") pod "19cd175f-7673-40a6-96d9-c34c47158241" (UID: "19cd175f-7673-40a6-96d9-c34c47158241"). InnerVolumeSpecName "kube-api-access-hlgfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.586807 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19cd175f-7673-40a6-96d9-c34c47158241-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "19cd175f-7673-40a6-96d9-c34c47158241" (UID: "19cd175f-7673-40a6-96d9-c34c47158241"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.607671 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19cd175f-7673-40a6-96d9-c34c47158241-inventory" (OuterVolumeSpecName: "inventory") pod "19cd175f-7673-40a6-96d9-c34c47158241" (UID: "19cd175f-7673-40a6-96d9-c34c47158241"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.635240 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlgfc\" (UniqueName: \"kubernetes.io/projected/19cd175f-7673-40a6-96d9-c34c47158241-kube-api-access-hlgfc\") on node \"crc\" DevicePath \"\"" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.635296 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/19cd175f-7673-40a6-96d9-c34c47158241-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.635314 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19cd175f-7673-40a6-96d9-c34c47158241-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.876922 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-fthbc" event={"ID":"19cd175f-7673-40a6-96d9-c34c47158241","Type":"ContainerDied","Data":"f36cf1bd4e38972066428cde09f490060066276620d29675a61cee931de4c9f8"} Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.877025 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f36cf1bd4e38972066428cde09f490060066276620d29675a61cee931de4c9f8" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.877052 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-fthbc" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.973772 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-xl85k"] Oct 08 08:43:14 crc kubenswrapper[4958]: E1008 08:43:14.975046 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19cd175f-7673-40a6-96d9-c34c47158241" containerName="download-cache-openstack-openstack-cell1" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.975079 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="19cd175f-7673-40a6-96d9-c34c47158241" containerName="download-cache-openstack-openstack-cell1" Oct 08 08:43:14 crc kubenswrapper[4958]: E1008 08:43:14.975130 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c91f54d-3cb0-439f-ac93-dfdbc98104fb" containerName="extract-utilities" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.975144 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c91f54d-3cb0-439f-ac93-dfdbc98104fb" containerName="extract-utilities" Oct 08 08:43:14 crc kubenswrapper[4958]: E1008 08:43:14.975176 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c91f54d-3cb0-439f-ac93-dfdbc98104fb" containerName="registry-server" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.975187 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c91f54d-3cb0-439f-ac93-dfdbc98104fb" containerName="registry-server" Oct 08 08:43:14 crc kubenswrapper[4958]: E1008 08:43:14.975225 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c91f54d-3cb0-439f-ac93-dfdbc98104fb" containerName="extract-content" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.975236 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c91f54d-3cb0-439f-ac93-dfdbc98104fb" containerName="extract-content" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.975636 4958 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4c91f54d-3cb0-439f-ac93-dfdbc98104fb" containerName="registry-server" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.975677 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="19cd175f-7673-40a6-96d9-c34c47158241" containerName="download-cache-openstack-openstack-cell1" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.979234 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-xl85k" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.982524 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.983671 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.983854 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.984016 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:43:14 crc kubenswrapper[4958]: I1008 08:43:14.986527 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-xl85k"] Oct 08 08:43:15 crc kubenswrapper[4958]: I1008 08:43:15.047278 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-inventory\") pod \"configure-network-openstack-openstack-cell1-xl85k\" (UID: \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\") " pod="openstack/configure-network-openstack-openstack-cell1-xl85k" Oct 08 08:43:15 crc kubenswrapper[4958]: I1008 08:43:15.047422 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-ssh-key\") pod \"configure-network-openstack-openstack-cell1-xl85k\" (UID: \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\") " pod="openstack/configure-network-openstack-openstack-cell1-xl85k" Oct 08 08:43:15 crc kubenswrapper[4958]: I1008 08:43:15.047571 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpqvz\" (UniqueName: \"kubernetes.io/projected/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-kube-api-access-jpqvz\") pod \"configure-network-openstack-openstack-cell1-xl85k\" (UID: \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\") " pod="openstack/configure-network-openstack-openstack-cell1-xl85k" Oct 08 08:43:15 crc kubenswrapper[4958]: I1008 08:43:15.150349 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-ssh-key\") pod \"configure-network-openstack-openstack-cell1-xl85k\" (UID: \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\") " pod="openstack/configure-network-openstack-openstack-cell1-xl85k" Oct 08 08:43:15 crc kubenswrapper[4958]: I1008 08:43:15.150607 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpqvz\" (UniqueName: \"kubernetes.io/projected/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-kube-api-access-jpqvz\") pod \"configure-network-openstack-openstack-cell1-xl85k\" (UID: \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\") " pod="openstack/configure-network-openstack-openstack-cell1-xl85k" Oct 08 08:43:15 crc kubenswrapper[4958]: I1008 08:43:15.150681 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-inventory\") pod \"configure-network-openstack-openstack-cell1-xl85k\" (UID: 
\"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\") " pod="openstack/configure-network-openstack-openstack-cell1-xl85k" Oct 08 08:43:15 crc kubenswrapper[4958]: I1008 08:43:15.154893 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-inventory\") pod \"configure-network-openstack-openstack-cell1-xl85k\" (UID: \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\") " pod="openstack/configure-network-openstack-openstack-cell1-xl85k" Oct 08 08:43:15 crc kubenswrapper[4958]: I1008 08:43:15.155753 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-ssh-key\") pod \"configure-network-openstack-openstack-cell1-xl85k\" (UID: \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\") " pod="openstack/configure-network-openstack-openstack-cell1-xl85k" Oct 08 08:43:15 crc kubenswrapper[4958]: I1008 08:43:15.170072 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpqvz\" (UniqueName: \"kubernetes.io/projected/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-kube-api-access-jpqvz\") pod \"configure-network-openstack-openstack-cell1-xl85k\" (UID: \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\") " pod="openstack/configure-network-openstack-openstack-cell1-xl85k" Oct 08 08:43:15 crc kubenswrapper[4958]: I1008 08:43:15.349342 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-xl85k" Oct 08 08:43:15 crc kubenswrapper[4958]: I1008 08:43:15.766632 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-xl85k"] Oct 08 08:43:15 crc kubenswrapper[4958]: I1008 08:43:15.889573 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-xl85k" event={"ID":"b3ecaaf8-2e47-4f83-b958-d53fac1a6316","Type":"ContainerStarted","Data":"103acfbc0db17ede19abe35eebea7b2a95cbc5de7912f2d234cf5fa5f7138f16"} Oct 08 08:43:16 crc kubenswrapper[4958]: I1008 08:43:16.905811 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-xl85k" event={"ID":"b3ecaaf8-2e47-4f83-b958-d53fac1a6316","Type":"ContainerStarted","Data":"68c1300a693586d42cb06c44b7666d220060f16efe59ffe3befdf89267560c8d"} Oct 08 08:43:20 crc kubenswrapper[4958]: I1008 08:43:20.577181 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:43:20 crc kubenswrapper[4958]: E1008 08:43:20.578354 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:43:35 crc kubenswrapper[4958]: I1008 08:43:35.577471 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:43:35 crc kubenswrapper[4958]: E1008 08:43:35.578604 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:43:50 crc kubenswrapper[4958]: I1008 08:43:50.577479 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:43:51 crc kubenswrapper[4958]: I1008 08:43:51.326454 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"9792847903de8882201e5c6f9c09f7860e8eac25dd2466f7b851b233acc9c4e0"} Oct 08 08:43:51 crc kubenswrapper[4958]: I1008 08:43:51.353745 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-xl85k" podStartSLOduration=36.628627665 podStartE2EDuration="37.353710957s" podCreationTimestamp="2025-10-08 08:43:14 +0000 UTC" firstStartedPulling="2025-10-08 08:43:15.7708915 +0000 UTC m=+7738.900584101" lastFinishedPulling="2025-10-08 08:43:16.495974782 +0000 UTC m=+7739.625667393" observedRunningTime="2025-10-08 08:43:16.930932684 +0000 UTC m=+7740.060625335" watchObservedRunningTime="2025-10-08 08:43:51.353710957 +0000 UTC m=+7774.483403598" Oct 08 08:44:40 crc kubenswrapper[4958]: I1008 08:44:40.912575 4958 generic.go:334] "Generic (PLEG): container finished" podID="b3ecaaf8-2e47-4f83-b958-d53fac1a6316" containerID="68c1300a693586d42cb06c44b7666d220060f16efe59ffe3befdf89267560c8d" exitCode=0 Oct 08 08:44:40 crc kubenswrapper[4958]: I1008 08:44:40.912679 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-xl85k" 
event={"ID":"b3ecaaf8-2e47-4f83-b958-d53fac1a6316","Type":"ContainerDied","Data":"68c1300a693586d42cb06c44b7666d220060f16efe59ffe3befdf89267560c8d"} Oct 08 08:44:42 crc kubenswrapper[4958]: I1008 08:44:42.409120 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-xl85k" Oct 08 08:44:42 crc kubenswrapper[4958]: I1008 08:44:42.501171 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpqvz\" (UniqueName: \"kubernetes.io/projected/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-kube-api-access-jpqvz\") pod \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\" (UID: \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\") " Oct 08 08:44:42 crc kubenswrapper[4958]: I1008 08:44:42.501329 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-ssh-key\") pod \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\" (UID: \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\") " Oct 08 08:44:42 crc kubenswrapper[4958]: I1008 08:44:42.501476 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-inventory\") pod \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\" (UID: \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\") " Oct 08 08:44:42 crc kubenswrapper[4958]: I1008 08:44:42.507482 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-kube-api-access-jpqvz" (OuterVolumeSpecName: "kube-api-access-jpqvz") pod "b3ecaaf8-2e47-4f83-b958-d53fac1a6316" (UID: "b3ecaaf8-2e47-4f83-b958-d53fac1a6316"). InnerVolumeSpecName "kube-api-access-jpqvz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:44:42 crc kubenswrapper[4958]: E1008 08:44:42.527255 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-ssh-key podName:b3ecaaf8-2e47-4f83-b958-d53fac1a6316 nodeName:}" failed. No retries permitted until 2025-10-08 08:44:43.027230815 +0000 UTC m=+7826.156923416 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key" (UniqueName: "kubernetes.io/secret/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-ssh-key") pod "b3ecaaf8-2e47-4f83-b958-d53fac1a6316" (UID: "b3ecaaf8-2e47-4f83-b958-d53fac1a6316") : error deleting /var/lib/kubelet/pods/b3ecaaf8-2e47-4f83-b958-d53fac1a6316/volume-subpaths: remove /var/lib/kubelet/pods/b3ecaaf8-2e47-4f83-b958-d53fac1a6316/volume-subpaths: no such file or directory Oct 08 08:44:42 crc kubenswrapper[4958]: I1008 08:44:42.529507 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-inventory" (OuterVolumeSpecName: "inventory") pod "b3ecaaf8-2e47-4f83-b958-d53fac1a6316" (UID: "b3ecaaf8-2e47-4f83-b958-d53fac1a6316"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:44:42 crc kubenswrapper[4958]: I1008 08:44:42.604369 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpqvz\" (UniqueName: \"kubernetes.io/projected/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-kube-api-access-jpqvz\") on node \"crc\" DevicePath \"\"" Oct 08 08:44:42 crc kubenswrapper[4958]: I1008 08:44:42.604409 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 08:44:42 crc kubenswrapper[4958]: I1008 08:44:42.948749 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-xl85k" event={"ID":"b3ecaaf8-2e47-4f83-b958-d53fac1a6316","Type":"ContainerDied","Data":"103acfbc0db17ede19abe35eebea7b2a95cbc5de7912f2d234cf5fa5f7138f16"} Oct 08 08:44:42 crc kubenswrapper[4958]: I1008 08:44:42.948803 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="103acfbc0db17ede19abe35eebea7b2a95cbc5de7912f2d234cf5fa5f7138f16" Oct 08 08:44:42 crc kubenswrapper[4958]: I1008 08:44:42.948884 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-xl85k" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.038976 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-jds6t"] Oct 08 08:44:43 crc kubenswrapper[4958]: E1008 08:44:43.039390 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ecaaf8-2e47-4f83-b958-d53fac1a6316" containerName="configure-network-openstack-openstack-cell1" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.039408 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ecaaf8-2e47-4f83-b958-d53fac1a6316" containerName="configure-network-openstack-openstack-cell1" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.039650 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ecaaf8-2e47-4f83-b958-d53fac1a6316" containerName="configure-network-openstack-openstack-cell1" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.040367 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-jds6t" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.048781 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-jds6t"] Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.114039 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-ssh-key\") pod \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\" (UID: \"b3ecaaf8-2e47-4f83-b958-d53fac1a6316\") " Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.118626 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b3ecaaf8-2e47-4f83-b958-d53fac1a6316" (UID: "b3ecaaf8-2e47-4f83-b958-d53fac1a6316"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.216555 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1189674-59af-4819-9d1e-024fc8c0d457-ssh-key\") pod \"validate-network-openstack-openstack-cell1-jds6t\" (UID: \"f1189674-59af-4819-9d1e-024fc8c0d457\") " pod="openstack/validate-network-openstack-openstack-cell1-jds6t" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.216809 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcnvx\" (UniqueName: \"kubernetes.io/projected/f1189674-59af-4819-9d1e-024fc8c0d457-kube-api-access-pcnvx\") pod \"validate-network-openstack-openstack-cell1-jds6t\" (UID: \"f1189674-59af-4819-9d1e-024fc8c0d457\") " pod="openstack/validate-network-openstack-openstack-cell1-jds6t" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.216880 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1189674-59af-4819-9d1e-024fc8c0d457-inventory\") pod \"validate-network-openstack-openstack-cell1-jds6t\" (UID: \"f1189674-59af-4819-9d1e-024fc8c0d457\") " pod="openstack/validate-network-openstack-openstack-cell1-jds6t" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.217089 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3ecaaf8-2e47-4f83-b958-d53fac1a6316-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.319318 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcnvx\" (UniqueName: \"kubernetes.io/projected/f1189674-59af-4819-9d1e-024fc8c0d457-kube-api-access-pcnvx\") pod \"validate-network-openstack-openstack-cell1-jds6t\" (UID: \"f1189674-59af-4819-9d1e-024fc8c0d457\") " pod="openstack/validate-network-openstack-openstack-cell1-jds6t" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.319481 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1189674-59af-4819-9d1e-024fc8c0d457-inventory\") pod \"validate-network-openstack-openstack-cell1-jds6t\" (UID: \"f1189674-59af-4819-9d1e-024fc8c0d457\") " pod="openstack/validate-network-openstack-openstack-cell1-jds6t" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.319673 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1189674-59af-4819-9d1e-024fc8c0d457-ssh-key\") pod \"validate-network-openstack-openstack-cell1-jds6t\" (UID: \"f1189674-59af-4819-9d1e-024fc8c0d457\") " pod="openstack/validate-network-openstack-openstack-cell1-jds6t" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.323225 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1189674-59af-4819-9d1e-024fc8c0d457-ssh-key\") pod \"validate-network-openstack-openstack-cell1-jds6t\" (UID: \"f1189674-59af-4819-9d1e-024fc8c0d457\") " pod="openstack/validate-network-openstack-openstack-cell1-jds6t" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.323520 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1189674-59af-4819-9d1e-024fc8c0d457-inventory\") pod \"validate-network-openstack-openstack-cell1-jds6t\" (UID: \"f1189674-59af-4819-9d1e-024fc8c0d457\") " pod="openstack/validate-network-openstack-openstack-cell1-jds6t" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.342805 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcnvx\" (UniqueName: \"kubernetes.io/projected/f1189674-59af-4819-9d1e-024fc8c0d457-kube-api-access-pcnvx\") pod \"validate-network-openstack-openstack-cell1-jds6t\" (UID: \"f1189674-59af-4819-9d1e-024fc8c0d457\") " pod="openstack/validate-network-openstack-openstack-cell1-jds6t" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.419151 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-jds6t" Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.992017 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-jds6t"] Oct 08 08:44:43 crc kubenswrapper[4958]: I1008 08:44:43.994384 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 08:44:44 crc kubenswrapper[4958]: I1008 08:44:44.970525 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-jds6t" event={"ID":"f1189674-59af-4819-9d1e-024fc8c0d457","Type":"ContainerStarted","Data":"246905b88b37d9c119ca96b6d10c2049badb40f595b44597c53371e4bfa8bf0c"} Oct 08 08:44:45 crc kubenswrapper[4958]: I1008 08:44:45.997240 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-jds6t" event={"ID":"f1189674-59af-4819-9d1e-024fc8c0d457","Type":"ContainerStarted","Data":"8c09d7953cc3a80dc77377b69d32c88f75acfa04777ccac68cee84e6cfce50cc"} Oct 08 08:44:46 crc kubenswrapper[4958]: I1008 08:44:46.020672 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-jds6t" podStartSLOduration=2.300338467 podStartE2EDuration="3.020650228s" podCreationTimestamp="2025-10-08 08:44:43 +0000 UTC" firstStartedPulling="2025-10-08 08:44:43.994065822 +0000 UTC m=+7827.123758433" lastFinishedPulling="2025-10-08 08:44:44.714377593 +0000 UTC m=+7827.844070194" observedRunningTime="2025-10-08 08:44:46.019122596 +0000 UTC m=+7829.148815207" watchObservedRunningTime="2025-10-08 08:44:46.020650228 +0000 UTC m=+7829.150342829" Oct 08 08:44:50 crc kubenswrapper[4958]: I1008 08:44:50.045488 4958 generic.go:334] "Generic (PLEG): container finished" podID="f1189674-59af-4819-9d1e-024fc8c0d457" containerID="8c09d7953cc3a80dc77377b69d32c88f75acfa04777ccac68cee84e6cfce50cc" 
exitCode=0 Oct 08 08:44:50 crc kubenswrapper[4958]: I1008 08:44:50.045595 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-jds6t" event={"ID":"f1189674-59af-4819-9d1e-024fc8c0d457","Type":"ContainerDied","Data":"8c09d7953cc3a80dc77377b69d32c88f75acfa04777ccac68cee84e6cfce50cc"} Oct 08 08:44:51 crc kubenswrapper[4958]: I1008 08:44:51.538060 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-jds6t" Oct 08 08:44:51 crc kubenswrapper[4958]: I1008 08:44:51.625558 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1189674-59af-4819-9d1e-024fc8c0d457-inventory\") pod \"f1189674-59af-4819-9d1e-024fc8c0d457\" (UID: \"f1189674-59af-4819-9d1e-024fc8c0d457\") " Oct 08 08:44:51 crc kubenswrapper[4958]: I1008 08:44:51.625726 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcnvx\" (UniqueName: \"kubernetes.io/projected/f1189674-59af-4819-9d1e-024fc8c0d457-kube-api-access-pcnvx\") pod \"f1189674-59af-4819-9d1e-024fc8c0d457\" (UID: \"f1189674-59af-4819-9d1e-024fc8c0d457\") " Oct 08 08:44:51 crc kubenswrapper[4958]: I1008 08:44:51.626186 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1189674-59af-4819-9d1e-024fc8c0d457-ssh-key\") pod \"f1189674-59af-4819-9d1e-024fc8c0d457\" (UID: \"f1189674-59af-4819-9d1e-024fc8c0d457\") " Oct 08 08:44:51 crc kubenswrapper[4958]: I1008 08:44:51.631063 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1189674-59af-4819-9d1e-024fc8c0d457-kube-api-access-pcnvx" (OuterVolumeSpecName: "kube-api-access-pcnvx") pod "f1189674-59af-4819-9d1e-024fc8c0d457" (UID: "f1189674-59af-4819-9d1e-024fc8c0d457"). 
InnerVolumeSpecName "kube-api-access-pcnvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:44:51 crc kubenswrapper[4958]: I1008 08:44:51.664617 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1189674-59af-4819-9d1e-024fc8c0d457-inventory" (OuterVolumeSpecName: "inventory") pod "f1189674-59af-4819-9d1e-024fc8c0d457" (UID: "f1189674-59af-4819-9d1e-024fc8c0d457"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:44:51 crc kubenswrapper[4958]: I1008 08:44:51.676695 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1189674-59af-4819-9d1e-024fc8c0d457-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f1189674-59af-4819-9d1e-024fc8c0d457" (UID: "f1189674-59af-4819-9d1e-024fc8c0d457"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:44:51 crc kubenswrapper[4958]: I1008 08:44:51.729326 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcnvx\" (UniqueName: \"kubernetes.io/projected/f1189674-59af-4819-9d1e-024fc8c0d457-kube-api-access-pcnvx\") on node \"crc\" DevicePath \"\"" Oct 08 08:44:51 crc kubenswrapper[4958]: I1008 08:44:51.730183 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1189674-59af-4819-9d1e-024fc8c0d457-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:44:51 crc kubenswrapper[4958]: I1008 08:44:51.730259 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1189674-59af-4819-9d1e-024fc8c0d457-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.074589 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-jds6t" 
event={"ID":"f1189674-59af-4819-9d1e-024fc8c0d457","Type":"ContainerDied","Data":"246905b88b37d9c119ca96b6d10c2049badb40f595b44597c53371e4bfa8bf0c"} Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.074874 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="246905b88b37d9c119ca96b6d10c2049badb40f595b44597c53371e4bfa8bf0c" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.074939 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-jds6t" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.155058 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-l4sgd"] Oct 08 08:44:52 crc kubenswrapper[4958]: E1008 08:44:52.155932 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1189674-59af-4819-9d1e-024fc8c0d457" containerName="validate-network-openstack-openstack-cell1" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.156098 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1189674-59af-4819-9d1e-024fc8c0d457" containerName="validate-network-openstack-openstack-cell1" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.156462 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1189674-59af-4819-9d1e-024fc8c0d457" containerName="validate-network-openstack-openstack-cell1" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.157757 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-l4sgd" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.160536 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.160596 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.160705 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.160728 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.166475 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-l4sgd"] Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.243631 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/566119a8-2995-4435-a0de-fba57da4718c-ssh-key\") pod \"install-os-openstack-openstack-cell1-l4sgd\" (UID: \"566119a8-2995-4435-a0de-fba57da4718c\") " pod="openstack/install-os-openstack-openstack-cell1-l4sgd" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.243672 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/566119a8-2995-4435-a0de-fba57da4718c-inventory\") pod \"install-os-openstack-openstack-cell1-l4sgd\" (UID: \"566119a8-2995-4435-a0de-fba57da4718c\") " pod="openstack/install-os-openstack-openstack-cell1-l4sgd" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.244275 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l7m8\" 
(UniqueName: \"kubernetes.io/projected/566119a8-2995-4435-a0de-fba57da4718c-kube-api-access-4l7m8\") pod \"install-os-openstack-openstack-cell1-l4sgd\" (UID: \"566119a8-2995-4435-a0de-fba57da4718c\") " pod="openstack/install-os-openstack-openstack-cell1-l4sgd" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.347030 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l7m8\" (UniqueName: \"kubernetes.io/projected/566119a8-2995-4435-a0de-fba57da4718c-kube-api-access-4l7m8\") pod \"install-os-openstack-openstack-cell1-l4sgd\" (UID: \"566119a8-2995-4435-a0de-fba57da4718c\") " pod="openstack/install-os-openstack-openstack-cell1-l4sgd" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.347626 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/566119a8-2995-4435-a0de-fba57da4718c-ssh-key\") pod \"install-os-openstack-openstack-cell1-l4sgd\" (UID: \"566119a8-2995-4435-a0de-fba57da4718c\") " pod="openstack/install-os-openstack-openstack-cell1-l4sgd" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.347653 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/566119a8-2995-4435-a0de-fba57da4718c-inventory\") pod \"install-os-openstack-openstack-cell1-l4sgd\" (UID: \"566119a8-2995-4435-a0de-fba57da4718c\") " pod="openstack/install-os-openstack-openstack-cell1-l4sgd" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.352926 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/566119a8-2995-4435-a0de-fba57da4718c-ssh-key\") pod \"install-os-openstack-openstack-cell1-l4sgd\" (UID: \"566119a8-2995-4435-a0de-fba57da4718c\") " pod="openstack/install-os-openstack-openstack-cell1-l4sgd" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.355303 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/566119a8-2995-4435-a0de-fba57da4718c-inventory\") pod \"install-os-openstack-openstack-cell1-l4sgd\" (UID: \"566119a8-2995-4435-a0de-fba57da4718c\") " pod="openstack/install-os-openstack-openstack-cell1-l4sgd" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.373429 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l7m8\" (UniqueName: \"kubernetes.io/projected/566119a8-2995-4435-a0de-fba57da4718c-kube-api-access-4l7m8\") pod \"install-os-openstack-openstack-cell1-l4sgd\" (UID: \"566119a8-2995-4435-a0de-fba57da4718c\") " pod="openstack/install-os-openstack-openstack-cell1-l4sgd" Oct 08 08:44:52 crc kubenswrapper[4958]: I1008 08:44:52.476565 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-l4sgd" Oct 08 08:44:53 crc kubenswrapper[4958]: I1008 08:44:53.054406 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-l4sgd"] Oct 08 08:44:53 crc kubenswrapper[4958]: I1008 08:44:53.093765 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-l4sgd" event={"ID":"566119a8-2995-4435-a0de-fba57da4718c","Type":"ContainerStarted","Data":"60d1cb3fe36176612ae9781118cd0b093d23f5fcf7cf82974bd9ba8930c091b5"} Oct 08 08:44:54 crc kubenswrapper[4958]: I1008 08:44:54.106812 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-l4sgd" event={"ID":"566119a8-2995-4435-a0de-fba57da4718c","Type":"ContainerStarted","Data":"8531ff35db7940374acef709d89a8bea88299af3cc5b39c09fee6310bc8f4e7d"} Oct 08 08:44:54 crc kubenswrapper[4958]: I1008 08:44:54.126752 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-l4sgd" podStartSLOduration=1.6320954730000001 
podStartE2EDuration="2.126726305s" podCreationTimestamp="2025-10-08 08:44:52 +0000 UTC" firstStartedPulling="2025-10-08 08:44:53.065186109 +0000 UTC m=+7836.194878710" lastFinishedPulling="2025-10-08 08:44:53.559816901 +0000 UTC m=+7836.689509542" observedRunningTime="2025-10-08 08:44:54.124236987 +0000 UTC m=+7837.253929668" watchObservedRunningTime="2025-10-08 08:44:54.126726305 +0000 UTC m=+7837.256418946" Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.173118 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx"] Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.179224 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.185510 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.185639 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.187447 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx"] Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.270765 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13a0c8de-83e7-47e6-9234-92c1886af049-config-volume\") pod \"collect-profiles-29331885-d28dx\" (UID: \"13a0c8de-83e7-47e6-9234-92c1886af049\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.271066 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-h7xdx\" (UniqueName: \"kubernetes.io/projected/13a0c8de-83e7-47e6-9234-92c1886af049-kube-api-access-h7xdx\") pod \"collect-profiles-29331885-d28dx\" (UID: \"13a0c8de-83e7-47e6-9234-92c1886af049\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.271196 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13a0c8de-83e7-47e6-9234-92c1886af049-secret-volume\") pod \"collect-profiles-29331885-d28dx\" (UID: \"13a0c8de-83e7-47e6-9234-92c1886af049\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.373064 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7xdx\" (UniqueName: \"kubernetes.io/projected/13a0c8de-83e7-47e6-9234-92c1886af049-kube-api-access-h7xdx\") pod \"collect-profiles-29331885-d28dx\" (UID: \"13a0c8de-83e7-47e6-9234-92c1886af049\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.373166 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13a0c8de-83e7-47e6-9234-92c1886af049-secret-volume\") pod \"collect-profiles-29331885-d28dx\" (UID: \"13a0c8de-83e7-47e6-9234-92c1886af049\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.373343 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13a0c8de-83e7-47e6-9234-92c1886af049-config-volume\") pod \"collect-profiles-29331885-d28dx\" (UID: \"13a0c8de-83e7-47e6-9234-92c1886af049\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.375417 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13a0c8de-83e7-47e6-9234-92c1886af049-config-volume\") pod \"collect-profiles-29331885-d28dx\" (UID: \"13a0c8de-83e7-47e6-9234-92c1886af049\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.394721 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13a0c8de-83e7-47e6-9234-92c1886af049-secret-volume\") pod \"collect-profiles-29331885-d28dx\" (UID: \"13a0c8de-83e7-47e6-9234-92c1886af049\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.397717 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7xdx\" (UniqueName: \"kubernetes.io/projected/13a0c8de-83e7-47e6-9234-92c1886af049-kube-api-access-h7xdx\") pod \"collect-profiles-29331885-d28dx\" (UID: \"13a0c8de-83e7-47e6-9234-92c1886af049\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.507699 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" Oct 08 08:45:00 crc kubenswrapper[4958]: I1008 08:45:00.978768 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx"] Oct 08 08:45:01 crc kubenswrapper[4958]: I1008 08:45:01.196750 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" event={"ID":"13a0c8de-83e7-47e6-9234-92c1886af049","Type":"ContainerStarted","Data":"9df52c463d4d5f7ec42af5148e4b00638d5f03c81fb2e6213f09a45fc5243482"} Oct 08 08:45:01 crc kubenswrapper[4958]: I1008 08:45:01.197350 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" event={"ID":"13a0c8de-83e7-47e6-9234-92c1886af049","Type":"ContainerStarted","Data":"dd5f186010f606063c9e1a3854d68ec3d2682e7766c01b3e7dc8c5e80dda863d"} Oct 08 08:45:01 crc kubenswrapper[4958]: I1008 08:45:01.215652 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" podStartSLOduration=1.215631637 podStartE2EDuration="1.215631637s" podCreationTimestamp="2025-10-08 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 08:45:01.212616615 +0000 UTC m=+7844.342309226" watchObservedRunningTime="2025-10-08 08:45:01.215631637 +0000 UTC m=+7844.345324238" Oct 08 08:45:02 crc kubenswrapper[4958]: I1008 08:45:02.210974 4958 generic.go:334] "Generic (PLEG): container finished" podID="13a0c8de-83e7-47e6-9234-92c1886af049" containerID="9df52c463d4d5f7ec42af5148e4b00638d5f03c81fb2e6213f09a45fc5243482" exitCode=0 Oct 08 08:45:02 crc kubenswrapper[4958]: I1008 08:45:02.211024 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" event={"ID":"13a0c8de-83e7-47e6-9234-92c1886af049","Type":"ContainerDied","Data":"9df52c463d4d5f7ec42af5148e4b00638d5f03c81fb2e6213f09a45fc5243482"} Oct 08 08:45:03 crc kubenswrapper[4958]: I1008 08:45:03.641085 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" Oct 08 08:45:03 crc kubenswrapper[4958]: I1008 08:45:03.746975 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7xdx\" (UniqueName: \"kubernetes.io/projected/13a0c8de-83e7-47e6-9234-92c1886af049-kube-api-access-h7xdx\") pod \"13a0c8de-83e7-47e6-9234-92c1886af049\" (UID: \"13a0c8de-83e7-47e6-9234-92c1886af049\") " Oct 08 08:45:03 crc kubenswrapper[4958]: I1008 08:45:03.747334 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13a0c8de-83e7-47e6-9234-92c1886af049-secret-volume\") pod \"13a0c8de-83e7-47e6-9234-92c1886af049\" (UID: \"13a0c8de-83e7-47e6-9234-92c1886af049\") " Oct 08 08:45:03 crc kubenswrapper[4958]: I1008 08:45:03.747397 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13a0c8de-83e7-47e6-9234-92c1886af049-config-volume\") pod \"13a0c8de-83e7-47e6-9234-92c1886af049\" (UID: \"13a0c8de-83e7-47e6-9234-92c1886af049\") " Oct 08 08:45:03 crc kubenswrapper[4958]: I1008 08:45:03.748426 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a0c8de-83e7-47e6-9234-92c1886af049-config-volume" (OuterVolumeSpecName: "config-volume") pod "13a0c8de-83e7-47e6-9234-92c1886af049" (UID: "13a0c8de-83e7-47e6-9234-92c1886af049"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:45:03 crc kubenswrapper[4958]: I1008 08:45:03.753895 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a0c8de-83e7-47e6-9234-92c1886af049-kube-api-access-h7xdx" (OuterVolumeSpecName: "kube-api-access-h7xdx") pod "13a0c8de-83e7-47e6-9234-92c1886af049" (UID: "13a0c8de-83e7-47e6-9234-92c1886af049"). InnerVolumeSpecName "kube-api-access-h7xdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:45:03 crc kubenswrapper[4958]: I1008 08:45:03.754681 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a0c8de-83e7-47e6-9234-92c1886af049-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "13a0c8de-83e7-47e6-9234-92c1886af049" (UID: "13a0c8de-83e7-47e6-9234-92c1886af049"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:45:03 crc kubenswrapper[4958]: I1008 08:45:03.850837 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13a0c8de-83e7-47e6-9234-92c1886af049-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 08:45:03 crc kubenswrapper[4958]: I1008 08:45:03.850893 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13a0c8de-83e7-47e6-9234-92c1886af049-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 08:45:03 crc kubenswrapper[4958]: I1008 08:45:03.850915 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7xdx\" (UniqueName: \"kubernetes.io/projected/13a0c8de-83e7-47e6-9234-92c1886af049-kube-api-access-h7xdx\") on node \"crc\" DevicePath \"\"" Oct 08 08:45:04 crc kubenswrapper[4958]: I1008 08:45:04.237203 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" 
event={"ID":"13a0c8de-83e7-47e6-9234-92c1886af049","Type":"ContainerDied","Data":"dd5f186010f606063c9e1a3854d68ec3d2682e7766c01b3e7dc8c5e80dda863d"} Oct 08 08:45:04 crc kubenswrapper[4958]: I1008 08:45:04.237528 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd5f186010f606063c9e1a3854d68ec3d2682e7766c01b3e7dc8c5e80dda863d" Oct 08 08:45:04 crc kubenswrapper[4958]: I1008 08:45:04.237292 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx" Oct 08 08:45:04 crc kubenswrapper[4958]: I1008 08:45:04.304823 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd"] Oct 08 08:45:04 crc kubenswrapper[4958]: I1008 08:45:04.314936 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331840-hwwjd"] Oct 08 08:45:05 crc kubenswrapper[4958]: I1008 08:45:05.606722 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7bf775e-4caf-4e56-bdae-82da59751def" path="/var/lib/kubelet/pods/c7bf775e-4caf-4e56-bdae-82da59751def/volumes" Oct 08 08:45:43 crc kubenswrapper[4958]: I1008 08:45:43.701051 4958 generic.go:334] "Generic (PLEG): container finished" podID="566119a8-2995-4435-a0de-fba57da4718c" containerID="8531ff35db7940374acef709d89a8bea88299af3cc5b39c09fee6310bc8f4e7d" exitCode=0 Oct 08 08:45:43 crc kubenswrapper[4958]: I1008 08:45:43.701112 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-l4sgd" event={"ID":"566119a8-2995-4435-a0de-fba57da4718c","Type":"ContainerDied","Data":"8531ff35db7940374acef709d89a8bea88299af3cc5b39c09fee6310bc8f4e7d"} Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.224643 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-l4sgd" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.289517 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/566119a8-2995-4435-a0de-fba57da4718c-inventory\") pod \"566119a8-2995-4435-a0de-fba57da4718c\" (UID: \"566119a8-2995-4435-a0de-fba57da4718c\") " Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.289728 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/566119a8-2995-4435-a0de-fba57da4718c-ssh-key\") pod \"566119a8-2995-4435-a0de-fba57da4718c\" (UID: \"566119a8-2995-4435-a0de-fba57da4718c\") " Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.289822 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l7m8\" (UniqueName: \"kubernetes.io/projected/566119a8-2995-4435-a0de-fba57da4718c-kube-api-access-4l7m8\") pod \"566119a8-2995-4435-a0de-fba57da4718c\" (UID: \"566119a8-2995-4435-a0de-fba57da4718c\") " Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.297806 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/566119a8-2995-4435-a0de-fba57da4718c-kube-api-access-4l7m8" (OuterVolumeSpecName: "kube-api-access-4l7m8") pod "566119a8-2995-4435-a0de-fba57da4718c" (UID: "566119a8-2995-4435-a0de-fba57da4718c"). InnerVolumeSpecName "kube-api-access-4l7m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.324980 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/566119a8-2995-4435-a0de-fba57da4718c-inventory" (OuterVolumeSpecName: "inventory") pod "566119a8-2995-4435-a0de-fba57da4718c" (UID: "566119a8-2995-4435-a0de-fba57da4718c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.334699 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/566119a8-2995-4435-a0de-fba57da4718c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "566119a8-2995-4435-a0de-fba57da4718c" (UID: "566119a8-2995-4435-a0de-fba57da4718c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.392979 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/566119a8-2995-4435-a0de-fba57da4718c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.393028 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l7m8\" (UniqueName: \"kubernetes.io/projected/566119a8-2995-4435-a0de-fba57da4718c-kube-api-access-4l7m8\") on node \"crc\" DevicePath \"\"" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.393046 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/566119a8-2995-4435-a0de-fba57da4718c-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.726088 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-l4sgd" event={"ID":"566119a8-2995-4435-a0de-fba57da4718c","Type":"ContainerDied","Data":"60d1cb3fe36176612ae9781118cd0b093d23f5fcf7cf82974bd9ba8930c091b5"} Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.726168 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60d1cb3fe36176612ae9781118cd0b093d23f5fcf7cf82974bd9ba8930c091b5" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.726173 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-l4sgd" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.846689 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-lf2pg"] Oct 08 08:45:45 crc kubenswrapper[4958]: E1008 08:45:45.848108 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a0c8de-83e7-47e6-9234-92c1886af049" containerName="collect-profiles" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.848134 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a0c8de-83e7-47e6-9234-92c1886af049" containerName="collect-profiles" Oct 08 08:45:45 crc kubenswrapper[4958]: E1008 08:45:45.848188 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="566119a8-2995-4435-a0de-fba57da4718c" containerName="install-os-openstack-openstack-cell1" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.848199 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="566119a8-2995-4435-a0de-fba57da4718c" containerName="install-os-openstack-openstack-cell1" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.848992 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="566119a8-2995-4435-a0de-fba57da4718c" containerName="install-os-openstack-openstack-cell1" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.849065 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="13a0c8de-83e7-47e6-9234-92c1886af049" containerName="collect-profiles" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.850497 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.859110 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.861894 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.862511 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.863209 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.894501 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-lf2pg"] Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.902400 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5489b57b-58cd-4678-aa3e-6cda0bc13926-inventory\") pod \"configure-os-openstack-openstack-cell1-lf2pg\" (UID: \"5489b57b-58cd-4678-aa3e-6cda0bc13926\") " pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.902474 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7hkm\" (UniqueName: \"kubernetes.io/projected/5489b57b-58cd-4678-aa3e-6cda0bc13926-kube-api-access-f7hkm\") pod \"configure-os-openstack-openstack-cell1-lf2pg\" (UID: \"5489b57b-58cd-4678-aa3e-6cda0bc13926\") " pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" Oct 08 08:45:45 crc kubenswrapper[4958]: I1008 08:45:45.902601 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5489b57b-58cd-4678-aa3e-6cda0bc13926-ssh-key\") pod \"configure-os-openstack-openstack-cell1-lf2pg\" (UID: \"5489b57b-58cd-4678-aa3e-6cda0bc13926\") " pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" Oct 08 08:45:46 crc kubenswrapper[4958]: I1008 08:45:46.005140 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5489b57b-58cd-4678-aa3e-6cda0bc13926-inventory\") pod \"configure-os-openstack-openstack-cell1-lf2pg\" (UID: \"5489b57b-58cd-4678-aa3e-6cda0bc13926\") " pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" Oct 08 08:45:46 crc kubenswrapper[4958]: I1008 08:45:46.005415 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7hkm\" (UniqueName: \"kubernetes.io/projected/5489b57b-58cd-4678-aa3e-6cda0bc13926-kube-api-access-f7hkm\") pod \"configure-os-openstack-openstack-cell1-lf2pg\" (UID: \"5489b57b-58cd-4678-aa3e-6cda0bc13926\") " pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" Oct 08 08:45:46 crc kubenswrapper[4958]: I1008 08:45:46.005488 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5489b57b-58cd-4678-aa3e-6cda0bc13926-ssh-key\") pod \"configure-os-openstack-openstack-cell1-lf2pg\" (UID: \"5489b57b-58cd-4678-aa3e-6cda0bc13926\") " pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" Oct 08 08:45:46 crc kubenswrapper[4958]: I1008 08:45:46.009782 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5489b57b-58cd-4678-aa3e-6cda0bc13926-inventory\") pod \"configure-os-openstack-openstack-cell1-lf2pg\" (UID: \"5489b57b-58cd-4678-aa3e-6cda0bc13926\") " pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" Oct 08 08:45:46 crc kubenswrapper[4958]: I1008 08:45:46.011218 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5489b57b-58cd-4678-aa3e-6cda0bc13926-ssh-key\") pod \"configure-os-openstack-openstack-cell1-lf2pg\" (UID: \"5489b57b-58cd-4678-aa3e-6cda0bc13926\") " pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" Oct 08 08:45:46 crc kubenswrapper[4958]: I1008 08:45:46.041649 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7hkm\" (UniqueName: \"kubernetes.io/projected/5489b57b-58cd-4678-aa3e-6cda0bc13926-kube-api-access-f7hkm\") pod \"configure-os-openstack-openstack-cell1-lf2pg\" (UID: \"5489b57b-58cd-4678-aa3e-6cda0bc13926\") " pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" Oct 08 08:45:46 crc kubenswrapper[4958]: I1008 08:45:46.184340 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" Oct 08 08:45:46 crc kubenswrapper[4958]: I1008 08:45:46.590125 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-lf2pg"] Oct 08 08:45:46 crc kubenswrapper[4958]: I1008 08:45:46.737075 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" event={"ID":"5489b57b-58cd-4678-aa3e-6cda0bc13926","Type":"ContainerStarted","Data":"db83d52881058a8e224485d807630222cb66c2aae304d8d822bc968f25ad4c5c"} Oct 08 08:45:47 crc kubenswrapper[4958]: I1008 08:45:47.748693 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" event={"ID":"5489b57b-58cd-4678-aa3e-6cda0bc13926","Type":"ContainerStarted","Data":"06b6c13701464ac5c867122b211959332b9c4f0f2e74ed56cf852a525dc83b89"} Oct 08 08:45:47 crc kubenswrapper[4958]: I1008 08:45:47.771864 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" 
podStartSLOduration=2.166646025 podStartE2EDuration="2.771843904s" podCreationTimestamp="2025-10-08 08:45:45 +0000 UTC" firstStartedPulling="2025-10-08 08:45:46.600498771 +0000 UTC m=+7889.730191412" lastFinishedPulling="2025-10-08 08:45:47.20569668 +0000 UTC m=+7890.335389291" observedRunningTime="2025-10-08 08:45:47.768469021 +0000 UTC m=+7890.898161632" watchObservedRunningTime="2025-10-08 08:45:47.771843904 +0000 UTC m=+7890.901536515" Oct 08 08:45:48 crc kubenswrapper[4958]: I1008 08:45:48.503887 4958 scope.go:117] "RemoveContainer" containerID="ff654a90469f9cc0217471bdb33e28289aa7756e894283eb10ee309c1923831f" Oct 08 08:46:06 crc kubenswrapper[4958]: I1008 08:46:06.845362 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:46:06 crc kubenswrapper[4958]: I1008 08:46:06.847285 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:46:35 crc kubenswrapper[4958]: I1008 08:46:35.411434 4958 generic.go:334] "Generic (PLEG): container finished" podID="5489b57b-58cd-4678-aa3e-6cda0bc13926" containerID="06b6c13701464ac5c867122b211959332b9c4f0f2e74ed56cf852a525dc83b89" exitCode=0 Oct 08 08:46:35 crc kubenswrapper[4958]: I1008 08:46:35.411564 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" event={"ID":"5489b57b-58cd-4678-aa3e-6cda0bc13926","Type":"ContainerDied","Data":"06b6c13701464ac5c867122b211959332b9c4f0f2e74ed56cf852a525dc83b89"} Oct 08 08:46:36 crc kubenswrapper[4958]: 
I1008 08:46:36.844537 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:46:36 crc kubenswrapper[4958]: I1008 08:46:36.844914 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:46:36 crc kubenswrapper[4958]: I1008 08:46:36.918743 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.043847 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5489b57b-58cd-4678-aa3e-6cda0bc13926-ssh-key\") pod \"5489b57b-58cd-4678-aa3e-6cda0bc13926\" (UID: \"5489b57b-58cd-4678-aa3e-6cda0bc13926\") " Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.044155 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7hkm\" (UniqueName: \"kubernetes.io/projected/5489b57b-58cd-4678-aa3e-6cda0bc13926-kube-api-access-f7hkm\") pod \"5489b57b-58cd-4678-aa3e-6cda0bc13926\" (UID: \"5489b57b-58cd-4678-aa3e-6cda0bc13926\") " Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.044394 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5489b57b-58cd-4678-aa3e-6cda0bc13926-inventory\") pod \"5489b57b-58cd-4678-aa3e-6cda0bc13926\" (UID: \"5489b57b-58cd-4678-aa3e-6cda0bc13926\") " Oct 08 08:46:37 crc kubenswrapper[4958]: 
I1008 08:46:37.051050 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5489b57b-58cd-4678-aa3e-6cda0bc13926-kube-api-access-f7hkm" (OuterVolumeSpecName: "kube-api-access-f7hkm") pod "5489b57b-58cd-4678-aa3e-6cda0bc13926" (UID: "5489b57b-58cd-4678-aa3e-6cda0bc13926"). InnerVolumeSpecName "kube-api-access-f7hkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.085916 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5489b57b-58cd-4678-aa3e-6cda0bc13926-inventory" (OuterVolumeSpecName: "inventory") pod "5489b57b-58cd-4678-aa3e-6cda0bc13926" (UID: "5489b57b-58cd-4678-aa3e-6cda0bc13926"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.086027 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5489b57b-58cd-4678-aa3e-6cda0bc13926-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5489b57b-58cd-4678-aa3e-6cda0bc13926" (UID: "5489b57b-58cd-4678-aa3e-6cda0bc13926"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.148279 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5489b57b-58cd-4678-aa3e-6cda0bc13926-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.148337 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7hkm\" (UniqueName: \"kubernetes.io/projected/5489b57b-58cd-4678-aa3e-6cda0bc13926-kube-api-access-f7hkm\") on node \"crc\" DevicePath \"\"" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.148359 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5489b57b-58cd-4678-aa3e-6cda0bc13926-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.441831 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" event={"ID":"5489b57b-58cd-4678-aa3e-6cda0bc13926","Type":"ContainerDied","Data":"db83d52881058a8e224485d807630222cb66c2aae304d8d822bc968f25ad4c5c"} Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.442149 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db83d52881058a8e224485d807630222cb66c2aae304d8d822bc968f25ad4c5c" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.441925 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-lf2pg" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.540295 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-r9wb9"] Oct 08 08:46:37 crc kubenswrapper[4958]: E1008 08:46:37.541309 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5489b57b-58cd-4678-aa3e-6cda0bc13926" containerName="configure-os-openstack-openstack-cell1" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.541327 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5489b57b-58cd-4678-aa3e-6cda0bc13926" containerName="configure-os-openstack-openstack-cell1" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.541562 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5489b57b-58cd-4678-aa3e-6cda0bc13926" containerName="configure-os-openstack-openstack-cell1" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.542366 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-r9wb9" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.546405 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.546729 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.546876 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.552156 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.558061 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-r9wb9"] Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.660870 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/adf74cc2-101a-479f-bcbb-9b298a3875bc-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-r9wb9\" (UID: \"adf74cc2-101a-479f-bcbb-9b298a3875bc\") " pod="openstack/ssh-known-hosts-openstack-r9wb9" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.661102 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/adf74cc2-101a-479f-bcbb-9b298a3875bc-inventory-0\") pod \"ssh-known-hosts-openstack-r9wb9\" (UID: \"adf74cc2-101a-479f-bcbb-9b298a3875bc\") " pod="openstack/ssh-known-hosts-openstack-r9wb9" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.661483 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqgmf\" (UniqueName: 
\"kubernetes.io/projected/adf74cc2-101a-479f-bcbb-9b298a3875bc-kube-api-access-sqgmf\") pod \"ssh-known-hosts-openstack-r9wb9\" (UID: \"adf74cc2-101a-479f-bcbb-9b298a3875bc\") " pod="openstack/ssh-known-hosts-openstack-r9wb9" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.763292 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/adf74cc2-101a-479f-bcbb-9b298a3875bc-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-r9wb9\" (UID: \"adf74cc2-101a-479f-bcbb-9b298a3875bc\") " pod="openstack/ssh-known-hosts-openstack-r9wb9" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.763389 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/adf74cc2-101a-479f-bcbb-9b298a3875bc-inventory-0\") pod \"ssh-known-hosts-openstack-r9wb9\" (UID: \"adf74cc2-101a-479f-bcbb-9b298a3875bc\") " pod="openstack/ssh-known-hosts-openstack-r9wb9" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.763527 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqgmf\" (UniqueName: \"kubernetes.io/projected/adf74cc2-101a-479f-bcbb-9b298a3875bc-kube-api-access-sqgmf\") pod \"ssh-known-hosts-openstack-r9wb9\" (UID: \"adf74cc2-101a-479f-bcbb-9b298a3875bc\") " pod="openstack/ssh-known-hosts-openstack-r9wb9" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.770030 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/adf74cc2-101a-479f-bcbb-9b298a3875bc-inventory-0\") pod \"ssh-known-hosts-openstack-r9wb9\" (UID: \"adf74cc2-101a-479f-bcbb-9b298a3875bc\") " pod="openstack/ssh-known-hosts-openstack-r9wb9" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.773516 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/adf74cc2-101a-479f-bcbb-9b298a3875bc-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-r9wb9\" (UID: \"adf74cc2-101a-479f-bcbb-9b298a3875bc\") " pod="openstack/ssh-known-hosts-openstack-r9wb9" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.808784 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqgmf\" (UniqueName: \"kubernetes.io/projected/adf74cc2-101a-479f-bcbb-9b298a3875bc-kube-api-access-sqgmf\") pod \"ssh-known-hosts-openstack-r9wb9\" (UID: \"adf74cc2-101a-479f-bcbb-9b298a3875bc\") " pod="openstack/ssh-known-hosts-openstack-r9wb9" Oct 08 08:46:37 crc kubenswrapper[4958]: I1008 08:46:37.873597 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-r9wb9" Oct 08 08:46:38 crc kubenswrapper[4958]: I1008 08:46:38.536523 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-r9wb9"] Oct 08 08:46:39 crc kubenswrapper[4958]: I1008 08:46:39.480618 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-r9wb9" event={"ID":"adf74cc2-101a-479f-bcbb-9b298a3875bc","Type":"ContainerStarted","Data":"f8b17cdf9ddf481f2c85b5342d4f99c92e81d4f3666d0ebd0b0abdd37f4ab284"} Oct 08 08:46:39 crc kubenswrapper[4958]: I1008 08:46:39.480976 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-r9wb9" event={"ID":"adf74cc2-101a-479f-bcbb-9b298a3875bc","Type":"ContainerStarted","Data":"3cdff1a39565583fe26b3bf98ab28d475b9de646f999989b89fcf9822fb66374"} Oct 08 08:46:39 crc kubenswrapper[4958]: I1008 08:46:39.501963 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-r9wb9" podStartSLOduration=1.894191523 podStartE2EDuration="2.501927131s" podCreationTimestamp="2025-10-08 08:46:37 +0000 UTC" firstStartedPulling="2025-10-08 08:46:38.54346166 +0000 UTC m=+7941.673154261" 
lastFinishedPulling="2025-10-08 08:46:39.151197258 +0000 UTC m=+7942.280889869" observedRunningTime="2025-10-08 08:46:39.498619101 +0000 UTC m=+7942.628311712" watchObservedRunningTime="2025-10-08 08:46:39.501927131 +0000 UTC m=+7942.631619732" Oct 08 08:46:48 crc kubenswrapper[4958]: I1008 08:46:48.579914 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-r9wb9" event={"ID":"adf74cc2-101a-479f-bcbb-9b298a3875bc","Type":"ContainerDied","Data":"f8b17cdf9ddf481f2c85b5342d4f99c92e81d4f3666d0ebd0b0abdd37f4ab284"} Oct 08 08:46:48 crc kubenswrapper[4958]: I1008 08:46:48.579935 4958 generic.go:334] "Generic (PLEG): container finished" podID="adf74cc2-101a-479f-bcbb-9b298a3875bc" containerID="f8b17cdf9ddf481f2c85b5342d4f99c92e81d4f3666d0ebd0b0abdd37f4ab284" exitCode=0 Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.222548 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-r9wb9" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.346272 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqgmf\" (UniqueName: \"kubernetes.io/projected/adf74cc2-101a-479f-bcbb-9b298a3875bc-kube-api-access-sqgmf\") pod \"adf74cc2-101a-479f-bcbb-9b298a3875bc\" (UID: \"adf74cc2-101a-479f-bcbb-9b298a3875bc\") " Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.346380 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/adf74cc2-101a-479f-bcbb-9b298a3875bc-inventory-0\") pod \"adf74cc2-101a-479f-bcbb-9b298a3875bc\" (UID: \"adf74cc2-101a-479f-bcbb-9b298a3875bc\") " Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.346421 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/adf74cc2-101a-479f-bcbb-9b298a3875bc-ssh-key-openstack-cell1\") pod 
\"adf74cc2-101a-479f-bcbb-9b298a3875bc\" (UID: \"adf74cc2-101a-479f-bcbb-9b298a3875bc\") " Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.365179 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf74cc2-101a-479f-bcbb-9b298a3875bc-kube-api-access-sqgmf" (OuterVolumeSpecName: "kube-api-access-sqgmf") pod "adf74cc2-101a-479f-bcbb-9b298a3875bc" (UID: "adf74cc2-101a-479f-bcbb-9b298a3875bc"). InnerVolumeSpecName "kube-api-access-sqgmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.380767 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf74cc2-101a-479f-bcbb-9b298a3875bc-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "adf74cc2-101a-479f-bcbb-9b298a3875bc" (UID: "adf74cc2-101a-479f-bcbb-9b298a3875bc"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.387179 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf74cc2-101a-479f-bcbb-9b298a3875bc-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "adf74cc2-101a-479f-bcbb-9b298a3875bc" (UID: "adf74cc2-101a-479f-bcbb-9b298a3875bc"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.449278 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqgmf\" (UniqueName: \"kubernetes.io/projected/adf74cc2-101a-479f-bcbb-9b298a3875bc-kube-api-access-sqgmf\") on node \"crc\" DevicePath \"\"" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.449330 4958 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/adf74cc2-101a-479f-bcbb-9b298a3875bc-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.449348 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/adf74cc2-101a-479f-bcbb-9b298a3875bc-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.610910 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-r9wb9" event={"ID":"adf74cc2-101a-479f-bcbb-9b298a3875bc","Type":"ContainerDied","Data":"3cdff1a39565583fe26b3bf98ab28d475b9de646f999989b89fcf9822fb66374"} Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.611342 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cdff1a39565583fe26b3bf98ab28d475b9de646f999989b89fcf9822fb66374" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.611212 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-r9wb9" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.705399 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-dnxt5"] Oct 08 08:46:50 crc kubenswrapper[4958]: E1008 08:46:50.706017 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf74cc2-101a-479f-bcbb-9b298a3875bc" containerName="ssh-known-hosts-openstack" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.706038 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf74cc2-101a-479f-bcbb-9b298a3875bc" containerName="ssh-known-hosts-openstack" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.706334 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf74cc2-101a-479f-bcbb-9b298a3875bc" containerName="ssh-known-hosts-openstack" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.707296 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-dnxt5" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.709609 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.709903 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.710541 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.710621 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.726861 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-dnxt5"] Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 
08:46:50.857972 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b134e035-bf32-4f2f-a14f-24105e9bcb87-inventory\") pod \"run-os-openstack-openstack-cell1-dnxt5\" (UID: \"b134e035-bf32-4f2f-a14f-24105e9bcb87\") " pod="openstack/run-os-openstack-openstack-cell1-dnxt5" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.858823 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b134e035-bf32-4f2f-a14f-24105e9bcb87-ssh-key\") pod \"run-os-openstack-openstack-cell1-dnxt5\" (UID: \"b134e035-bf32-4f2f-a14f-24105e9bcb87\") " pod="openstack/run-os-openstack-openstack-cell1-dnxt5" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.859008 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdt6d\" (UniqueName: \"kubernetes.io/projected/b134e035-bf32-4f2f-a14f-24105e9bcb87-kube-api-access-qdt6d\") pod \"run-os-openstack-openstack-cell1-dnxt5\" (UID: \"b134e035-bf32-4f2f-a14f-24105e9bcb87\") " pod="openstack/run-os-openstack-openstack-cell1-dnxt5" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.961248 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b134e035-bf32-4f2f-a14f-24105e9bcb87-ssh-key\") pod \"run-os-openstack-openstack-cell1-dnxt5\" (UID: \"b134e035-bf32-4f2f-a14f-24105e9bcb87\") " pod="openstack/run-os-openstack-openstack-cell1-dnxt5" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.961318 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdt6d\" (UniqueName: \"kubernetes.io/projected/b134e035-bf32-4f2f-a14f-24105e9bcb87-kube-api-access-qdt6d\") pod \"run-os-openstack-openstack-cell1-dnxt5\" (UID: \"b134e035-bf32-4f2f-a14f-24105e9bcb87\") " 
pod="openstack/run-os-openstack-openstack-cell1-dnxt5" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.961425 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b134e035-bf32-4f2f-a14f-24105e9bcb87-inventory\") pod \"run-os-openstack-openstack-cell1-dnxt5\" (UID: \"b134e035-bf32-4f2f-a14f-24105e9bcb87\") " pod="openstack/run-os-openstack-openstack-cell1-dnxt5" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.968461 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b134e035-bf32-4f2f-a14f-24105e9bcb87-inventory\") pod \"run-os-openstack-openstack-cell1-dnxt5\" (UID: \"b134e035-bf32-4f2f-a14f-24105e9bcb87\") " pod="openstack/run-os-openstack-openstack-cell1-dnxt5" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.970492 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b134e035-bf32-4f2f-a14f-24105e9bcb87-ssh-key\") pod \"run-os-openstack-openstack-cell1-dnxt5\" (UID: \"b134e035-bf32-4f2f-a14f-24105e9bcb87\") " pod="openstack/run-os-openstack-openstack-cell1-dnxt5" Oct 08 08:46:50 crc kubenswrapper[4958]: I1008 08:46:50.985687 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdt6d\" (UniqueName: \"kubernetes.io/projected/b134e035-bf32-4f2f-a14f-24105e9bcb87-kube-api-access-qdt6d\") pod \"run-os-openstack-openstack-cell1-dnxt5\" (UID: \"b134e035-bf32-4f2f-a14f-24105e9bcb87\") " pod="openstack/run-os-openstack-openstack-cell1-dnxt5" Oct 08 08:46:51 crc kubenswrapper[4958]: I1008 08:46:51.025895 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-dnxt5" Oct 08 08:46:51 crc kubenswrapper[4958]: I1008 08:46:51.601748 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-dnxt5"] Oct 08 08:46:51 crc kubenswrapper[4958]: I1008 08:46:51.622373 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-dnxt5" event={"ID":"b134e035-bf32-4f2f-a14f-24105e9bcb87","Type":"ContainerStarted","Data":"d887b1770c11eaa06c12eef071aa1d6c6658585374368a6a4a762d41919be3a9"} Oct 08 08:46:52 crc kubenswrapper[4958]: I1008 08:46:52.633883 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-dnxt5" event={"ID":"b134e035-bf32-4f2f-a14f-24105e9bcb87","Type":"ContainerStarted","Data":"507627f23817857cc80b0d737883a6859e92ec6cdcb3a5f42dd909b167d82a7e"} Oct 08 08:46:52 crc kubenswrapper[4958]: I1008 08:46:52.659546 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-dnxt5" podStartSLOduration=2.121343489 podStartE2EDuration="2.659519969s" podCreationTimestamp="2025-10-08 08:46:50 +0000 UTC" firstStartedPulling="2025-10-08 08:46:51.608144452 +0000 UTC m=+7954.737837053" lastFinishedPulling="2025-10-08 08:46:52.146320892 +0000 UTC m=+7955.276013533" observedRunningTime="2025-10-08 08:46:52.6507773 +0000 UTC m=+7955.780469911" watchObservedRunningTime="2025-10-08 08:46:52.659519969 +0000 UTC m=+7955.789212570" Oct 08 08:47:01 crc kubenswrapper[4958]: I1008 08:47:01.755462 4958 generic.go:334] "Generic (PLEG): container finished" podID="b134e035-bf32-4f2f-a14f-24105e9bcb87" containerID="507627f23817857cc80b0d737883a6859e92ec6cdcb3a5f42dd909b167d82a7e" exitCode=0 Oct 08 08:47:01 crc kubenswrapper[4958]: I1008 08:47:01.755571 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-dnxt5" 
event={"ID":"b134e035-bf32-4f2f-a14f-24105e9bcb87","Type":"ContainerDied","Data":"507627f23817857cc80b0d737883a6859e92ec6cdcb3a5f42dd909b167d82a7e"} Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.431140 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-dnxt5" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.559444 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b134e035-bf32-4f2f-a14f-24105e9bcb87-ssh-key\") pod \"b134e035-bf32-4f2f-a14f-24105e9bcb87\" (UID: \"b134e035-bf32-4f2f-a14f-24105e9bcb87\") " Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.559563 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b134e035-bf32-4f2f-a14f-24105e9bcb87-inventory\") pod \"b134e035-bf32-4f2f-a14f-24105e9bcb87\" (UID: \"b134e035-bf32-4f2f-a14f-24105e9bcb87\") " Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.559639 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdt6d\" (UniqueName: \"kubernetes.io/projected/b134e035-bf32-4f2f-a14f-24105e9bcb87-kube-api-access-qdt6d\") pod \"b134e035-bf32-4f2f-a14f-24105e9bcb87\" (UID: \"b134e035-bf32-4f2f-a14f-24105e9bcb87\") " Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.565766 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b134e035-bf32-4f2f-a14f-24105e9bcb87-kube-api-access-qdt6d" (OuterVolumeSpecName: "kube-api-access-qdt6d") pod "b134e035-bf32-4f2f-a14f-24105e9bcb87" (UID: "b134e035-bf32-4f2f-a14f-24105e9bcb87"). InnerVolumeSpecName "kube-api-access-qdt6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.616871 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b134e035-bf32-4f2f-a14f-24105e9bcb87-inventory" (OuterVolumeSpecName: "inventory") pod "b134e035-bf32-4f2f-a14f-24105e9bcb87" (UID: "b134e035-bf32-4f2f-a14f-24105e9bcb87"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.618206 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b134e035-bf32-4f2f-a14f-24105e9bcb87-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b134e035-bf32-4f2f-a14f-24105e9bcb87" (UID: "b134e035-bf32-4f2f-a14f-24105e9bcb87"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.663555 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b134e035-bf32-4f2f-a14f-24105e9bcb87-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.664754 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdt6d\" (UniqueName: \"kubernetes.io/projected/b134e035-bf32-4f2f-a14f-24105e9bcb87-kube-api-access-qdt6d\") on node \"crc\" DevicePath \"\"" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.664800 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b134e035-bf32-4f2f-a14f-24105e9bcb87-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.795803 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-dnxt5" 
event={"ID":"b134e035-bf32-4f2f-a14f-24105e9bcb87","Type":"ContainerDied","Data":"d887b1770c11eaa06c12eef071aa1d6c6658585374368a6a4a762d41919be3a9"} Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.796264 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d887b1770c11eaa06c12eef071aa1d6c6658585374368a6a4a762d41919be3a9" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.796311 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-dnxt5" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.899148 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-958gs"] Oct 08 08:47:03 crc kubenswrapper[4958]: E1008 08:47:03.899894 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b134e035-bf32-4f2f-a14f-24105e9bcb87" containerName="run-os-openstack-openstack-cell1" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.899924 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b134e035-bf32-4f2f-a14f-24105e9bcb87" containerName="run-os-openstack-openstack-cell1" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.900387 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b134e035-bf32-4f2f-a14f-24105e9bcb87" containerName="run-os-openstack-openstack-cell1" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.901618 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-958gs" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.904390 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.904541 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.905078 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.905942 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.918983 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-958gs"] Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.933633 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vkx7t"] Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.938050 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:03 crc kubenswrapper[4958]: I1008 08:47:03.962092 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkx7t"] Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.075695 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de48192a-1fb9-4e3b-a917-335a2be02e36-utilities\") pod \"certified-operators-vkx7t\" (UID: \"de48192a-1fb9-4e3b-a917-335a2be02e36\") " pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.075864 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbksp\" (UniqueName: \"kubernetes.io/projected/de48192a-1fb9-4e3b-a917-335a2be02e36-kube-api-access-tbksp\") pod \"certified-operators-vkx7t\" (UID: \"de48192a-1fb9-4e3b-a917-335a2be02e36\") " pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.075916 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-958gs\" (UID: \"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae\") " pod="openstack/reboot-os-openstack-openstack-cell1-958gs" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.076039 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de48192a-1fb9-4e3b-a917-335a2be02e36-catalog-content\") pod \"certified-operators-vkx7t\" (UID: \"de48192a-1fb9-4e3b-a917-335a2be02e36\") " pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.076110 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqnst\" (UniqueName: \"kubernetes.io/projected/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-kube-api-access-zqnst\") pod \"reboot-os-openstack-openstack-cell1-958gs\" (UID: \"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae\") " pod="openstack/reboot-os-openstack-openstack-cell1-958gs" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.076527 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-inventory\") pod \"reboot-os-openstack-openstack-cell1-958gs\" (UID: \"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae\") " pod="openstack/reboot-os-openstack-openstack-cell1-958gs" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.109248 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q42jz"] Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.111345 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.128484 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q42jz"] Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.179277 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de48192a-1fb9-4e3b-a917-335a2be02e36-utilities\") pod \"certified-operators-vkx7t\" (UID: \"de48192a-1fb9-4e3b-a917-335a2be02e36\") " pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.179383 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbksp\" (UniqueName: \"kubernetes.io/projected/de48192a-1fb9-4e3b-a917-335a2be02e36-kube-api-access-tbksp\") pod \"certified-operators-vkx7t\" (UID: \"de48192a-1fb9-4e3b-a917-335a2be02e36\") " pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.179425 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-958gs\" (UID: \"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae\") " pod="openstack/reboot-os-openstack-openstack-cell1-958gs" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.179457 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de48192a-1fb9-4e3b-a917-335a2be02e36-catalog-content\") pod \"certified-operators-vkx7t\" (UID: \"de48192a-1fb9-4e3b-a917-335a2be02e36\") " pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.179493 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zqnst\" (UniqueName: \"kubernetes.io/projected/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-kube-api-access-zqnst\") pod \"reboot-os-openstack-openstack-cell1-958gs\" (UID: \"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae\") " pod="openstack/reboot-os-openstack-openstack-cell1-958gs" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.179624 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-inventory\") pod \"reboot-os-openstack-openstack-cell1-958gs\" (UID: \"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae\") " pod="openstack/reboot-os-openstack-openstack-cell1-958gs" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.180368 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de48192a-1fb9-4e3b-a917-335a2be02e36-catalog-content\") pod \"certified-operators-vkx7t\" (UID: \"de48192a-1fb9-4e3b-a917-335a2be02e36\") " pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.181083 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de48192a-1fb9-4e3b-a917-335a2be02e36-utilities\") pod \"certified-operators-vkx7t\" (UID: \"de48192a-1fb9-4e3b-a917-335a2be02e36\") " pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.184807 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-inventory\") pod \"reboot-os-openstack-openstack-cell1-958gs\" (UID: \"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae\") " pod="openstack/reboot-os-openstack-openstack-cell1-958gs" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.185805 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-958gs\" (UID: \"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae\") " pod="openstack/reboot-os-openstack-openstack-cell1-958gs" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.200895 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqnst\" (UniqueName: \"kubernetes.io/projected/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-kube-api-access-zqnst\") pod \"reboot-os-openstack-openstack-cell1-958gs\" (UID: \"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae\") " pod="openstack/reboot-os-openstack-openstack-cell1-958gs" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.205047 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbksp\" (UniqueName: \"kubernetes.io/projected/de48192a-1fb9-4e3b-a917-335a2be02e36-kube-api-access-tbksp\") pod \"certified-operators-vkx7t\" (UID: \"de48192a-1fb9-4e3b-a917-335a2be02e36\") " pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.228540 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-958gs" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.282190 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.283635 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dbd1eb2-c264-4f12-b437-8203ec9db720-utilities\") pod \"redhat-marketplace-q42jz\" (UID: \"6dbd1eb2-c264-4f12-b437-8203ec9db720\") " pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.283810 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gqk\" (UniqueName: \"kubernetes.io/projected/6dbd1eb2-c264-4f12-b437-8203ec9db720-kube-api-access-r8gqk\") pod \"redhat-marketplace-q42jz\" (UID: \"6dbd1eb2-c264-4f12-b437-8203ec9db720\") " pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.284006 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dbd1eb2-c264-4f12-b437-8203ec9db720-catalog-content\") pod \"redhat-marketplace-q42jz\" (UID: \"6dbd1eb2-c264-4f12-b437-8203ec9db720\") " pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.386350 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dbd1eb2-c264-4f12-b437-8203ec9db720-utilities\") pod \"redhat-marketplace-q42jz\" (UID: \"6dbd1eb2-c264-4f12-b437-8203ec9db720\") " pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.386457 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8gqk\" (UniqueName: \"kubernetes.io/projected/6dbd1eb2-c264-4f12-b437-8203ec9db720-kube-api-access-r8gqk\") pod 
\"redhat-marketplace-q42jz\" (UID: \"6dbd1eb2-c264-4f12-b437-8203ec9db720\") " pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.386572 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dbd1eb2-c264-4f12-b437-8203ec9db720-catalog-content\") pod \"redhat-marketplace-q42jz\" (UID: \"6dbd1eb2-c264-4f12-b437-8203ec9db720\") " pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.387151 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dbd1eb2-c264-4f12-b437-8203ec9db720-catalog-content\") pod \"redhat-marketplace-q42jz\" (UID: \"6dbd1eb2-c264-4f12-b437-8203ec9db720\") " pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.387442 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dbd1eb2-c264-4f12-b437-8203ec9db720-utilities\") pod \"redhat-marketplace-q42jz\" (UID: \"6dbd1eb2-c264-4f12-b437-8203ec9db720\") " pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.406798 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8gqk\" (UniqueName: \"kubernetes.io/projected/6dbd1eb2-c264-4f12-b437-8203ec9db720-kube-api-access-r8gqk\") pod \"redhat-marketplace-q42jz\" (UID: \"6dbd1eb2-c264-4f12-b437-8203ec9db720\") " pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.519482 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:04 crc kubenswrapper[4958]: I1008 08:47:04.920829 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkx7t"] Oct 08 08:47:05 crc kubenswrapper[4958]: I1008 08:47:05.024891 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-958gs"] Oct 08 08:47:05 crc kubenswrapper[4958]: W1008 08:47:05.037702 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a10c7d3_c479_4432_9d5c_89b4c5eae5ae.slice/crio-473fa7af15aeb677b4331ee59e5081c0086de06bcff9c578fbcc11e431686116 WatchSource:0}: Error finding container 473fa7af15aeb677b4331ee59e5081c0086de06bcff9c578fbcc11e431686116: Status 404 returned error can't find the container with id 473fa7af15aeb677b4331ee59e5081c0086de06bcff9c578fbcc11e431686116 Oct 08 08:47:05 crc kubenswrapper[4958]: I1008 08:47:05.128554 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q42jz"] Oct 08 08:47:05 crc kubenswrapper[4958]: W1008 08:47:05.135868 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dbd1eb2_c264_4f12_b437_8203ec9db720.slice/crio-4eebc16099d67ec2085341d58e1bcb1c9f21fb7b46f10cdca07f1c8e254ce67e WatchSource:0}: Error finding container 4eebc16099d67ec2085341d58e1bcb1c9f21fb7b46f10cdca07f1c8e254ce67e: Status 404 returned error can't find the container with id 4eebc16099d67ec2085341d58e1bcb1c9f21fb7b46f10cdca07f1c8e254ce67e Oct 08 08:47:05 crc kubenswrapper[4958]: I1008 08:47:05.816129 4958 generic.go:334] "Generic (PLEG): container finished" podID="de48192a-1fb9-4e3b-a917-335a2be02e36" containerID="b8fefe4f4f8f30a8875c8db472e1aff09ee82a2ee6cf094034c8da8d9038efcf" exitCode=0 Oct 08 08:47:05 crc kubenswrapper[4958]: I1008 08:47:05.816213 
4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkx7t" event={"ID":"de48192a-1fb9-4e3b-a917-335a2be02e36","Type":"ContainerDied","Data":"b8fefe4f4f8f30a8875c8db472e1aff09ee82a2ee6cf094034c8da8d9038efcf"} Oct 08 08:47:05 crc kubenswrapper[4958]: I1008 08:47:05.816521 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkx7t" event={"ID":"de48192a-1fb9-4e3b-a917-335a2be02e36","Type":"ContainerStarted","Data":"bb7cfc483fc5da5829e0dfcf8084b6ff74514ca498a2b9ddf14c2868c079040f"} Oct 08 08:47:05 crc kubenswrapper[4958]: I1008 08:47:05.819855 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-958gs" event={"ID":"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae","Type":"ContainerStarted","Data":"473fa7af15aeb677b4331ee59e5081c0086de06bcff9c578fbcc11e431686116"} Oct 08 08:47:05 crc kubenswrapper[4958]: I1008 08:47:05.822289 4958 generic.go:334] "Generic (PLEG): container finished" podID="6dbd1eb2-c264-4f12-b437-8203ec9db720" containerID="bc1cc6fed630cc643c8eeb5bd283a132e20ffa7957711a918104bd45453a6334" exitCode=0 Oct 08 08:47:05 crc kubenswrapper[4958]: I1008 08:47:05.822358 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q42jz" event={"ID":"6dbd1eb2-c264-4f12-b437-8203ec9db720","Type":"ContainerDied","Data":"bc1cc6fed630cc643c8eeb5bd283a132e20ffa7957711a918104bd45453a6334"} Oct 08 08:47:05 crc kubenswrapper[4958]: I1008 08:47:05.822419 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q42jz" event={"ID":"6dbd1eb2-c264-4f12-b437-8203ec9db720","Type":"ContainerStarted","Data":"4eebc16099d67ec2085341d58e1bcb1c9f21fb7b46f10cdca07f1c8e254ce67e"} Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.314221 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jcqxc"] Oct 08 08:47:06 crc 
kubenswrapper[4958]: I1008 08:47:06.318754 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.381262 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jcqxc"] Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.455147 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-utilities\") pod \"redhat-operators-jcqxc\" (UID: \"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca\") " pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.455253 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-catalog-content\") pod \"redhat-operators-jcqxc\" (UID: \"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca\") " pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.455320 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd99w\" (UniqueName: \"kubernetes.io/projected/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-kube-api-access-sd99w\") pod \"redhat-operators-jcqxc\" (UID: \"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca\") " pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.557535 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd99w\" (UniqueName: \"kubernetes.io/projected/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-kube-api-access-sd99w\") pod \"redhat-operators-jcqxc\" (UID: \"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca\") " pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:06 crc 
kubenswrapper[4958]: I1008 08:47:06.557648 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-utilities\") pod \"redhat-operators-jcqxc\" (UID: \"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca\") " pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.557727 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-catalog-content\") pod \"redhat-operators-jcqxc\" (UID: \"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca\") " pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.558158 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-catalog-content\") pod \"redhat-operators-jcqxc\" (UID: \"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca\") " pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.558729 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-utilities\") pod \"redhat-operators-jcqxc\" (UID: \"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca\") " pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.579241 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd99w\" (UniqueName: \"kubernetes.io/projected/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-kube-api-access-sd99w\") pod \"redhat-operators-jcqxc\" (UID: \"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca\") " pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.650169 4958 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.846626 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.847004 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.847069 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.848072 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9792847903de8882201e5c6f9c09f7860e8eac25dd2466f7b851b233acc9c4e0"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.848154 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://9792847903de8882201e5c6f9c09f7860e8eac25dd2466f7b851b233acc9c4e0" gracePeriod=600 Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.854882 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-openstack-openstack-cell1-958gs" event={"ID":"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae","Type":"ContainerStarted","Data":"20c0a8a3f8a54d178db5ba79b4404281b1998e6a2dd67667b8e75de3b91eae2f"} Oct 08 08:47:06 crc kubenswrapper[4958]: I1008 08:47:06.881313 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-958gs" podStartSLOduration=3.331162478 podStartE2EDuration="3.881298284s" podCreationTimestamp="2025-10-08 08:47:03 +0000 UTC" firstStartedPulling="2025-10-08 08:47:05.040295845 +0000 UTC m=+7968.169988436" lastFinishedPulling="2025-10-08 08:47:05.590431631 +0000 UTC m=+7968.720124242" observedRunningTime="2025-10-08 08:47:06.869803341 +0000 UTC m=+7969.999495952" watchObservedRunningTime="2025-10-08 08:47:06.881298284 +0000 UTC m=+7970.010990885" Oct 08 08:47:07 crc kubenswrapper[4958]: I1008 08:47:07.143562 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jcqxc"] Oct 08 08:47:07 crc kubenswrapper[4958]: W1008 08:47:07.160297 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ef9b410_d82f_4b33_becd_e7dd5ef2a3ca.slice/crio-150b77fd96edd258e8f0ad638a97c31629722143767888f94fab3d23a8979255 WatchSource:0}: Error finding container 150b77fd96edd258e8f0ad638a97c31629722143767888f94fab3d23a8979255: Status 404 returned error can't find the container with id 150b77fd96edd258e8f0ad638a97c31629722143767888f94fab3d23a8979255 Oct 08 08:47:07 crc kubenswrapper[4958]: I1008 08:47:07.880389 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="9792847903de8882201e5c6f9c09f7860e8eac25dd2466f7b851b233acc9c4e0" exitCode=0 Oct 08 08:47:07 crc kubenswrapper[4958]: I1008 08:47:07.880461 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" 
event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"9792847903de8882201e5c6f9c09f7860e8eac25dd2466f7b851b233acc9c4e0"} Oct 08 08:47:07 crc kubenswrapper[4958]: I1008 08:47:07.881206 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd"} Oct 08 08:47:07 crc kubenswrapper[4958]: I1008 08:47:07.881232 4958 scope.go:117] "RemoveContainer" containerID="347331262e3ee59ce2f8873ce09d46e238b4f6026ea405739efd3adee6f46f94" Oct 08 08:47:07 crc kubenswrapper[4958]: I1008 08:47:07.886326 4958 generic.go:334] "Generic (PLEG): container finished" podID="de48192a-1fb9-4e3b-a917-335a2be02e36" containerID="5e3951bcf3edf708883ef9525a1638035016d108947b3dcf87aaba25978f37d3" exitCode=0 Oct 08 08:47:07 crc kubenswrapper[4958]: I1008 08:47:07.886450 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkx7t" event={"ID":"de48192a-1fb9-4e3b-a917-335a2be02e36","Type":"ContainerDied","Data":"5e3951bcf3edf708883ef9525a1638035016d108947b3dcf87aaba25978f37d3"} Oct 08 08:47:07 crc kubenswrapper[4958]: I1008 08:47:07.889116 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcqxc" event={"ID":"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca","Type":"ContainerStarted","Data":"934466524c93369cde3b3e7a3c41d229c6bc9dbc2bd623742b332f789c99f06e"} Oct 08 08:47:07 crc kubenswrapper[4958]: I1008 08:47:07.889168 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcqxc" event={"ID":"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca","Type":"ContainerStarted","Data":"150b77fd96edd258e8f0ad638a97c31629722143767888f94fab3d23a8979255"} Oct 08 08:47:07 crc kubenswrapper[4958]: I1008 08:47:07.892856 4958 generic.go:334] "Generic (PLEG): container 
finished" podID="6dbd1eb2-c264-4f12-b437-8203ec9db720" containerID="85dc45a090b7aa8e3ecbaa2fdaa460ae3b84d8ab87687ce739be5ed51d612c50" exitCode=0 Oct 08 08:47:07 crc kubenswrapper[4958]: I1008 08:47:07.893515 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q42jz" event={"ID":"6dbd1eb2-c264-4f12-b437-8203ec9db720","Type":"ContainerDied","Data":"85dc45a090b7aa8e3ecbaa2fdaa460ae3b84d8ab87687ce739be5ed51d612c50"} Oct 08 08:47:08 crc kubenswrapper[4958]: I1008 08:47:08.909047 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkx7t" event={"ID":"de48192a-1fb9-4e3b-a917-335a2be02e36","Type":"ContainerStarted","Data":"bdff88540af7707050c4deac059eb9236be076550f249e7c806f96aa1a465731"} Oct 08 08:47:08 crc kubenswrapper[4958]: I1008 08:47:08.915044 4958 generic.go:334] "Generic (PLEG): container finished" podID="4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" containerID="934466524c93369cde3b3e7a3c41d229c6bc9dbc2bd623742b332f789c99f06e" exitCode=0 Oct 08 08:47:08 crc kubenswrapper[4958]: I1008 08:47:08.915161 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcqxc" event={"ID":"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca","Type":"ContainerDied","Data":"934466524c93369cde3b3e7a3c41d229c6bc9dbc2bd623742b332f789c99f06e"} Oct 08 08:47:08 crc kubenswrapper[4958]: I1008 08:47:08.926313 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q42jz" event={"ID":"6dbd1eb2-c264-4f12-b437-8203ec9db720","Type":"ContainerStarted","Data":"90f0d12de4a1340f80007d366ee393ddefeea836f6daa4f9e349f75e929754b1"} Oct 08 08:47:08 crc kubenswrapper[4958]: I1008 08:47:08.947557 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vkx7t" podStartSLOduration=3.352571264 podStartE2EDuration="5.947529473s" podCreationTimestamp="2025-10-08 08:47:03 +0000 UTC" 
firstStartedPulling="2025-10-08 08:47:05.819811372 +0000 UTC m=+7968.949504013" lastFinishedPulling="2025-10-08 08:47:08.414769581 +0000 UTC m=+7971.544462222" observedRunningTime="2025-10-08 08:47:08.932326008 +0000 UTC m=+7972.062018619" watchObservedRunningTime="2025-10-08 08:47:08.947529473 +0000 UTC m=+7972.077222114" Oct 08 08:47:08 crc kubenswrapper[4958]: I1008 08:47:08.983695 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q42jz" podStartSLOduration=2.4145630049999998 podStartE2EDuration="4.983665689s" podCreationTimestamp="2025-10-08 08:47:04 +0000 UTC" firstStartedPulling="2025-10-08 08:47:05.824238512 +0000 UTC m=+7968.953931113" lastFinishedPulling="2025-10-08 08:47:08.393341186 +0000 UTC m=+7971.523033797" observedRunningTime="2025-10-08 08:47:08.97490698 +0000 UTC m=+7972.104599601" watchObservedRunningTime="2025-10-08 08:47:08.983665689 +0000 UTC m=+7972.113358300" Oct 08 08:47:10 crc kubenswrapper[4958]: I1008 08:47:10.964404 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcqxc" event={"ID":"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca","Type":"ContainerStarted","Data":"f218788e13f21f80b8fa8b8403c6fff9d77ef3881560b2b9b7a7e77cc388bfac"} Oct 08 08:47:13 crc kubenswrapper[4958]: I1008 08:47:13.995292 4958 generic.go:334] "Generic (PLEG): container finished" podID="4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" containerID="f218788e13f21f80b8fa8b8403c6fff9d77ef3881560b2b9b7a7e77cc388bfac" exitCode=0 Oct 08 08:47:13 crc kubenswrapper[4958]: I1008 08:47:13.995356 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcqxc" event={"ID":"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca","Type":"ContainerDied","Data":"f218788e13f21f80b8fa8b8403c6fff9d77ef3881560b2b9b7a7e77cc388bfac"} Oct 08 08:47:14 crc kubenswrapper[4958]: I1008 08:47:14.283133 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:14 crc kubenswrapper[4958]: I1008 08:47:14.283171 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:14 crc kubenswrapper[4958]: I1008 08:47:14.359188 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:14 crc kubenswrapper[4958]: I1008 08:47:14.521143 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:14 crc kubenswrapper[4958]: I1008 08:47:14.522138 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:14 crc kubenswrapper[4958]: I1008 08:47:14.572551 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:15 crc kubenswrapper[4958]: I1008 08:47:15.012186 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcqxc" event={"ID":"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca","Type":"ContainerStarted","Data":"ab92e32ed3437cd7b181ab6e6effa5ef739c6e097fc1c9213ac5a1522a6049ea"} Oct 08 08:47:15 crc kubenswrapper[4958]: I1008 08:47:15.037906 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jcqxc" podStartSLOduration=3.532881721 podStartE2EDuration="9.03788924s" podCreationTimestamp="2025-10-08 08:47:06 +0000 UTC" firstStartedPulling="2025-10-08 08:47:08.923204939 +0000 UTC m=+7972.052897550" lastFinishedPulling="2025-10-08 08:47:14.428212468 +0000 UTC m=+7977.557905069" observedRunningTime="2025-10-08 08:47:15.03422845 +0000 UTC m=+7978.163921051" watchObservedRunningTime="2025-10-08 08:47:15.03788924 +0000 UTC m=+7978.167581831" Oct 08 08:47:15 crc kubenswrapper[4958]: I1008 
08:47:15.076166 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:15 crc kubenswrapper[4958]: I1008 08:47:15.100098 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:16 crc kubenswrapper[4958]: I1008 08:47:16.651293 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:16 crc kubenswrapper[4958]: I1008 08:47:16.651699 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:16 crc kubenswrapper[4958]: I1008 08:47:16.708994 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkx7t"] Oct 08 08:47:17 crc kubenswrapper[4958]: I1008 08:47:17.036233 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vkx7t" podUID="de48192a-1fb9-4e3b-a917-335a2be02e36" containerName="registry-server" containerID="cri-o://bdff88540af7707050c4deac059eb9236be076550f249e7c806f96aa1a465731" gracePeriod=2 Oct 08 08:47:17 crc kubenswrapper[4958]: I1008 08:47:17.305453 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q42jz"] Oct 08 08:47:17 crc kubenswrapper[4958]: I1008 08:47:17.306031 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q42jz" podUID="6dbd1eb2-c264-4f12-b437-8203ec9db720" containerName="registry-server" containerID="cri-o://90f0d12de4a1340f80007d366ee393ddefeea836f6daa4f9e349f75e929754b1" gracePeriod=2 Oct 08 08:47:17 crc kubenswrapper[4958]: I1008 08:47:17.722451 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:17 crc kubenswrapper[4958]: I1008 08:47:17.729846 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jcqxc" podUID="4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" containerName="registry-server" probeResult="failure" output=< Oct 08 08:47:17 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 08:47:17 crc kubenswrapper[4958]: > Oct 08 08:47:17 crc kubenswrapper[4958]: I1008 08:47:17.832433 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de48192a-1fb9-4e3b-a917-335a2be02e36-utilities\") pod \"de48192a-1fb9-4e3b-a917-335a2be02e36\" (UID: \"de48192a-1fb9-4e3b-a917-335a2be02e36\") " Oct 08 08:47:17 crc kubenswrapper[4958]: I1008 08:47:17.832885 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbksp\" (UniqueName: \"kubernetes.io/projected/de48192a-1fb9-4e3b-a917-335a2be02e36-kube-api-access-tbksp\") pod \"de48192a-1fb9-4e3b-a917-335a2be02e36\" (UID: \"de48192a-1fb9-4e3b-a917-335a2be02e36\") " Oct 08 08:47:17 crc kubenswrapper[4958]: I1008 08:47:17.833224 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de48192a-1fb9-4e3b-a917-335a2be02e36-catalog-content\") pod \"de48192a-1fb9-4e3b-a917-335a2be02e36\" (UID: \"de48192a-1fb9-4e3b-a917-335a2be02e36\") " Oct 08 08:47:17 crc kubenswrapper[4958]: I1008 08:47:17.840879 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de48192a-1fb9-4e3b-a917-335a2be02e36-kube-api-access-tbksp" (OuterVolumeSpecName: "kube-api-access-tbksp") pod "de48192a-1fb9-4e3b-a917-335a2be02e36" (UID: "de48192a-1fb9-4e3b-a917-335a2be02e36"). InnerVolumeSpecName "kube-api-access-tbksp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:47:17 crc kubenswrapper[4958]: I1008 08:47:17.847263 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de48192a-1fb9-4e3b-a917-335a2be02e36-utilities" (OuterVolumeSpecName: "utilities") pod "de48192a-1fb9-4e3b-a917-335a2be02e36" (UID: "de48192a-1fb9-4e3b-a917-335a2be02e36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:47:17 crc kubenswrapper[4958]: I1008 08:47:17.879563 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de48192a-1fb9-4e3b-a917-335a2be02e36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de48192a-1fb9-4e3b-a917-335a2be02e36" (UID: "de48192a-1fb9-4e3b-a917-335a2be02e36"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:47:17 crc kubenswrapper[4958]: I1008 08:47:17.911785 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:17 crc kubenswrapper[4958]: I1008 08:47:17.936618 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de48192a-1fb9-4e3b-a917-335a2be02e36-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:47:17 crc kubenswrapper[4958]: I1008 08:47:17.936667 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbksp\" (UniqueName: \"kubernetes.io/projected/de48192a-1fb9-4e3b-a917-335a2be02e36-kube-api-access-tbksp\") on node \"crc\" DevicePath \"\"" Oct 08 08:47:17 crc kubenswrapper[4958]: I1008 08:47:17.936684 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de48192a-1fb9-4e3b-a917-335a2be02e36-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.037540 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8gqk\" (UniqueName: \"kubernetes.io/projected/6dbd1eb2-c264-4f12-b437-8203ec9db720-kube-api-access-r8gqk\") pod \"6dbd1eb2-c264-4f12-b437-8203ec9db720\" (UID: \"6dbd1eb2-c264-4f12-b437-8203ec9db720\") " Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.037637 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dbd1eb2-c264-4f12-b437-8203ec9db720-utilities\") pod \"6dbd1eb2-c264-4f12-b437-8203ec9db720\" (UID: \"6dbd1eb2-c264-4f12-b437-8203ec9db720\") " Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.037697 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dbd1eb2-c264-4f12-b437-8203ec9db720-catalog-content\") pod \"6dbd1eb2-c264-4f12-b437-8203ec9db720\" (UID: \"6dbd1eb2-c264-4f12-b437-8203ec9db720\") " Oct 08 08:47:18 crc kubenswrapper[4958]: 
I1008 08:47:18.038360 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dbd1eb2-c264-4f12-b437-8203ec9db720-utilities" (OuterVolumeSpecName: "utilities") pod "6dbd1eb2-c264-4f12-b437-8203ec9db720" (UID: "6dbd1eb2-c264-4f12-b437-8203ec9db720"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.042118 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dbd1eb2-c264-4f12-b437-8203ec9db720-kube-api-access-r8gqk" (OuterVolumeSpecName: "kube-api-access-r8gqk") pod "6dbd1eb2-c264-4f12-b437-8203ec9db720" (UID: "6dbd1eb2-c264-4f12-b437-8203ec9db720"). InnerVolumeSpecName "kube-api-access-r8gqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.050582 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dbd1eb2-c264-4f12-b437-8203ec9db720-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6dbd1eb2-c264-4f12-b437-8203ec9db720" (UID: "6dbd1eb2-c264-4f12-b437-8203ec9db720"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.053324 4958 generic.go:334] "Generic (PLEG): container finished" podID="de48192a-1fb9-4e3b-a917-335a2be02e36" containerID="bdff88540af7707050c4deac059eb9236be076550f249e7c806f96aa1a465731" exitCode=0 Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.053400 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vkx7t" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.053397 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkx7t" event={"ID":"de48192a-1fb9-4e3b-a917-335a2be02e36","Type":"ContainerDied","Data":"bdff88540af7707050c4deac059eb9236be076550f249e7c806f96aa1a465731"} Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.053481 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkx7t" event={"ID":"de48192a-1fb9-4e3b-a917-335a2be02e36","Type":"ContainerDied","Data":"bb7cfc483fc5da5829e0dfcf8084b6ff74514ca498a2b9ddf14c2868c079040f"} Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.053506 4958 scope.go:117] "RemoveContainer" containerID="bdff88540af7707050c4deac059eb9236be076550f249e7c806f96aa1a465731" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.060829 4958 generic.go:334] "Generic (PLEG): container finished" podID="6dbd1eb2-c264-4f12-b437-8203ec9db720" containerID="90f0d12de4a1340f80007d366ee393ddefeea836f6daa4f9e349f75e929754b1" exitCode=0 Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.060862 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q42jz" event={"ID":"6dbd1eb2-c264-4f12-b437-8203ec9db720","Type":"ContainerDied","Data":"90f0d12de4a1340f80007d366ee393ddefeea836f6daa4f9e349f75e929754b1"} Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.060883 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q42jz" event={"ID":"6dbd1eb2-c264-4f12-b437-8203ec9db720","Type":"ContainerDied","Data":"4eebc16099d67ec2085341d58e1bcb1c9f21fb7b46f10cdca07f1c8e254ce67e"} Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.060954 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q42jz" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.090278 4958 scope.go:117] "RemoveContainer" containerID="5e3951bcf3edf708883ef9525a1638035016d108947b3dcf87aaba25978f37d3" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.115375 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkx7t"] Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.127172 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vkx7t"] Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.140451 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8gqk\" (UniqueName: \"kubernetes.io/projected/6dbd1eb2-c264-4f12-b437-8203ec9db720-kube-api-access-r8gqk\") on node \"crc\" DevicePath \"\"" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.140485 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dbd1eb2-c264-4f12-b437-8203ec9db720-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.140495 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dbd1eb2-c264-4f12-b437-8203ec9db720-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.148013 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q42jz"] Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.151798 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q42jz"] Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.153131 4958 scope.go:117] "RemoveContainer" containerID="b8fefe4f4f8f30a8875c8db472e1aff09ee82a2ee6cf094034c8da8d9038efcf" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.180265 
4958 scope.go:117] "RemoveContainer" containerID="bdff88540af7707050c4deac059eb9236be076550f249e7c806f96aa1a465731" Oct 08 08:47:18 crc kubenswrapper[4958]: E1008 08:47:18.183414 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdff88540af7707050c4deac059eb9236be076550f249e7c806f96aa1a465731\": container with ID starting with bdff88540af7707050c4deac059eb9236be076550f249e7c806f96aa1a465731 not found: ID does not exist" containerID="bdff88540af7707050c4deac059eb9236be076550f249e7c806f96aa1a465731" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.183460 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdff88540af7707050c4deac059eb9236be076550f249e7c806f96aa1a465731"} err="failed to get container status \"bdff88540af7707050c4deac059eb9236be076550f249e7c806f96aa1a465731\": rpc error: code = NotFound desc = could not find container \"bdff88540af7707050c4deac059eb9236be076550f249e7c806f96aa1a465731\": container with ID starting with bdff88540af7707050c4deac059eb9236be076550f249e7c806f96aa1a465731 not found: ID does not exist" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.183486 4958 scope.go:117] "RemoveContainer" containerID="5e3951bcf3edf708883ef9525a1638035016d108947b3dcf87aaba25978f37d3" Oct 08 08:47:18 crc kubenswrapper[4958]: E1008 08:47:18.184070 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e3951bcf3edf708883ef9525a1638035016d108947b3dcf87aaba25978f37d3\": container with ID starting with 5e3951bcf3edf708883ef9525a1638035016d108947b3dcf87aaba25978f37d3 not found: ID does not exist" containerID="5e3951bcf3edf708883ef9525a1638035016d108947b3dcf87aaba25978f37d3" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.184104 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5e3951bcf3edf708883ef9525a1638035016d108947b3dcf87aaba25978f37d3"} err="failed to get container status \"5e3951bcf3edf708883ef9525a1638035016d108947b3dcf87aaba25978f37d3\": rpc error: code = NotFound desc = could not find container \"5e3951bcf3edf708883ef9525a1638035016d108947b3dcf87aaba25978f37d3\": container with ID starting with 5e3951bcf3edf708883ef9525a1638035016d108947b3dcf87aaba25978f37d3 not found: ID does not exist" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.184131 4958 scope.go:117] "RemoveContainer" containerID="b8fefe4f4f8f30a8875c8db472e1aff09ee82a2ee6cf094034c8da8d9038efcf" Oct 08 08:47:18 crc kubenswrapper[4958]: E1008 08:47:18.184448 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8fefe4f4f8f30a8875c8db472e1aff09ee82a2ee6cf094034c8da8d9038efcf\": container with ID starting with b8fefe4f4f8f30a8875c8db472e1aff09ee82a2ee6cf094034c8da8d9038efcf not found: ID does not exist" containerID="b8fefe4f4f8f30a8875c8db472e1aff09ee82a2ee6cf094034c8da8d9038efcf" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.184527 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8fefe4f4f8f30a8875c8db472e1aff09ee82a2ee6cf094034c8da8d9038efcf"} err="failed to get container status \"b8fefe4f4f8f30a8875c8db472e1aff09ee82a2ee6cf094034c8da8d9038efcf\": rpc error: code = NotFound desc = could not find container \"b8fefe4f4f8f30a8875c8db472e1aff09ee82a2ee6cf094034c8da8d9038efcf\": container with ID starting with b8fefe4f4f8f30a8875c8db472e1aff09ee82a2ee6cf094034c8da8d9038efcf not found: ID does not exist" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.184561 4958 scope.go:117] "RemoveContainer" containerID="90f0d12de4a1340f80007d366ee393ddefeea836f6daa4f9e349f75e929754b1" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.245352 4958 scope.go:117] "RemoveContainer" 
containerID="85dc45a090b7aa8e3ecbaa2fdaa460ae3b84d8ab87687ce739be5ed51d612c50" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.280656 4958 scope.go:117] "RemoveContainer" containerID="bc1cc6fed630cc643c8eeb5bd283a132e20ffa7957711a918104bd45453a6334" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.316151 4958 scope.go:117] "RemoveContainer" containerID="90f0d12de4a1340f80007d366ee393ddefeea836f6daa4f9e349f75e929754b1" Oct 08 08:47:18 crc kubenswrapper[4958]: E1008 08:47:18.316629 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90f0d12de4a1340f80007d366ee393ddefeea836f6daa4f9e349f75e929754b1\": container with ID starting with 90f0d12de4a1340f80007d366ee393ddefeea836f6daa4f9e349f75e929754b1 not found: ID does not exist" containerID="90f0d12de4a1340f80007d366ee393ddefeea836f6daa4f9e349f75e929754b1" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.316671 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90f0d12de4a1340f80007d366ee393ddefeea836f6daa4f9e349f75e929754b1"} err="failed to get container status \"90f0d12de4a1340f80007d366ee393ddefeea836f6daa4f9e349f75e929754b1\": rpc error: code = NotFound desc = could not find container \"90f0d12de4a1340f80007d366ee393ddefeea836f6daa4f9e349f75e929754b1\": container with ID starting with 90f0d12de4a1340f80007d366ee393ddefeea836f6daa4f9e349f75e929754b1 not found: ID does not exist" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.316698 4958 scope.go:117] "RemoveContainer" containerID="85dc45a090b7aa8e3ecbaa2fdaa460ae3b84d8ab87687ce739be5ed51d612c50" Oct 08 08:47:18 crc kubenswrapper[4958]: E1008 08:47:18.317698 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85dc45a090b7aa8e3ecbaa2fdaa460ae3b84d8ab87687ce739be5ed51d612c50\": container with ID starting with 
85dc45a090b7aa8e3ecbaa2fdaa460ae3b84d8ab87687ce739be5ed51d612c50 not found: ID does not exist" containerID="85dc45a090b7aa8e3ecbaa2fdaa460ae3b84d8ab87687ce739be5ed51d612c50" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.317743 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85dc45a090b7aa8e3ecbaa2fdaa460ae3b84d8ab87687ce739be5ed51d612c50"} err="failed to get container status \"85dc45a090b7aa8e3ecbaa2fdaa460ae3b84d8ab87687ce739be5ed51d612c50\": rpc error: code = NotFound desc = could not find container \"85dc45a090b7aa8e3ecbaa2fdaa460ae3b84d8ab87687ce739be5ed51d612c50\": container with ID starting with 85dc45a090b7aa8e3ecbaa2fdaa460ae3b84d8ab87687ce739be5ed51d612c50 not found: ID does not exist" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.317790 4958 scope.go:117] "RemoveContainer" containerID="bc1cc6fed630cc643c8eeb5bd283a132e20ffa7957711a918104bd45453a6334" Oct 08 08:47:18 crc kubenswrapper[4958]: E1008 08:47:18.318286 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc1cc6fed630cc643c8eeb5bd283a132e20ffa7957711a918104bd45453a6334\": container with ID starting with bc1cc6fed630cc643c8eeb5bd283a132e20ffa7957711a918104bd45453a6334 not found: ID does not exist" containerID="bc1cc6fed630cc643c8eeb5bd283a132e20ffa7957711a918104bd45453a6334" Oct 08 08:47:18 crc kubenswrapper[4958]: I1008 08:47:18.318321 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc1cc6fed630cc643c8eeb5bd283a132e20ffa7957711a918104bd45453a6334"} err="failed to get container status \"bc1cc6fed630cc643c8eeb5bd283a132e20ffa7957711a918104bd45453a6334\": rpc error: code = NotFound desc = could not find container \"bc1cc6fed630cc643c8eeb5bd283a132e20ffa7957711a918104bd45453a6334\": container with ID starting with bc1cc6fed630cc643c8eeb5bd283a132e20ffa7957711a918104bd45453a6334 not found: ID does not 
exist" Oct 08 08:47:19 crc kubenswrapper[4958]: I1008 08:47:19.599377 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dbd1eb2-c264-4f12-b437-8203ec9db720" path="/var/lib/kubelet/pods/6dbd1eb2-c264-4f12-b437-8203ec9db720/volumes" Oct 08 08:47:19 crc kubenswrapper[4958]: I1008 08:47:19.600635 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de48192a-1fb9-4e3b-a917-335a2be02e36" path="/var/lib/kubelet/pods/de48192a-1fb9-4e3b-a917-335a2be02e36/volumes" Oct 08 08:47:22 crc kubenswrapper[4958]: I1008 08:47:22.115436 4958 generic.go:334] "Generic (PLEG): container finished" podID="1a10c7d3-c479-4432-9d5c-89b4c5eae5ae" containerID="20c0a8a3f8a54d178db5ba79b4404281b1998e6a2dd67667b8e75de3b91eae2f" exitCode=0 Oct 08 08:47:22 crc kubenswrapper[4958]: I1008 08:47:22.115489 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-958gs" event={"ID":"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae","Type":"ContainerDied","Data":"20c0a8a3f8a54d178db5ba79b4404281b1998e6a2dd67667b8e75de3b91eae2f"} Oct 08 08:47:23 crc kubenswrapper[4958]: I1008 08:47:23.585674 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-958gs" Oct 08 08:47:23 crc kubenswrapper[4958]: I1008 08:47:23.671602 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqnst\" (UniqueName: \"kubernetes.io/projected/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-kube-api-access-zqnst\") pod \"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae\" (UID: \"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae\") " Oct 08 08:47:23 crc kubenswrapper[4958]: I1008 08:47:23.671764 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-ssh-key\") pod \"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae\" (UID: \"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae\") " Oct 08 08:47:23 crc kubenswrapper[4958]: I1008 08:47:23.671881 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-inventory\") pod \"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae\" (UID: \"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae\") " Oct 08 08:47:23 crc kubenswrapper[4958]: I1008 08:47:23.677632 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-kube-api-access-zqnst" (OuterVolumeSpecName: "kube-api-access-zqnst") pod "1a10c7d3-c479-4432-9d5c-89b4c5eae5ae" (UID: "1a10c7d3-c479-4432-9d5c-89b4c5eae5ae"). InnerVolumeSpecName "kube-api-access-zqnst". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:47:23 crc kubenswrapper[4958]: I1008 08:47:23.706005 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-inventory" (OuterVolumeSpecName: "inventory") pod "1a10c7d3-c479-4432-9d5c-89b4c5eae5ae" (UID: "1a10c7d3-c479-4432-9d5c-89b4c5eae5ae"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:47:23 crc kubenswrapper[4958]: I1008 08:47:23.716855 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1a10c7d3-c479-4432-9d5c-89b4c5eae5ae" (UID: "1a10c7d3-c479-4432-9d5c-89b4c5eae5ae"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:47:23 crc kubenswrapper[4958]: I1008 08:47:23.774458 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:47:23 crc kubenswrapper[4958]: I1008 08:47:23.774684 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 08:47:23 crc kubenswrapper[4958]: I1008 08:47:23.774775 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqnst\" (UniqueName: \"kubernetes.io/projected/1a10c7d3-c479-4432-9d5c-89b4c5eae5ae-kube-api-access-zqnst\") on node \"crc\" DevicePath \"\"" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.140925 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-958gs" event={"ID":"1a10c7d3-c479-4432-9d5c-89b4c5eae5ae","Type":"ContainerDied","Data":"473fa7af15aeb677b4331ee59e5081c0086de06bcff9c578fbcc11e431686116"} Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.141246 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="473fa7af15aeb677b4331ee59e5081c0086de06bcff9c578fbcc11e431686116" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.141103 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-958gs" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.282301 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-l6x9j"] Oct 08 08:47:24 crc kubenswrapper[4958]: E1008 08:47:24.282869 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de48192a-1fb9-4e3b-a917-335a2be02e36" containerName="extract-content" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.282894 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="de48192a-1fb9-4e3b-a917-335a2be02e36" containerName="extract-content" Oct 08 08:47:24 crc kubenswrapper[4958]: E1008 08:47:24.282915 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de48192a-1fb9-4e3b-a917-335a2be02e36" containerName="registry-server" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.282928 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="de48192a-1fb9-4e3b-a917-335a2be02e36" containerName="registry-server" Oct 08 08:47:24 crc kubenswrapper[4958]: E1008 08:47:24.282971 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbd1eb2-c264-4f12-b437-8203ec9db720" containerName="extract-utilities" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.282981 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbd1eb2-c264-4f12-b437-8203ec9db720" containerName="extract-utilities" Oct 08 08:47:24 crc kubenswrapper[4958]: E1008 08:47:24.283002 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbd1eb2-c264-4f12-b437-8203ec9db720" containerName="extract-content" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.283012 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbd1eb2-c264-4f12-b437-8203ec9db720" containerName="extract-content" Oct 08 08:47:24 crc kubenswrapper[4958]: E1008 08:47:24.283039 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="de48192a-1fb9-4e3b-a917-335a2be02e36" containerName="extract-utilities" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.283047 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="de48192a-1fb9-4e3b-a917-335a2be02e36" containerName="extract-utilities" Oct 08 08:47:24 crc kubenswrapper[4958]: E1008 08:47:24.283058 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a10c7d3-c479-4432-9d5c-89b4c5eae5ae" containerName="reboot-os-openstack-openstack-cell1" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.283066 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a10c7d3-c479-4432-9d5c-89b4c5eae5ae" containerName="reboot-os-openstack-openstack-cell1" Oct 08 08:47:24 crc kubenswrapper[4958]: E1008 08:47:24.283082 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbd1eb2-c264-4f12-b437-8203ec9db720" containerName="registry-server" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.283089 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbd1eb2-c264-4f12-b437-8203ec9db720" containerName="registry-server" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.283354 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="de48192a-1fb9-4e3b-a917-335a2be02e36" containerName="registry-server" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.283384 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a10c7d3-c479-4432-9d5c-89b4c5eae5ae" containerName="reboot-os-openstack-openstack-cell1" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.283403 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dbd1eb2-c264-4f12-b437-8203ec9db720" containerName="registry-server" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.284402 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.287127 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.287363 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.287498 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-libvirt-default-certs-0" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.287613 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.287873 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-neutron-metadata-default-certs-0" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.288069 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-ovn-default-certs-0" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.288214 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.288429 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-telemetry-default-certs-0" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.295893 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-l6x9j"] Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.386900 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.386982 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.387302 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.387416 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.387469 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-dhcp-combined-ca-bundle\") pod 
\"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.387571 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.387655 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.387714 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.387839 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: 
\"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.387901 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-ssh-key\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.387994 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.388051 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.388093 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-inventory\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.388413 
4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.388484 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2r79\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-kube-api-access-j2r79\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.489500 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.489541 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-ssh-key\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.489565 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.489592 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.489616 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-inventory\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.489670 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.489702 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2r79\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-kube-api-access-j2r79\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: 
\"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.489739 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.489776 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.489836 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.489860 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.489878 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.489897 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.489926 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.489964 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.496369 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-ssh-key\") pod 
\"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.496450 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.496392 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-inventory\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.497053 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.497188 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 
08:47:24.497319 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.497686 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.498011 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.498359 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.498514 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.498527 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.500176 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.500450 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.504890 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " 
pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.508869 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2r79\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-kube-api-access-j2r79\") pod \"install-certs-openstack-openstack-cell1-l6x9j\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:24 crc kubenswrapper[4958]: I1008 08:47:24.604768 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:47:25 crc kubenswrapper[4958]: I1008 08:47:25.191999 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-l6x9j"] Oct 08 08:47:26 crc kubenswrapper[4958]: I1008 08:47:26.161537 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" event={"ID":"836d37ec-3c7c-406c-85f5-0f825f3ea402","Type":"ContainerStarted","Data":"ea1afd4d6c2f07650339e41fdae85c3200c8f1ad896d299fe977c09fd4257753"} Oct 08 08:47:26 crc kubenswrapper[4958]: I1008 08:47:26.162364 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" event={"ID":"836d37ec-3c7c-406c-85f5-0f825f3ea402","Type":"ContainerStarted","Data":"314116b1ec40928364feb2561cee6b583a03c0912561bb0d9057576476c65f27"} Oct 08 08:47:26 crc kubenswrapper[4958]: I1008 08:47:26.202124 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" podStartSLOduration=1.733387394 podStartE2EDuration="2.202100248s" podCreationTimestamp="2025-10-08 08:47:24 +0000 UTC" firstStartedPulling="2025-10-08 08:47:25.194828384 +0000 UTC m=+7988.324520985" lastFinishedPulling="2025-10-08 08:47:25.663541198 +0000 UTC 
m=+7988.793233839" observedRunningTime="2025-10-08 08:47:26.191024336 +0000 UTC m=+7989.320716937" watchObservedRunningTime="2025-10-08 08:47:26.202100248 +0000 UTC m=+7989.331792869" Oct 08 08:47:26 crc kubenswrapper[4958]: I1008 08:47:26.711534 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:26 crc kubenswrapper[4958]: I1008 08:47:26.763280 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:26 crc kubenswrapper[4958]: I1008 08:47:26.981149 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jcqxc"] Oct 08 08:47:28 crc kubenswrapper[4958]: I1008 08:47:28.183294 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jcqxc" podUID="4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" containerName="registry-server" containerID="cri-o://ab92e32ed3437cd7b181ab6e6effa5ef739c6e097fc1c9213ac5a1522a6049ea" gracePeriod=2 Oct 08 08:47:28 crc kubenswrapper[4958]: I1008 08:47:28.750585 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:28 crc kubenswrapper[4958]: I1008 08:47:28.794751 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-utilities\") pod \"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca\" (UID: \"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca\") " Oct 08 08:47:28 crc kubenswrapper[4958]: I1008 08:47:28.795044 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-catalog-content\") pod \"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca\" (UID: \"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca\") " Oct 08 08:47:28 crc kubenswrapper[4958]: I1008 08:47:28.795098 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd99w\" (UniqueName: \"kubernetes.io/projected/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-kube-api-access-sd99w\") pod \"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca\" (UID: \"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca\") " Oct 08 08:47:28 crc kubenswrapper[4958]: I1008 08:47:28.795789 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-utilities" (OuterVolumeSpecName: "utilities") pod "4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" (UID: "4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:47:28 crc kubenswrapper[4958]: I1008 08:47:28.801287 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-kube-api-access-sd99w" (OuterVolumeSpecName: "kube-api-access-sd99w") pod "4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" (UID: "4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca"). InnerVolumeSpecName "kube-api-access-sd99w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:47:28 crc kubenswrapper[4958]: I1008 08:47:28.884741 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" (UID: "4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:47:28 crc kubenswrapper[4958]: I1008 08:47:28.897980 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:47:28 crc kubenswrapper[4958]: I1008 08:47:28.898074 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:47:28 crc kubenswrapper[4958]: I1008 08:47:28.898133 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd99w\" (UniqueName: \"kubernetes.io/projected/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca-kube-api-access-sd99w\") on node \"crc\" DevicePath \"\"" Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.202920 4958 generic.go:334] "Generic (PLEG): container finished" podID="4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" containerID="ab92e32ed3437cd7b181ab6e6effa5ef739c6e097fc1c9213ac5a1522a6049ea" exitCode=0 Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.202972 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jcqxc" event={"ID":"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca","Type":"ContainerDied","Data":"ab92e32ed3437cd7b181ab6e6effa5ef739c6e097fc1c9213ac5a1522a6049ea"} Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.202999 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-jcqxc" event={"ID":"4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca","Type":"ContainerDied","Data":"150b77fd96edd258e8f0ad638a97c31629722143767888f94fab3d23a8979255"} Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.203015 4958 scope.go:117] "RemoveContainer" containerID="ab92e32ed3437cd7b181ab6e6effa5ef739c6e097fc1c9213ac5a1522a6049ea" Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.203068 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jcqxc" Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.244874 4958 scope.go:117] "RemoveContainer" containerID="f218788e13f21f80b8fa8b8403c6fff9d77ef3881560b2b9b7a7e77cc388bfac" Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.253432 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jcqxc"] Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.261387 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jcqxc"] Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.271024 4958 scope.go:117] "RemoveContainer" containerID="934466524c93369cde3b3e7a3c41d229c6bc9dbc2bd623742b332f789c99f06e" Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.359476 4958 scope.go:117] "RemoveContainer" containerID="ab92e32ed3437cd7b181ab6e6effa5ef739c6e097fc1c9213ac5a1522a6049ea" Oct 08 08:47:29 crc kubenswrapper[4958]: E1008 08:47:29.359981 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab92e32ed3437cd7b181ab6e6effa5ef739c6e097fc1c9213ac5a1522a6049ea\": container with ID starting with ab92e32ed3437cd7b181ab6e6effa5ef739c6e097fc1c9213ac5a1522a6049ea not found: ID does not exist" containerID="ab92e32ed3437cd7b181ab6e6effa5ef739c6e097fc1c9213ac5a1522a6049ea" Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.360020 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab92e32ed3437cd7b181ab6e6effa5ef739c6e097fc1c9213ac5a1522a6049ea"} err="failed to get container status \"ab92e32ed3437cd7b181ab6e6effa5ef739c6e097fc1c9213ac5a1522a6049ea\": rpc error: code = NotFound desc = could not find container \"ab92e32ed3437cd7b181ab6e6effa5ef739c6e097fc1c9213ac5a1522a6049ea\": container with ID starting with ab92e32ed3437cd7b181ab6e6effa5ef739c6e097fc1c9213ac5a1522a6049ea not found: ID does not exist" Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.360049 4958 scope.go:117] "RemoveContainer" containerID="f218788e13f21f80b8fa8b8403c6fff9d77ef3881560b2b9b7a7e77cc388bfac" Oct 08 08:47:29 crc kubenswrapper[4958]: E1008 08:47:29.360343 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f218788e13f21f80b8fa8b8403c6fff9d77ef3881560b2b9b7a7e77cc388bfac\": container with ID starting with f218788e13f21f80b8fa8b8403c6fff9d77ef3881560b2b9b7a7e77cc388bfac not found: ID does not exist" containerID="f218788e13f21f80b8fa8b8403c6fff9d77ef3881560b2b9b7a7e77cc388bfac" Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.360365 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f218788e13f21f80b8fa8b8403c6fff9d77ef3881560b2b9b7a7e77cc388bfac"} err="failed to get container status \"f218788e13f21f80b8fa8b8403c6fff9d77ef3881560b2b9b7a7e77cc388bfac\": rpc error: code = NotFound desc = could not find container \"f218788e13f21f80b8fa8b8403c6fff9d77ef3881560b2b9b7a7e77cc388bfac\": container with ID starting with f218788e13f21f80b8fa8b8403c6fff9d77ef3881560b2b9b7a7e77cc388bfac not found: ID does not exist" Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.360380 4958 scope.go:117] "RemoveContainer" containerID="934466524c93369cde3b3e7a3c41d229c6bc9dbc2bd623742b332f789c99f06e" Oct 08 08:47:29 crc kubenswrapper[4958]: E1008 
08:47:29.360784 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"934466524c93369cde3b3e7a3c41d229c6bc9dbc2bd623742b332f789c99f06e\": container with ID starting with 934466524c93369cde3b3e7a3c41d229c6bc9dbc2bd623742b332f789c99f06e not found: ID does not exist" containerID="934466524c93369cde3b3e7a3c41d229c6bc9dbc2bd623742b332f789c99f06e" Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.360809 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"934466524c93369cde3b3e7a3c41d229c6bc9dbc2bd623742b332f789c99f06e"} err="failed to get container status \"934466524c93369cde3b3e7a3c41d229c6bc9dbc2bd623742b332f789c99f06e\": rpc error: code = NotFound desc = could not find container \"934466524c93369cde3b3e7a3c41d229c6bc9dbc2bd623742b332f789c99f06e\": container with ID starting with 934466524c93369cde3b3e7a3c41d229c6bc9dbc2bd623742b332f789c99f06e not found: ID does not exist" Oct 08 08:47:29 crc kubenswrapper[4958]: I1008 08:47:29.589114 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" path="/var/lib/kubelet/pods/4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca/volumes" Oct 08 08:48:05 crc kubenswrapper[4958]: E1008 08:48:05.075101 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod836d37ec_3c7c_406c_85f5_0f825f3ea402.slice/crio-ea1afd4d6c2f07650339e41fdae85c3200c8f1ad896d299fe977c09fd4257753.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod836d37ec_3c7c_406c_85f5_0f825f3ea402.slice/crio-conmon-ea1afd4d6c2f07650339e41fdae85c3200c8f1ad896d299fe977c09fd4257753.scope\": RecentStats: unable to find data in memory cache]" Oct 08 08:48:05 crc kubenswrapper[4958]: I1008 08:48:05.713887 4958 
generic.go:334] "Generic (PLEG): container finished" podID="836d37ec-3c7c-406c-85f5-0f825f3ea402" containerID="ea1afd4d6c2f07650339e41fdae85c3200c8f1ad896d299fe977c09fd4257753" exitCode=0 Oct 08 08:48:05 crc kubenswrapper[4958]: I1008 08:48:05.713930 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" event={"ID":"836d37ec-3c7c-406c-85f5-0f825f3ea402","Type":"ContainerDied","Data":"ea1afd4d6c2f07650339e41fdae85c3200c8f1ad896d299fe977c09fd4257753"} Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.273535 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.322596 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-sriov-combined-ca-bundle\") pod \"836d37ec-3c7c-406c-85f5-0f825f3ea402\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.322661 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-bootstrap-combined-ca-bundle\") pod \"836d37ec-3c7c-406c-85f5-0f825f3ea402\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.322698 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-libvirt-combined-ca-bundle\") pod \"836d37ec-3c7c-406c-85f5-0f825f3ea402\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.322756 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-telemetry-combined-ca-bundle\") pod \"836d37ec-3c7c-406c-85f5-0f825f3ea402\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.322786 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-telemetry-default-certs-0\") pod \"836d37ec-3c7c-406c-85f5-0f825f3ea402\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.322818 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2r79\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-kube-api-access-j2r79\") pod \"836d37ec-3c7c-406c-85f5-0f825f3ea402\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.322852 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-ovn-default-certs-0\") pod \"836d37ec-3c7c-406c-85f5-0f825f3ea402\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.322874 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-ssh-key\") pod \"836d37ec-3c7c-406c-85f5-0f825f3ea402\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.322919 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-inventory\") 
pod \"836d37ec-3c7c-406c-85f5-0f825f3ea402\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.322963 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-neutron-metadata-default-certs-0\") pod \"836d37ec-3c7c-406c-85f5-0f825f3ea402\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.323018 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-libvirt-default-certs-0\") pod \"836d37ec-3c7c-406c-85f5-0f825f3ea402\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.323054 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-metadata-combined-ca-bundle\") pod \"836d37ec-3c7c-406c-85f5-0f825f3ea402\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.323090 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-dhcp-combined-ca-bundle\") pod \"836d37ec-3c7c-406c-85f5-0f825f3ea402\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.323115 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-ovn-combined-ca-bundle\") pod 
\"836d37ec-3c7c-406c-85f5-0f825f3ea402\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.323153 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-nova-combined-ca-bundle\") pod \"836d37ec-3c7c-406c-85f5-0f825f3ea402\" (UID: \"836d37ec-3c7c-406c-85f5-0f825f3ea402\") " Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.330040 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-kube-api-access-j2r79" (OuterVolumeSpecName: "kube-api-access-j2r79") pod "836d37ec-3c7c-406c-85f5-0f825f3ea402" (UID: "836d37ec-3c7c-406c-85f5-0f825f3ea402"). InnerVolumeSpecName "kube-api-access-j2r79". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.335290 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "836d37ec-3c7c-406c-85f5-0f825f3ea402" (UID: "836d37ec-3c7c-406c-85f5-0f825f3ea402"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.338217 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "836d37ec-3c7c-406c-85f5-0f825f3ea402" (UID: "836d37ec-3c7c-406c-85f5-0f825f3ea402"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.340628 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-neutron-metadata-default-certs-0") pod "836d37ec-3c7c-406c-85f5-0f825f3ea402" (UID: "836d37ec-3c7c-406c-85f5-0f825f3ea402"). InnerVolumeSpecName "openstack-cell1-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.343403 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-ovn-default-certs-0") pod "836d37ec-3c7c-406c-85f5-0f825f3ea402" (UID: "836d37ec-3c7c-406c-85f5-0f825f3ea402"). InnerVolumeSpecName "openstack-cell1-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.345168 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-libvirt-default-certs-0") pod "836d37ec-3c7c-406c-85f5-0f825f3ea402" (UID: "836d37ec-3c7c-406c-85f5-0f825f3ea402"). InnerVolumeSpecName "openstack-cell1-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.342387 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "836d37ec-3c7c-406c-85f5-0f825f3ea402" (UID: "836d37ec-3c7c-406c-85f5-0f825f3ea402"). 
InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.351292 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "836d37ec-3c7c-406c-85f5-0f825f3ea402" (UID: "836d37ec-3c7c-406c-85f5-0f825f3ea402"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.351876 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "836d37ec-3c7c-406c-85f5-0f825f3ea402" (UID: "836d37ec-3c7c-406c-85f5-0f825f3ea402"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.369429 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "836d37ec-3c7c-406c-85f5-0f825f3ea402" (UID: "836d37ec-3c7c-406c-85f5-0f825f3ea402"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.373784 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-telemetry-default-certs-0") pod "836d37ec-3c7c-406c-85f5-0f825f3ea402" (UID: "836d37ec-3c7c-406c-85f5-0f825f3ea402"). 
InnerVolumeSpecName "openstack-cell1-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.421840 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "836d37ec-3c7c-406c-85f5-0f825f3ea402" (UID: "836d37ec-3c7c-406c-85f5-0f825f3ea402"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.421888 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "836d37ec-3c7c-406c-85f5-0f825f3ea402" (UID: "836d37ec-3c7c-406c-85f5-0f825f3ea402"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.425333 4958 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.425362 4958 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.425372 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.425383 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2r79\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-kube-api-access-j2r79\") on node \"crc\" DevicePath \"\"" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.425393 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.425403 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.425411 4958 reconciler_common.go:293] "Volume detached for 
volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/836d37ec-3c7c-406c-85f5-0f825f3ea402-openstack-cell1-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.425420 4958 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.425429 4958 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.425437 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.425445 4958 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.425453 4958 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.425462 4958 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:48:07 crc 
kubenswrapper[4958]: I1008 08:48:07.447164 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "836d37ec-3c7c-406c-85f5-0f825f3ea402" (UID: "836d37ec-3c7c-406c-85f5-0f825f3ea402"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.485106 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-inventory" (OuterVolumeSpecName: "inventory") pod "836d37ec-3c7c-406c-85f5-0f825f3ea402" (UID: "836d37ec-3c7c-406c-85f5-0f825f3ea402"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.529772 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.529805 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/836d37ec-3c7c-406c-85f5-0f825f3ea402-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.738121 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" event={"ID":"836d37ec-3c7c-406c-85f5-0f825f3ea402","Type":"ContainerDied","Data":"314116b1ec40928364feb2561cee6b583a03c0912561bb0d9057576476c65f27"} Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.738545 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="314116b1ec40928364feb2561cee6b583a03c0912561bb0d9057576476c65f27" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.738247 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-l6x9j" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.838594 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-fxgb2"] Oct 08 08:48:07 crc kubenswrapper[4958]: E1008 08:48:07.839409 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" containerName="registry-server" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.839515 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" containerName="registry-server" Oct 08 08:48:07 crc kubenswrapper[4958]: E1008 08:48:07.839622 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" containerName="extract-content" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.839700 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" containerName="extract-content" Oct 08 08:48:07 crc kubenswrapper[4958]: E1008 08:48:07.839796 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836d37ec-3c7c-406c-85f5-0f825f3ea402" containerName="install-certs-openstack-openstack-cell1" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.839862 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="836d37ec-3c7c-406c-85f5-0f825f3ea402" containerName="install-certs-openstack-openstack-cell1" Oct 08 08:48:07 crc kubenswrapper[4958]: E1008 08:48:07.839967 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" containerName="extract-utilities" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.840036 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" containerName="extract-utilities" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.840325 4958 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="836d37ec-3c7c-406c-85f5-0f825f3ea402" containerName="install-certs-openstack-openstack-cell1" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.840423 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef9b410-d82f-4b33-becd-e7dd5ef2a3ca" containerName="registry-server" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.841300 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.844257 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.844388 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.844670 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.844682 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.851293 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-fxgb2"] Oct 08 08:48:07 crc kubenswrapper[4958]: I1008 08:48:07.853570 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.039740 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-fxgb2\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " 
pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.040722 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-inventory\") pod \"ovn-openstack-openstack-cell1-fxgb2\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.041026 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ssh-key\") pod \"ovn-openstack-openstack-cell1-fxgb2\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.041328 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vghdf\" (UniqueName: \"kubernetes.io/projected/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-kube-api-access-vghdf\") pod \"ovn-openstack-openstack-cell1-fxgb2\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.041539 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-fxgb2\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.144232 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-fxgb2\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.144587 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-inventory\") pod \"ovn-openstack-openstack-cell1-fxgb2\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.144811 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ssh-key\") pod \"ovn-openstack-openstack-cell1-fxgb2\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.145138 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vghdf\" (UniqueName: \"kubernetes.io/projected/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-kube-api-access-vghdf\") pod \"ovn-openstack-openstack-cell1-fxgb2\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.145328 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-fxgb2\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.145828 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-fxgb2\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.154073 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-fxgb2\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.154494 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ssh-key\") pod \"ovn-openstack-openstack-cell1-fxgb2\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.155703 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-inventory\") pod \"ovn-openstack-openstack-cell1-fxgb2\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.175852 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vghdf\" (UniqueName: \"kubernetes.io/projected/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-kube-api-access-vghdf\") pod \"ovn-openstack-openstack-cell1-fxgb2\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:08 crc kubenswrapper[4958]: I1008 08:48:08.465148 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:48:09 crc kubenswrapper[4958]: I1008 08:48:09.136734 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-fxgb2"] Oct 08 08:48:09 crc kubenswrapper[4958]: I1008 08:48:09.768315 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-fxgb2" event={"ID":"a1d1afc2-f4df-4102-9be5-a39b40b9ee65","Type":"ContainerStarted","Data":"b484fb143cf8f8a7982b188b0bd68dfd9e86e3e9d652f9aed51ce67de472352c"} Oct 08 08:48:10 crc kubenswrapper[4958]: I1008 08:48:10.779667 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-fxgb2" event={"ID":"a1d1afc2-f4df-4102-9be5-a39b40b9ee65","Type":"ContainerStarted","Data":"fddc8a1d04dca1bc760c9848cfb77ddf579e7cfe68dd4549005f04876ac65101"} Oct 08 08:49:18 crc kubenswrapper[4958]: I1008 08:49:18.651160 4958 generic.go:334] "Generic (PLEG): container finished" podID="a1d1afc2-f4df-4102-9be5-a39b40b9ee65" containerID="fddc8a1d04dca1bc760c9848cfb77ddf579e7cfe68dd4549005f04876ac65101" exitCode=0 Oct 08 08:49:18 crc kubenswrapper[4958]: I1008 08:49:18.651316 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-fxgb2" event={"ID":"a1d1afc2-f4df-4102-9be5-a39b40b9ee65","Type":"ContainerDied","Data":"fddc8a1d04dca1bc760c9848cfb77ddf579e7cfe68dd4549005f04876ac65101"} Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.140868 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.285275 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ovncontroller-config-0\") pod \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.286011 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-inventory\") pod \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.286194 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vghdf\" (UniqueName: \"kubernetes.io/projected/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-kube-api-access-vghdf\") pod \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.286413 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ssh-key\") pod \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.286762 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ovn-combined-ca-bundle\") pod \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\" (UID: \"a1d1afc2-f4df-4102-9be5-a39b40b9ee65\") " Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.292030 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a1d1afc2-f4df-4102-9be5-a39b40b9ee65" (UID: "a1d1afc2-f4df-4102-9be5-a39b40b9ee65"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.294346 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-kube-api-access-vghdf" (OuterVolumeSpecName: "kube-api-access-vghdf") pod "a1d1afc2-f4df-4102-9be5-a39b40b9ee65" (UID: "a1d1afc2-f4df-4102-9be5-a39b40b9ee65"). InnerVolumeSpecName "kube-api-access-vghdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.323369 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-inventory" (OuterVolumeSpecName: "inventory") pod "a1d1afc2-f4df-4102-9be5-a39b40b9ee65" (UID: "a1d1afc2-f4df-4102-9be5-a39b40b9ee65"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.329365 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "a1d1afc2-f4df-4102-9be5-a39b40b9ee65" (UID: "a1d1afc2-f4df-4102-9be5-a39b40b9ee65"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.353664 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a1d1afc2-f4df-4102-9be5-a39b40b9ee65" (UID: "a1d1afc2-f4df-4102-9be5-a39b40b9ee65"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.390115 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.390160 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vghdf\" (UniqueName: \"kubernetes.io/projected/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-kube-api-access-vghdf\") on node \"crc\" DevicePath \"\"" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.390174 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.390188 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.390201 4958 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a1d1afc2-f4df-4102-9be5-a39b40b9ee65-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.680740 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-fxgb2" 
event={"ID":"a1d1afc2-f4df-4102-9be5-a39b40b9ee65","Type":"ContainerDied","Data":"b484fb143cf8f8a7982b188b0bd68dfd9e86e3e9d652f9aed51ce67de472352c"} Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.680788 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b484fb143cf8f8a7982b188b0bd68dfd9e86e3e9d652f9aed51ce67de472352c" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.680817 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-fxgb2" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.802758 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-529p8"] Oct 08 08:49:20 crc kubenswrapper[4958]: E1008 08:49:20.803698 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d1afc2-f4df-4102-9be5-a39b40b9ee65" containerName="ovn-openstack-openstack-cell1" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.803728 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d1afc2-f4df-4102-9be5-a39b40b9ee65" containerName="ovn-openstack-openstack-cell1" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.804027 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d1afc2-f4df-4102-9be5-a39b40b9ee65" containerName="ovn-openstack-openstack-cell1" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.804997 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.810424 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.810657 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.810733 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.811059 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.811078 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.811088 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.826181 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-529p8"] Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.903333 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.903384 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.903426 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.903878 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.904026 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:20 crc kubenswrapper[4958]: I1008 08:49:20.904113 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ch56\" (UniqueName: \"kubernetes.io/projected/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-kube-api-access-8ch56\") 
pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:21 crc kubenswrapper[4958]: I1008 08:49:21.005587 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:21 crc kubenswrapper[4958]: I1008 08:49:21.005640 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:21 crc kubenswrapper[4958]: I1008 08:49:21.005666 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:21 crc kubenswrapper[4958]: I1008 08:49:21.005751 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:21 crc kubenswrapper[4958]: I1008 
08:49:21.005787 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:21 crc kubenswrapper[4958]: I1008 08:49:21.005813 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ch56\" (UniqueName: \"kubernetes.io/projected/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-kube-api-access-8ch56\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:21 crc kubenswrapper[4958]: I1008 08:49:21.010338 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:21 crc kubenswrapper[4958]: I1008 08:49:21.010477 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:21 crc kubenswrapper[4958]: I1008 08:49:21.010671 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:21 crc kubenswrapper[4958]: I1008 08:49:21.023082 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:21 crc kubenswrapper[4958]: I1008 08:49:21.026864 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:21 crc kubenswrapper[4958]: I1008 08:49:21.041130 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ch56\" (UniqueName: \"kubernetes.io/projected/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-kube-api-access-8ch56\") pod \"neutron-metadata-openstack-openstack-cell1-529p8\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:21 crc kubenswrapper[4958]: I1008 08:49:21.140336 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:49:21 crc kubenswrapper[4958]: I1008 08:49:21.803480 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-529p8"] Oct 08 08:49:21 crc kubenswrapper[4958]: W1008 08:49:21.805290 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod241e113b_bf3e_4beb_8ee1_ea9a67dd1ebb.slice/crio-34144c1ae16a858d794092cd1732a31f69b68834110bca5ade3060d70e4d79dc WatchSource:0}: Error finding container 34144c1ae16a858d794092cd1732a31f69b68834110bca5ade3060d70e4d79dc: Status 404 returned error can't find the container with id 34144c1ae16a858d794092cd1732a31f69b68834110bca5ade3060d70e4d79dc Oct 08 08:49:22 crc kubenswrapper[4958]: I1008 08:49:22.704415 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" event={"ID":"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb","Type":"ContainerStarted","Data":"34144c1ae16a858d794092cd1732a31f69b68834110bca5ade3060d70e4d79dc"} Oct 08 08:49:23 crc kubenswrapper[4958]: I1008 08:49:23.722211 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" event={"ID":"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb","Type":"ContainerStarted","Data":"26386066b409bc1d7f1d7863651079493bdcd60bef12a9d27d04defb62d7215c"} Oct 08 08:49:23 crc kubenswrapper[4958]: I1008 08:49:23.750190 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" podStartSLOduration=2.440777626 podStartE2EDuration="3.750170776s" podCreationTimestamp="2025-10-08 08:49:20 +0000 UTC" firstStartedPulling="2025-10-08 08:49:21.807407318 +0000 UTC m=+8104.937099919" lastFinishedPulling="2025-10-08 08:49:23.116800428 +0000 UTC m=+8106.246493069" observedRunningTime="2025-10-08 
08:49:23.746770903 +0000 UTC m=+8106.876463544" watchObservedRunningTime="2025-10-08 08:49:23.750170776 +0000 UTC m=+8106.879863377" Oct 08 08:49:36 crc kubenswrapper[4958]: I1008 08:49:36.844730 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:49:36 crc kubenswrapper[4958]: I1008 08:49:36.845813 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:50:06 crc kubenswrapper[4958]: I1008 08:50:06.845652 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:50:06 crc kubenswrapper[4958]: I1008 08:50:06.846378 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:50:20 crc kubenswrapper[4958]: I1008 08:50:20.484987 4958 generic.go:334] "Generic (PLEG): container finished" podID="241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb" containerID="26386066b409bc1d7f1d7863651079493bdcd60bef12a9d27d04defb62d7215c" exitCode=0 Oct 08 08:50:20 crc kubenswrapper[4958]: I1008 08:50:20.485137 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" event={"ID":"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb","Type":"ContainerDied","Data":"26386066b409bc1d7f1d7863651079493bdcd60bef12a9d27d04defb62d7215c"} Oct 08 08:50:21 crc kubenswrapper[4958]: I1008 08:50:21.985590 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.098576 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-neutron-metadata-combined-ca-bundle\") pod \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.098720 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-inventory\") pod \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.098780 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-nova-metadata-neutron-config-0\") pod \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.098861 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " Oct 08 08:50:22 crc kubenswrapper[4958]: 
I1008 08:50:22.098990 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ch56\" (UniqueName: \"kubernetes.io/projected/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-kube-api-access-8ch56\") pod \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.099017 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-ssh-key\") pod \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\" (UID: \"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb\") " Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.104257 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-kube-api-access-8ch56" (OuterVolumeSpecName: "kube-api-access-8ch56") pod "241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb" (UID: "241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb"). InnerVolumeSpecName "kube-api-access-8ch56". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.104775 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb" (UID: "241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.128427 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-inventory" (OuterVolumeSpecName: "inventory") pod "241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb" (UID: "241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.129276 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb" (UID: "241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.135827 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb" (UID: "241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.137457 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb" (UID: "241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.204443 4958 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.204490 4958 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.204506 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ch56\" (UniqueName: \"kubernetes.io/projected/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-kube-api-access-8ch56\") on node \"crc\" DevicePath \"\"" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.204517 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.204641 4958 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.204653 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.512295 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" 
event={"ID":"241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb","Type":"ContainerDied","Data":"34144c1ae16a858d794092cd1732a31f69b68834110bca5ade3060d70e4d79dc"} Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.512553 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34144c1ae16a858d794092cd1732a31f69b68834110bca5ade3060d70e4d79dc" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.512560 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-529p8" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.613237 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-pndgl"] Oct 08 08:50:22 crc kubenswrapper[4958]: E1008 08:50:22.613897 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb" containerName="neutron-metadata-openstack-openstack-cell1" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.613928 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb" containerName="neutron-metadata-openstack-openstack-cell1" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.614264 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb" containerName="neutron-metadata-openstack-openstack-cell1" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.615373 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.620244 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.620424 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.620545 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.620659 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.620761 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.639385 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-pndgl"] Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.721467 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-pndgl\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.721681 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkwjm\" (UniqueName: \"kubernetes.io/projected/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-kube-api-access-rkwjm\") pod \"libvirt-openstack-openstack-cell1-pndgl\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " 
pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.721739 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-ssh-key\") pod \"libvirt-openstack-openstack-cell1-pndgl\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.721813 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-pndgl\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.721900 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-inventory\") pod \"libvirt-openstack-openstack-cell1-pndgl\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.824017 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkwjm\" (UniqueName: \"kubernetes.io/projected/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-kube-api-access-rkwjm\") pod \"libvirt-openstack-openstack-cell1-pndgl\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.824119 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-ssh-key\") pod 
\"libvirt-openstack-openstack-cell1-pndgl\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.824217 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-pndgl\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.824334 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-inventory\") pod \"libvirt-openstack-openstack-cell1-pndgl\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.824422 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-pndgl\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.830292 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-inventory\") pod \"libvirt-openstack-openstack-cell1-pndgl\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.831013 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-ssh-key\") pod \"libvirt-openstack-openstack-cell1-pndgl\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.832085 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-pndgl\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.835377 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-pndgl\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.848815 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkwjm\" (UniqueName: \"kubernetes.io/projected/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-kube-api-access-rkwjm\") pod \"libvirt-openstack-openstack-cell1-pndgl\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:22 crc kubenswrapper[4958]: I1008 08:50:22.991214 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:50:23 crc kubenswrapper[4958]: I1008 08:50:23.422566 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-pndgl"] Oct 08 08:50:23 crc kubenswrapper[4958]: W1008 08:50:23.424409 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfde2540a_73ef_4a3b_8efd_b7b6331fa25b.slice/crio-a18dae8d5dd8c67b79fa8773df8ab3e989a90b231572d9f7720d314a547e6450 WatchSource:0}: Error finding container a18dae8d5dd8c67b79fa8773df8ab3e989a90b231572d9f7720d314a547e6450: Status 404 returned error can't find the container with id a18dae8d5dd8c67b79fa8773df8ab3e989a90b231572d9f7720d314a547e6450 Oct 08 08:50:23 crc kubenswrapper[4958]: I1008 08:50:23.426717 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 08:50:23 crc kubenswrapper[4958]: I1008 08:50:23.522780 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-pndgl" event={"ID":"fde2540a-73ef-4a3b-8efd-b7b6331fa25b","Type":"ContainerStarted","Data":"a18dae8d5dd8c67b79fa8773df8ab3e989a90b231572d9f7720d314a547e6450"} Oct 08 08:50:24 crc kubenswrapper[4958]: I1008 08:50:24.537826 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-pndgl" event={"ID":"fde2540a-73ef-4a3b-8efd-b7b6331fa25b","Type":"ContainerStarted","Data":"dc482fb8ed29bcabd4b683c6cabf14266fc20b5fa2895835ebd79171e9ac84f4"} Oct 08 08:50:24 crc kubenswrapper[4958]: I1008 08:50:24.572142 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-pndgl" podStartSLOduration=2.019122145 podStartE2EDuration="2.572116928s" podCreationTimestamp="2025-10-08 08:50:22 +0000 UTC" firstStartedPulling="2025-10-08 08:50:23.426502419 +0000 UTC 
m=+8166.556195020" lastFinishedPulling="2025-10-08 08:50:23.979497202 +0000 UTC m=+8167.109189803" observedRunningTime="2025-10-08 08:50:24.569443635 +0000 UTC m=+8167.699136246" watchObservedRunningTime="2025-10-08 08:50:24.572116928 +0000 UTC m=+8167.701809569" Oct 08 08:50:36 crc kubenswrapper[4958]: I1008 08:50:36.845031 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:50:36 crc kubenswrapper[4958]: I1008 08:50:36.846969 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:50:36 crc kubenswrapper[4958]: I1008 08:50:36.847138 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 08:50:36 crc kubenswrapper[4958]: I1008 08:50:36.848277 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 08:50:36 crc kubenswrapper[4958]: I1008 08:50:36.848488 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" 
containerID="cri-o://8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" gracePeriod=600 Oct 08 08:50:36 crc kubenswrapper[4958]: E1008 08:50:36.974714 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:50:37 crc kubenswrapper[4958]: I1008 08:50:37.718665 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" exitCode=0 Oct 08 08:50:37 crc kubenswrapper[4958]: I1008 08:50:37.718999 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd"} Oct 08 08:50:37 crc kubenswrapper[4958]: I1008 08:50:37.719056 4958 scope.go:117] "RemoveContainer" containerID="9792847903de8882201e5c6f9c09f7860e8eac25dd2466f7b851b233acc9c4e0" Oct 08 08:50:37 crc kubenswrapper[4958]: I1008 08:50:37.721915 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:50:37 crc kubenswrapper[4958]: E1008 08:50:37.723897 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" 
podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:50:48 crc kubenswrapper[4958]: I1008 08:50:48.576483 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:50:48 crc kubenswrapper[4958]: E1008 08:50:48.577397 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:51:03 crc kubenswrapper[4958]: I1008 08:51:03.578469 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:51:03 crc kubenswrapper[4958]: E1008 08:51:03.579217 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:51:17 crc kubenswrapper[4958]: I1008 08:51:17.583688 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:51:17 crc kubenswrapper[4958]: E1008 08:51:17.584427 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:51:28 crc kubenswrapper[4958]: I1008 08:51:28.577790 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:51:28 crc kubenswrapper[4958]: E1008 08:51:28.579164 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:51:39 crc kubenswrapper[4958]: I1008 08:51:39.577179 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:51:39 crc kubenswrapper[4958]: E1008 08:51:39.578534 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:51:51 crc kubenswrapper[4958]: I1008 08:51:51.579449 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:51:51 crc kubenswrapper[4958]: E1008 08:51:51.580153 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:52:04 crc kubenswrapper[4958]: I1008 08:52:04.576979 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:52:04 crc kubenswrapper[4958]: E1008 08:52:04.577993 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:52:15 crc kubenswrapper[4958]: I1008 08:52:15.577780 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:52:15 crc kubenswrapper[4958]: E1008 08:52:15.578961 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:52:27 crc kubenswrapper[4958]: I1008 08:52:27.585596 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:52:27 crc kubenswrapper[4958]: E1008 08:52:27.586759 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:52:34 crc kubenswrapper[4958]: I1008 08:52:34.540668 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7kggv"] Oct 08 08:52:34 crc kubenswrapper[4958]: I1008 08:52:34.545704 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:34 crc kubenswrapper[4958]: I1008 08:52:34.556862 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7kggv"] Oct 08 08:52:34 crc kubenswrapper[4958]: I1008 08:52:34.630062 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsjnb\" (UniqueName: \"kubernetes.io/projected/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-kube-api-access-tsjnb\") pod \"community-operators-7kggv\" (UID: \"63daf2fc-8f3b-41a9-bd80-eb920b8988b7\") " pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:34 crc kubenswrapper[4958]: I1008 08:52:34.630134 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-catalog-content\") pod \"community-operators-7kggv\" (UID: \"63daf2fc-8f3b-41a9-bd80-eb920b8988b7\") " pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:34 crc kubenswrapper[4958]: I1008 08:52:34.630888 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-utilities\") pod \"community-operators-7kggv\" (UID: \"63daf2fc-8f3b-41a9-bd80-eb920b8988b7\") " 
pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:34 crc kubenswrapper[4958]: I1008 08:52:34.733282 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-utilities\") pod \"community-operators-7kggv\" (UID: \"63daf2fc-8f3b-41a9-bd80-eb920b8988b7\") " pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:34 crc kubenswrapper[4958]: I1008 08:52:34.733384 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsjnb\" (UniqueName: \"kubernetes.io/projected/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-kube-api-access-tsjnb\") pod \"community-operators-7kggv\" (UID: \"63daf2fc-8f3b-41a9-bd80-eb920b8988b7\") " pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:34 crc kubenswrapper[4958]: I1008 08:52:34.733461 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-catalog-content\") pod \"community-operators-7kggv\" (UID: \"63daf2fc-8f3b-41a9-bd80-eb920b8988b7\") " pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:34 crc kubenswrapper[4958]: I1008 08:52:34.733865 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-utilities\") pod \"community-operators-7kggv\" (UID: \"63daf2fc-8f3b-41a9-bd80-eb920b8988b7\") " pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:34 crc kubenswrapper[4958]: I1008 08:52:34.733865 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-catalog-content\") pod \"community-operators-7kggv\" (UID: \"63daf2fc-8f3b-41a9-bd80-eb920b8988b7\") " 
pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:34 crc kubenswrapper[4958]: I1008 08:52:34.766703 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsjnb\" (UniqueName: \"kubernetes.io/projected/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-kube-api-access-tsjnb\") pod \"community-operators-7kggv\" (UID: \"63daf2fc-8f3b-41a9-bd80-eb920b8988b7\") " pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:34 crc kubenswrapper[4958]: I1008 08:52:34.876886 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:35 crc kubenswrapper[4958]: I1008 08:52:35.449099 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7kggv"] Oct 08 08:52:36 crc kubenswrapper[4958]: I1008 08:52:36.173239 4958 generic.go:334] "Generic (PLEG): container finished" podID="63daf2fc-8f3b-41a9-bd80-eb920b8988b7" containerID="b16d6544e5785bd0144ff379774b22a0b7d4cfa3b01be4d203712c190818b557" exitCode=0 Oct 08 08:52:36 crc kubenswrapper[4958]: I1008 08:52:36.173362 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kggv" event={"ID":"63daf2fc-8f3b-41a9-bd80-eb920b8988b7","Type":"ContainerDied","Data":"b16d6544e5785bd0144ff379774b22a0b7d4cfa3b01be4d203712c190818b557"} Oct 08 08:52:36 crc kubenswrapper[4958]: I1008 08:52:36.175798 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kggv" event={"ID":"63daf2fc-8f3b-41a9-bd80-eb920b8988b7","Type":"ContainerStarted","Data":"d530dafaba1185958b17afd80e39be07792e78bbe6178e8a6bd1de995a5aca23"} Oct 08 08:52:38 crc kubenswrapper[4958]: I1008 08:52:38.197987 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kggv" 
event={"ID":"63daf2fc-8f3b-41a9-bd80-eb920b8988b7","Type":"ContainerStarted","Data":"281e260dd4b0ea607b08441c2c167f915cb393adbc36143302a37e16c1ae7f6f"} Oct 08 08:52:39 crc kubenswrapper[4958]: I1008 08:52:39.217667 4958 generic.go:334] "Generic (PLEG): container finished" podID="63daf2fc-8f3b-41a9-bd80-eb920b8988b7" containerID="281e260dd4b0ea607b08441c2c167f915cb393adbc36143302a37e16c1ae7f6f" exitCode=0 Oct 08 08:52:39 crc kubenswrapper[4958]: I1008 08:52:39.217772 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kggv" event={"ID":"63daf2fc-8f3b-41a9-bd80-eb920b8988b7","Type":"ContainerDied","Data":"281e260dd4b0ea607b08441c2c167f915cb393adbc36143302a37e16c1ae7f6f"} Oct 08 08:52:39 crc kubenswrapper[4958]: I1008 08:52:39.577709 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:52:39 crc kubenswrapper[4958]: E1008 08:52:39.578089 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:52:40 crc kubenswrapper[4958]: I1008 08:52:40.229747 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kggv" event={"ID":"63daf2fc-8f3b-41a9-bd80-eb920b8988b7","Type":"ContainerStarted","Data":"662e104bf4c5d3b2e14bf7eac056f8410581e6f720bbf62f142c30def28ec535"} Oct 08 08:52:40 crc kubenswrapper[4958]: I1008 08:52:40.275767 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7kggv" podStartSLOduration=2.796090656 podStartE2EDuration="6.275742753s" 
podCreationTimestamp="2025-10-08 08:52:34 +0000 UTC" firstStartedPulling="2025-10-08 08:52:36.17527805 +0000 UTC m=+8299.304970651" lastFinishedPulling="2025-10-08 08:52:39.654930137 +0000 UTC m=+8302.784622748" observedRunningTime="2025-10-08 08:52:40.256489187 +0000 UTC m=+8303.386181878" watchObservedRunningTime="2025-10-08 08:52:40.275742753 +0000 UTC m=+8303.405435394" Oct 08 08:52:44 crc kubenswrapper[4958]: I1008 08:52:44.877891 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:44 crc kubenswrapper[4958]: I1008 08:52:44.878439 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:44 crc kubenswrapper[4958]: I1008 08:52:44.940454 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:45 crc kubenswrapper[4958]: I1008 08:52:45.326488 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:45 crc kubenswrapper[4958]: I1008 08:52:45.380307 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7kggv"] Oct 08 08:52:47 crc kubenswrapper[4958]: I1008 08:52:47.307723 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7kggv" podUID="63daf2fc-8f3b-41a9-bd80-eb920b8988b7" containerName="registry-server" containerID="cri-o://662e104bf4c5d3b2e14bf7eac056f8410581e6f720bbf62f142c30def28ec535" gracePeriod=2 Oct 08 08:52:47 crc kubenswrapper[4958]: I1008 08:52:47.785400 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:47 crc kubenswrapper[4958]: I1008 08:52:47.940260 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsjnb\" (UniqueName: \"kubernetes.io/projected/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-kube-api-access-tsjnb\") pod \"63daf2fc-8f3b-41a9-bd80-eb920b8988b7\" (UID: \"63daf2fc-8f3b-41a9-bd80-eb920b8988b7\") " Oct 08 08:52:47 crc kubenswrapper[4958]: I1008 08:52:47.940538 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-utilities\") pod \"63daf2fc-8f3b-41a9-bd80-eb920b8988b7\" (UID: \"63daf2fc-8f3b-41a9-bd80-eb920b8988b7\") " Oct 08 08:52:47 crc kubenswrapper[4958]: I1008 08:52:47.940797 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-catalog-content\") pod \"63daf2fc-8f3b-41a9-bd80-eb920b8988b7\" (UID: \"63daf2fc-8f3b-41a9-bd80-eb920b8988b7\") " Oct 08 08:52:47 crc kubenswrapper[4958]: I1008 08:52:47.941496 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-utilities" (OuterVolumeSpecName: "utilities") pod "63daf2fc-8f3b-41a9-bd80-eb920b8988b7" (UID: "63daf2fc-8f3b-41a9-bd80-eb920b8988b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:52:47 crc kubenswrapper[4958]: I1008 08:52:47.945621 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-kube-api-access-tsjnb" (OuterVolumeSpecName: "kube-api-access-tsjnb") pod "63daf2fc-8f3b-41a9-bd80-eb920b8988b7" (UID: "63daf2fc-8f3b-41a9-bd80-eb920b8988b7"). InnerVolumeSpecName "kube-api-access-tsjnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.044297 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsjnb\" (UniqueName: \"kubernetes.io/projected/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-kube-api-access-tsjnb\") on node \"crc\" DevicePath \"\"" Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.044345 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.309363 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63daf2fc-8f3b-41a9-bd80-eb920b8988b7" (UID: "63daf2fc-8f3b-41a9-bd80-eb920b8988b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.320759 4958 generic.go:334] "Generic (PLEG): container finished" podID="63daf2fc-8f3b-41a9-bd80-eb920b8988b7" containerID="662e104bf4c5d3b2e14bf7eac056f8410581e6f720bbf62f142c30def28ec535" exitCode=0 Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.320803 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kggv" event={"ID":"63daf2fc-8f3b-41a9-bd80-eb920b8988b7","Type":"ContainerDied","Data":"662e104bf4c5d3b2e14bf7eac056f8410581e6f720bbf62f142c30def28ec535"} Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.320822 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7kggv" Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.320838 4958 scope.go:117] "RemoveContainer" containerID="662e104bf4c5d3b2e14bf7eac056f8410581e6f720bbf62f142c30def28ec535" Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.320828 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7kggv" event={"ID":"63daf2fc-8f3b-41a9-bd80-eb920b8988b7","Type":"ContainerDied","Data":"d530dafaba1185958b17afd80e39be07792e78bbe6178e8a6bd1de995a5aca23"} Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.350497 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63daf2fc-8f3b-41a9-bd80-eb920b8988b7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.350659 4958 scope.go:117] "RemoveContainer" containerID="281e260dd4b0ea607b08441c2c167f915cb393adbc36143302a37e16c1ae7f6f" Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.377641 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7kggv"] Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.411810 4958 scope.go:117] "RemoveContainer" containerID="b16d6544e5785bd0144ff379774b22a0b7d4cfa3b01be4d203712c190818b557" Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.416303 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7kggv"] Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.451289 4958 scope.go:117] "RemoveContainer" containerID="662e104bf4c5d3b2e14bf7eac056f8410581e6f720bbf62f142c30def28ec535" Oct 08 08:52:48 crc kubenswrapper[4958]: E1008 08:52:48.451648 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662e104bf4c5d3b2e14bf7eac056f8410581e6f720bbf62f142c30def28ec535\": 
container with ID starting with 662e104bf4c5d3b2e14bf7eac056f8410581e6f720bbf62f142c30def28ec535 not found: ID does not exist" containerID="662e104bf4c5d3b2e14bf7eac056f8410581e6f720bbf62f142c30def28ec535" Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.451683 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662e104bf4c5d3b2e14bf7eac056f8410581e6f720bbf62f142c30def28ec535"} err="failed to get container status \"662e104bf4c5d3b2e14bf7eac056f8410581e6f720bbf62f142c30def28ec535\": rpc error: code = NotFound desc = could not find container \"662e104bf4c5d3b2e14bf7eac056f8410581e6f720bbf62f142c30def28ec535\": container with ID starting with 662e104bf4c5d3b2e14bf7eac056f8410581e6f720bbf62f142c30def28ec535 not found: ID does not exist" Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.451707 4958 scope.go:117] "RemoveContainer" containerID="281e260dd4b0ea607b08441c2c167f915cb393adbc36143302a37e16c1ae7f6f" Oct 08 08:52:48 crc kubenswrapper[4958]: E1008 08:52:48.452148 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"281e260dd4b0ea607b08441c2c167f915cb393adbc36143302a37e16c1ae7f6f\": container with ID starting with 281e260dd4b0ea607b08441c2c167f915cb393adbc36143302a37e16c1ae7f6f not found: ID does not exist" containerID="281e260dd4b0ea607b08441c2c167f915cb393adbc36143302a37e16c1ae7f6f" Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.452184 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"281e260dd4b0ea607b08441c2c167f915cb393adbc36143302a37e16c1ae7f6f"} err="failed to get container status \"281e260dd4b0ea607b08441c2c167f915cb393adbc36143302a37e16c1ae7f6f\": rpc error: code = NotFound desc = could not find container \"281e260dd4b0ea607b08441c2c167f915cb393adbc36143302a37e16c1ae7f6f\": container with ID starting with 
281e260dd4b0ea607b08441c2c167f915cb393adbc36143302a37e16c1ae7f6f not found: ID does not exist" Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.452199 4958 scope.go:117] "RemoveContainer" containerID="b16d6544e5785bd0144ff379774b22a0b7d4cfa3b01be4d203712c190818b557" Oct 08 08:52:48 crc kubenswrapper[4958]: E1008 08:52:48.452520 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b16d6544e5785bd0144ff379774b22a0b7d4cfa3b01be4d203712c190818b557\": container with ID starting with b16d6544e5785bd0144ff379774b22a0b7d4cfa3b01be4d203712c190818b557 not found: ID does not exist" containerID="b16d6544e5785bd0144ff379774b22a0b7d4cfa3b01be4d203712c190818b557" Oct 08 08:52:48 crc kubenswrapper[4958]: I1008 08:52:48.452648 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b16d6544e5785bd0144ff379774b22a0b7d4cfa3b01be4d203712c190818b557"} err="failed to get container status \"b16d6544e5785bd0144ff379774b22a0b7d4cfa3b01be4d203712c190818b557\": rpc error: code = NotFound desc = could not find container \"b16d6544e5785bd0144ff379774b22a0b7d4cfa3b01be4d203712c190818b557\": container with ID starting with b16d6544e5785bd0144ff379774b22a0b7d4cfa3b01be4d203712c190818b557 not found: ID does not exist" Oct 08 08:52:49 crc kubenswrapper[4958]: I1008 08:52:49.595599 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63daf2fc-8f3b-41a9-bd80-eb920b8988b7" path="/var/lib/kubelet/pods/63daf2fc-8f3b-41a9-bd80-eb920b8988b7/volumes" Oct 08 08:52:54 crc kubenswrapper[4958]: I1008 08:52:54.577729 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:52:54 crc kubenswrapper[4958]: E1008 08:52:54.578858 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:53:08 crc kubenswrapper[4958]: I1008 08:53:08.577503 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:53:08 crc kubenswrapper[4958]: E1008 08:53:08.578354 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:53:20 crc kubenswrapper[4958]: I1008 08:53:20.577208 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:53:20 crc kubenswrapper[4958]: E1008 08:53:20.578349 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:53:35 crc kubenswrapper[4958]: I1008 08:53:35.577023 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:53:35 crc kubenswrapper[4958]: E1008 08:53:35.580100 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:53:48 crc kubenswrapper[4958]: I1008 08:53:48.577179 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:53:48 crc kubenswrapper[4958]: E1008 08:53:48.578885 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:54:00 crc kubenswrapper[4958]: I1008 08:54:00.576588 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:54:00 crc kubenswrapper[4958]: E1008 08:54:00.578632 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:54:13 crc kubenswrapper[4958]: I1008 08:54:13.577286 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:54:13 crc kubenswrapper[4958]: E1008 08:54:13.578410 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:54:27 crc kubenswrapper[4958]: I1008 08:54:27.589066 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:54:27 crc kubenswrapper[4958]: E1008 08:54:27.590034 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:54:40 crc kubenswrapper[4958]: I1008 08:54:40.579409 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:54:40 crc kubenswrapper[4958]: E1008 08:54:40.584296 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:54:52 crc kubenswrapper[4958]: I1008 08:54:52.576658 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:54:52 crc kubenswrapper[4958]: E1008 08:54:52.577889 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:55:07 crc kubenswrapper[4958]: I1008 08:55:07.591084 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:55:07 crc kubenswrapper[4958]: E1008 08:55:07.591899 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:55:17 crc kubenswrapper[4958]: I1008 08:55:17.227889 4958 generic.go:334] "Generic (PLEG): container finished" podID="fde2540a-73ef-4a3b-8efd-b7b6331fa25b" containerID="dc482fb8ed29bcabd4b683c6cabf14266fc20b5fa2895835ebd79171e9ac84f4" exitCode=0 Oct 08 08:55:17 crc kubenswrapper[4958]: I1008 08:55:17.228024 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-pndgl" event={"ID":"fde2540a-73ef-4a3b-8efd-b7b6331fa25b","Type":"ContainerDied","Data":"dc482fb8ed29bcabd4b683c6cabf14266fc20b5fa2895835ebd79171e9ac84f4"} Oct 08 08:55:18 crc kubenswrapper[4958]: I1008 08:55:18.581864 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:55:18 crc kubenswrapper[4958]: E1008 08:55:18.582480 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:55:18 crc kubenswrapper[4958]: I1008 08:55:18.757895 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:55:18 crc kubenswrapper[4958]: I1008 08:55:18.930584 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-inventory\") pod \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " Oct 08 08:55:18 crc kubenswrapper[4958]: I1008 08:55:18.930726 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-libvirt-secret-0\") pod \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " Oct 08 08:55:18 crc kubenswrapper[4958]: I1008 08:55:18.930773 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkwjm\" (UniqueName: \"kubernetes.io/projected/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-kube-api-access-rkwjm\") pod \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " Oct 08 08:55:18 crc kubenswrapper[4958]: I1008 08:55:18.930812 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-libvirt-combined-ca-bundle\") pod \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " Oct 08 08:55:18 crc kubenswrapper[4958]: I1008 08:55:18.930992 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-ssh-key\") pod \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\" (UID: \"fde2540a-73ef-4a3b-8efd-b7b6331fa25b\") " Oct 08 08:55:18 crc kubenswrapper[4958]: I1008 08:55:18.936899 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-kube-api-access-rkwjm" (OuterVolumeSpecName: "kube-api-access-rkwjm") pod "fde2540a-73ef-4a3b-8efd-b7b6331fa25b" (UID: "fde2540a-73ef-4a3b-8efd-b7b6331fa25b"). InnerVolumeSpecName "kube-api-access-rkwjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:55:18 crc kubenswrapper[4958]: I1008 08:55:18.938191 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "fde2540a-73ef-4a3b-8efd-b7b6331fa25b" (UID: "fde2540a-73ef-4a3b-8efd-b7b6331fa25b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:55:18 crc kubenswrapper[4958]: I1008 08:55:18.963657 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-inventory" (OuterVolumeSpecName: "inventory") pod "fde2540a-73ef-4a3b-8efd-b7b6331fa25b" (UID: "fde2540a-73ef-4a3b-8efd-b7b6331fa25b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:55:18 crc kubenswrapper[4958]: I1008 08:55:18.991128 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "fde2540a-73ef-4a3b-8efd-b7b6331fa25b" (UID: "fde2540a-73ef-4a3b-8efd-b7b6331fa25b"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:55:18 crc kubenswrapper[4958]: I1008 08:55:18.993196 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fde2540a-73ef-4a3b-8efd-b7b6331fa25b" (UID: "fde2540a-73ef-4a3b-8efd-b7b6331fa25b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.033768 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.034079 4958 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.034162 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkwjm\" (UniqueName: \"kubernetes.io/projected/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-kube-api-access-rkwjm\") on node \"crc\" DevicePath \"\"" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.034220 4958 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.034275 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fde2540a-73ef-4a3b-8efd-b7b6331fa25b-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.248048 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-pndgl" 
event={"ID":"fde2540a-73ef-4a3b-8efd-b7b6331fa25b","Type":"ContainerDied","Data":"a18dae8d5dd8c67b79fa8773df8ab3e989a90b231572d9f7720d314a547e6450"} Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.248104 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a18dae8d5dd8c67b79fa8773df8ab3e989a90b231572d9f7720d314a547e6450" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.248091 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-pndgl" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.386580 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-7qrs9"] Oct 08 08:55:19 crc kubenswrapper[4958]: E1008 08:55:19.387244 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63daf2fc-8f3b-41a9-bd80-eb920b8988b7" containerName="extract-content" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.387312 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="63daf2fc-8f3b-41a9-bd80-eb920b8988b7" containerName="extract-content" Oct 08 08:55:19 crc kubenswrapper[4958]: E1008 08:55:19.387436 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63daf2fc-8f3b-41a9-bd80-eb920b8988b7" containerName="extract-utilities" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.387500 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="63daf2fc-8f3b-41a9-bd80-eb920b8988b7" containerName="extract-utilities" Oct 08 08:55:19 crc kubenswrapper[4958]: E1008 08:55:19.387569 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde2540a-73ef-4a3b-8efd-b7b6331fa25b" containerName="libvirt-openstack-openstack-cell1" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.387620 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde2540a-73ef-4a3b-8efd-b7b6331fa25b" containerName="libvirt-openstack-openstack-cell1" Oct 08 08:55:19 crc 
kubenswrapper[4958]: E1008 08:55:19.387679 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63daf2fc-8f3b-41a9-bd80-eb920b8988b7" containerName="registry-server" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.387730 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="63daf2fc-8f3b-41a9-bd80-eb920b8988b7" containerName="registry-server" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.388011 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="63daf2fc-8f3b-41a9-bd80-eb920b8988b7" containerName="registry-server" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.388094 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde2540a-73ef-4a3b-8efd-b7b6331fa25b" containerName="libvirt-openstack-openstack-cell1" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.388790 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.394409 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.394472 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.394617 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.394912 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.394922 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.395163 4958 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"dataplane-adoption-secret" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.398520 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.407705 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-7qrs9"] Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.456572 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.559239 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.559353 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-inventory\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.559387 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ksvt\" (UniqueName: \"kubernetes.io/projected/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-kube-api-access-7ksvt\") pod 
\"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.559431 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.559476 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.559552 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.559612 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc 
kubenswrapper[4958]: I1008 08:55:19.559651 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.559703 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.560815 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.661282 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.661356 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.661385 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.661427 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.661468 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.661517 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-inventory\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.661537 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ksvt\" (UniqueName: \"kubernetes.io/projected/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-kube-api-access-7ksvt\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.661578 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.665372 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.667998 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.668101 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " 
pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.670215 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.680358 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.680366 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.681745 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-inventory\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.685208 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ksvt\" (UniqueName: 
\"kubernetes.io/projected/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-kube-api-access-7ksvt\") pod \"nova-cell1-openstack-openstack-cell1-7qrs9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:19 crc kubenswrapper[4958]: I1008 08:55:19.756812 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:55:20 crc kubenswrapper[4958]: I1008 08:55:20.345475 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-7qrs9"] Oct 08 08:55:20 crc kubenswrapper[4958]: W1008 08:55:20.351137 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6698d0f1_ad89_4315_9970_fbe6b2b9a7e9.slice/crio-14c9f9648787d82e3fd79686d855f6dd187cfe6a484d9cc9150b75053b55e311 WatchSource:0}: Error finding container 14c9f9648787d82e3fd79686d855f6dd187cfe6a484d9cc9150b75053b55e311: Status 404 returned error can't find the container with id 14c9f9648787d82e3fd79686d855f6dd187cfe6a484d9cc9150b75053b55e311 Oct 08 08:55:21 crc kubenswrapper[4958]: I1008 08:55:21.272374 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" event={"ID":"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9","Type":"ContainerStarted","Data":"14c9f9648787d82e3fd79686d855f6dd187cfe6a484d9cc9150b75053b55e311"} Oct 08 08:55:22 crc kubenswrapper[4958]: I1008 08:55:22.296193 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" event={"ID":"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9","Type":"ContainerStarted","Data":"568f50dc2ccf06b5dbc9b295e014a7fc592f71216abbdde90f8dceeee6db8d23"} Oct 08 08:55:22 crc kubenswrapper[4958]: I1008 08:55:22.321124 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" podStartSLOduration=2.655633715 podStartE2EDuration="3.321100149s" podCreationTimestamp="2025-10-08 08:55:19 +0000 UTC" firstStartedPulling="2025-10-08 08:55:20.356500895 +0000 UTC m=+8463.486193496" lastFinishedPulling="2025-10-08 08:55:21.021967319 +0000 UTC m=+8464.151659930" observedRunningTime="2025-10-08 08:55:22.316652588 +0000 UTC m=+8465.446345229" watchObservedRunningTime="2025-10-08 08:55:22.321100149 +0000 UTC m=+8465.450792770" Oct 08 08:55:31 crc kubenswrapper[4958]: I1008 08:55:31.576907 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:55:31 crc kubenswrapper[4958]: E1008 08:55:31.578036 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 08:55:45 crc kubenswrapper[4958]: I1008 08:55:45.576760 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:55:46 crc kubenswrapper[4958]: I1008 08:55:46.687901 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"c096c5b6acc70b4eb4a1c3d42857165f0cb09d28f320bf0fd4b6ce81f76cc35e"} Oct 08 08:57:30 crc kubenswrapper[4958]: I1008 08:57:30.914635 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wc56k"] Oct 08 08:57:30 crc kubenswrapper[4958]: I1008 08:57:30.918371 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wc56k" Oct 08 08:57:30 crc kubenswrapper[4958]: I1008 08:57:30.953987 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wc56k"] Oct 08 08:57:31 crc kubenswrapper[4958]: I1008 08:57:31.101868 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvtf5\" (UniqueName: \"kubernetes.io/projected/1264ef36-ebe0-4e83-a46a-f43616fdc1c4-kube-api-access-gvtf5\") pod \"certified-operators-wc56k\" (UID: \"1264ef36-ebe0-4e83-a46a-f43616fdc1c4\") " pod="openshift-marketplace/certified-operators-wc56k" Oct 08 08:57:31 crc kubenswrapper[4958]: I1008 08:57:31.101912 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1264ef36-ebe0-4e83-a46a-f43616fdc1c4-catalog-content\") pod \"certified-operators-wc56k\" (UID: \"1264ef36-ebe0-4e83-a46a-f43616fdc1c4\") " pod="openshift-marketplace/certified-operators-wc56k" Oct 08 08:57:31 crc kubenswrapper[4958]: I1008 08:57:31.102094 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1264ef36-ebe0-4e83-a46a-f43616fdc1c4-utilities\") pod \"certified-operators-wc56k\" (UID: \"1264ef36-ebe0-4e83-a46a-f43616fdc1c4\") " pod="openshift-marketplace/certified-operators-wc56k" Oct 08 08:57:31 crc kubenswrapper[4958]: I1008 08:57:31.204247 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1264ef36-ebe0-4e83-a46a-f43616fdc1c4-utilities\") pod \"certified-operators-wc56k\" (UID: \"1264ef36-ebe0-4e83-a46a-f43616fdc1c4\") " pod="openshift-marketplace/certified-operators-wc56k" Oct 08 08:57:31 crc kubenswrapper[4958]: I1008 08:57:31.204426 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gvtf5\" (UniqueName: \"kubernetes.io/projected/1264ef36-ebe0-4e83-a46a-f43616fdc1c4-kube-api-access-gvtf5\") pod \"certified-operators-wc56k\" (UID: \"1264ef36-ebe0-4e83-a46a-f43616fdc1c4\") " pod="openshift-marketplace/certified-operators-wc56k" Oct 08 08:57:31 crc kubenswrapper[4958]: I1008 08:57:31.204459 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1264ef36-ebe0-4e83-a46a-f43616fdc1c4-catalog-content\") pod \"certified-operators-wc56k\" (UID: \"1264ef36-ebe0-4e83-a46a-f43616fdc1c4\") " pod="openshift-marketplace/certified-operators-wc56k" Oct 08 08:57:31 crc kubenswrapper[4958]: I1008 08:57:31.204705 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1264ef36-ebe0-4e83-a46a-f43616fdc1c4-utilities\") pod \"certified-operators-wc56k\" (UID: \"1264ef36-ebe0-4e83-a46a-f43616fdc1c4\") " pod="openshift-marketplace/certified-operators-wc56k" Oct 08 08:57:31 crc kubenswrapper[4958]: I1008 08:57:31.205329 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1264ef36-ebe0-4e83-a46a-f43616fdc1c4-catalog-content\") pod \"certified-operators-wc56k\" (UID: \"1264ef36-ebe0-4e83-a46a-f43616fdc1c4\") " pod="openshift-marketplace/certified-operators-wc56k" Oct 08 08:57:31 crc kubenswrapper[4958]: I1008 08:57:31.234563 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvtf5\" (UniqueName: \"kubernetes.io/projected/1264ef36-ebe0-4e83-a46a-f43616fdc1c4-kube-api-access-gvtf5\") pod \"certified-operators-wc56k\" (UID: \"1264ef36-ebe0-4e83-a46a-f43616fdc1c4\") " pod="openshift-marketplace/certified-operators-wc56k" Oct 08 08:57:31 crc kubenswrapper[4958]: I1008 08:57:31.246751 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wc56k" Oct 08 08:57:31 crc kubenswrapper[4958]: I1008 08:57:31.766295 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wc56k"] Oct 08 08:57:32 crc kubenswrapper[4958]: I1008 08:57:32.110016 4958 generic.go:334] "Generic (PLEG): container finished" podID="1264ef36-ebe0-4e83-a46a-f43616fdc1c4" containerID="953c80f5884c27993a7df1e5f3c3ef607c6f6975654b82078d117d917e02f4d7" exitCode=0 Oct 08 08:57:32 crc kubenswrapper[4958]: I1008 08:57:32.110059 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wc56k" event={"ID":"1264ef36-ebe0-4e83-a46a-f43616fdc1c4","Type":"ContainerDied","Data":"953c80f5884c27993a7df1e5f3c3ef607c6f6975654b82078d117d917e02f4d7"} Oct 08 08:57:32 crc kubenswrapper[4958]: I1008 08:57:32.110084 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wc56k" event={"ID":"1264ef36-ebe0-4e83-a46a-f43616fdc1c4","Type":"ContainerStarted","Data":"cafc1b33da277aaea4a427748ff9316dee44ed105c46f7f29ebfb2b55e24a321"} Oct 08 08:57:32 crc kubenswrapper[4958]: I1008 08:57:32.112654 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 08:57:35 crc kubenswrapper[4958]: I1008 08:57:35.507020 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qzmxl"] Oct 08 08:57:35 crc kubenswrapper[4958]: I1008 08:57:35.509912 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:57:35 crc kubenswrapper[4958]: I1008 08:57:35.524899 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qzmxl"] Oct 08 08:57:35 crc kubenswrapper[4958]: I1008 08:57:35.527381 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be45f3c7-1eef-4140-9d50-968dee9bd49f-utilities\") pod \"redhat-operators-qzmxl\" (UID: \"be45f3c7-1eef-4140-9d50-968dee9bd49f\") " pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:57:35 crc kubenswrapper[4958]: I1008 08:57:35.527432 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwvwz\" (UniqueName: \"kubernetes.io/projected/be45f3c7-1eef-4140-9d50-968dee9bd49f-kube-api-access-hwvwz\") pod \"redhat-operators-qzmxl\" (UID: \"be45f3c7-1eef-4140-9d50-968dee9bd49f\") " pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:57:35 crc kubenswrapper[4958]: I1008 08:57:35.527469 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be45f3c7-1eef-4140-9d50-968dee9bd49f-catalog-content\") pod \"redhat-operators-qzmxl\" (UID: \"be45f3c7-1eef-4140-9d50-968dee9bd49f\") " pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:57:35 crc kubenswrapper[4958]: I1008 08:57:35.628617 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be45f3c7-1eef-4140-9d50-968dee9bd49f-utilities\") pod \"redhat-operators-qzmxl\" (UID: \"be45f3c7-1eef-4140-9d50-968dee9bd49f\") " pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:57:35 crc kubenswrapper[4958]: I1008 08:57:35.628851 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hwvwz\" (UniqueName: \"kubernetes.io/projected/be45f3c7-1eef-4140-9d50-968dee9bd49f-kube-api-access-hwvwz\") pod \"redhat-operators-qzmxl\" (UID: \"be45f3c7-1eef-4140-9d50-968dee9bd49f\") " pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:57:35 crc kubenswrapper[4958]: I1008 08:57:35.628923 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be45f3c7-1eef-4140-9d50-968dee9bd49f-catalog-content\") pod \"redhat-operators-qzmxl\" (UID: \"be45f3c7-1eef-4140-9d50-968dee9bd49f\") " pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:57:35 crc kubenswrapper[4958]: I1008 08:57:35.629049 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be45f3c7-1eef-4140-9d50-968dee9bd49f-utilities\") pod \"redhat-operators-qzmxl\" (UID: \"be45f3c7-1eef-4140-9d50-968dee9bd49f\") " pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:57:35 crc kubenswrapper[4958]: I1008 08:57:35.629364 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be45f3c7-1eef-4140-9d50-968dee9bd49f-catalog-content\") pod \"redhat-operators-qzmxl\" (UID: \"be45f3c7-1eef-4140-9d50-968dee9bd49f\") " pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:57:35 crc kubenswrapper[4958]: I1008 08:57:35.648650 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwvwz\" (UniqueName: \"kubernetes.io/projected/be45f3c7-1eef-4140-9d50-968dee9bd49f-kube-api-access-hwvwz\") pod \"redhat-operators-qzmxl\" (UID: \"be45f3c7-1eef-4140-9d50-968dee9bd49f\") " pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:57:35 crc kubenswrapper[4958]: I1008 08:57:35.826962 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:57:37 crc kubenswrapper[4958]: I1008 08:57:37.431878 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qzmxl"] Oct 08 08:57:37 crc kubenswrapper[4958]: W1008 08:57:37.441486 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe45f3c7_1eef_4140_9d50_968dee9bd49f.slice/crio-26594d6e8ac40d5ac55b788b8777638ec3bb22b78ec881c136c07fce929aa7c7 WatchSource:0}: Error finding container 26594d6e8ac40d5ac55b788b8777638ec3bb22b78ec881c136c07fce929aa7c7: Status 404 returned error can't find the container with id 26594d6e8ac40d5ac55b788b8777638ec3bb22b78ec881c136c07fce929aa7c7 Oct 08 08:57:38 crc kubenswrapper[4958]: I1008 08:57:38.184277 4958 generic.go:334] "Generic (PLEG): container finished" podID="be45f3c7-1eef-4140-9d50-968dee9bd49f" containerID="2968dc9370fde70cd24ff14fdc8b0dbd2ab9c94e0f9dffcaec21d62396ed8dbd" exitCode=0 Oct 08 08:57:38 crc kubenswrapper[4958]: I1008 08:57:38.184492 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzmxl" event={"ID":"be45f3c7-1eef-4140-9d50-968dee9bd49f","Type":"ContainerDied","Data":"2968dc9370fde70cd24ff14fdc8b0dbd2ab9c94e0f9dffcaec21d62396ed8dbd"} Oct 08 08:57:38 crc kubenswrapper[4958]: I1008 08:57:38.184710 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzmxl" event={"ID":"be45f3c7-1eef-4140-9d50-968dee9bd49f","Type":"ContainerStarted","Data":"26594d6e8ac40d5ac55b788b8777638ec3bb22b78ec881c136c07fce929aa7c7"} Oct 08 08:57:38 crc kubenswrapper[4958]: I1008 08:57:38.187915 4958 generic.go:334] "Generic (PLEG): container finished" podID="1264ef36-ebe0-4e83-a46a-f43616fdc1c4" containerID="cb0c08b33c842e61c7e5827905c0c27b5c5f142e1cf5db5af46bf40d35559f21" exitCode=0 Oct 08 08:57:38 crc kubenswrapper[4958]: I1008 08:57:38.188001 
4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wc56k" event={"ID":"1264ef36-ebe0-4e83-a46a-f43616fdc1c4","Type":"ContainerDied","Data":"cb0c08b33c842e61c7e5827905c0c27b5c5f142e1cf5db5af46bf40d35559f21"} Oct 08 08:57:39 crc kubenswrapper[4958]: I1008 08:57:39.203364 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzmxl" event={"ID":"be45f3c7-1eef-4140-9d50-968dee9bd49f","Type":"ContainerStarted","Data":"91c0e0fd0b0096e9921067613fd0c745855098ac663ca7ff523010b7a59c4b1c"} Oct 08 08:57:39 crc kubenswrapper[4958]: I1008 08:57:39.207284 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wc56k" event={"ID":"1264ef36-ebe0-4e83-a46a-f43616fdc1c4","Type":"ContainerStarted","Data":"bdf5aa777b060e20a43a41bf753909c51e31e2510938dab0b541948eb41bd8fd"} Oct 08 08:57:39 crc kubenswrapper[4958]: I1008 08:57:39.257368 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wc56k" podStartSLOduration=2.559241333 podStartE2EDuration="9.257347469s" podCreationTimestamp="2025-10-08 08:57:30 +0000 UTC" firstStartedPulling="2025-10-08 08:57:32.112438938 +0000 UTC m=+8595.242131539" lastFinishedPulling="2025-10-08 08:57:38.810545034 +0000 UTC m=+8601.940237675" observedRunningTime="2025-10-08 08:57:39.247535731 +0000 UTC m=+8602.377228342" watchObservedRunningTime="2025-10-08 08:57:39.257347469 +0000 UTC m=+8602.387040070" Oct 08 08:57:41 crc kubenswrapper[4958]: I1008 08:57:41.247162 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wc56k" Oct 08 08:57:41 crc kubenswrapper[4958]: I1008 08:57:41.247827 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wc56k" Oct 08 08:57:42 crc kubenswrapper[4958]: I1008 08:57:42.301486 4958 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wc56k" podUID="1264ef36-ebe0-4e83-a46a-f43616fdc1c4" containerName="registry-server" probeResult="failure" output=< Oct 08 08:57:42 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 08:57:42 crc kubenswrapper[4958]: > Oct 08 08:57:44 crc kubenswrapper[4958]: I1008 08:57:44.306927 4958 generic.go:334] "Generic (PLEG): container finished" podID="be45f3c7-1eef-4140-9d50-968dee9bd49f" containerID="91c0e0fd0b0096e9921067613fd0c745855098ac663ca7ff523010b7a59c4b1c" exitCode=0 Oct 08 08:57:44 crc kubenswrapper[4958]: I1008 08:57:44.307022 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzmxl" event={"ID":"be45f3c7-1eef-4140-9d50-968dee9bd49f","Type":"ContainerDied","Data":"91c0e0fd0b0096e9921067613fd0c745855098ac663ca7ff523010b7a59c4b1c"} Oct 08 08:57:45 crc kubenswrapper[4958]: I1008 08:57:45.322482 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzmxl" event={"ID":"be45f3c7-1eef-4140-9d50-968dee9bd49f","Type":"ContainerStarted","Data":"de02004e97b3df6e927d544816d5a5443a717ba059e2344f59471e2a80dff8b5"} Oct 08 08:57:45 crc kubenswrapper[4958]: I1008 08:57:45.342540 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qzmxl" podStartSLOduration=3.71555335 podStartE2EDuration="10.342514874s" podCreationTimestamp="2025-10-08 08:57:35 +0000 UTC" firstStartedPulling="2025-10-08 08:57:38.191338242 +0000 UTC m=+8601.321030883" lastFinishedPulling="2025-10-08 08:57:44.818299816 +0000 UTC m=+8607.947992407" observedRunningTime="2025-10-08 08:57:45.34088966 +0000 UTC m=+8608.470582321" watchObservedRunningTime="2025-10-08 08:57:45.342514874 +0000 UTC m=+8608.472207475" Oct 08 08:57:45 crc kubenswrapper[4958]: I1008 08:57:45.827608 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:57:45 crc kubenswrapper[4958]: I1008 08:57:45.827940 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:57:46 crc kubenswrapper[4958]: I1008 08:57:46.889775 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qzmxl" podUID="be45f3c7-1eef-4140-9d50-968dee9bd49f" containerName="registry-server" probeResult="failure" output=< Oct 08 08:57:46 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 08:57:46 crc kubenswrapper[4958]: > Oct 08 08:57:52 crc kubenswrapper[4958]: I1008 08:57:52.328212 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wc56k" podUID="1264ef36-ebe0-4e83-a46a-f43616fdc1c4" containerName="registry-server" probeResult="failure" output=< Oct 08 08:57:52 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 08:57:52 crc kubenswrapper[4958]: > Oct 08 08:57:56 crc kubenswrapper[4958]: I1008 08:57:56.890511 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qzmxl" podUID="be45f3c7-1eef-4140-9d50-968dee9bd49f" containerName="registry-server" probeResult="failure" output=< Oct 08 08:57:56 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 08:57:56 crc kubenswrapper[4958]: > Oct 08 08:58:01 crc kubenswrapper[4958]: I1008 08:58:01.298267 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wc56k" Oct 08 08:58:01 crc kubenswrapper[4958]: I1008 08:58:01.366434 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wc56k" Oct 08 08:58:01 crc kubenswrapper[4958]: I1008 08:58:01.943989 4958 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/certified-operators-wc56k"] Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.122727 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-shghl"] Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.123013 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-shghl" podUID="87768529-d604-44d6-b659-24d6464a2076" containerName="registry-server" containerID="cri-o://02f2f9489a649e1f06825fd5533b4e13ebc168a3a10b1fb79c64ddda79320549" gracePeriod=2 Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.574303 4958 generic.go:334] "Generic (PLEG): container finished" podID="87768529-d604-44d6-b659-24d6464a2076" containerID="02f2f9489a649e1f06825fd5533b4e13ebc168a3a10b1fb79c64ddda79320549" exitCode=0 Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.574710 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shghl" event={"ID":"87768529-d604-44d6-b659-24d6464a2076","Type":"ContainerDied","Data":"02f2f9489a649e1f06825fd5533b4e13ebc168a3a10b1fb79c64ddda79320549"} Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.574783 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shghl" event={"ID":"87768529-d604-44d6-b659-24d6464a2076","Type":"ContainerDied","Data":"33d64492d980a803dc1be26689219b443c09e8d5dfda3d9013c337647e635481"} Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.574797 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33d64492d980a803dc1be26689219b443c09e8d5dfda3d9013c337647e635481" Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.681230 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-shghl" Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.781661 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87768529-d604-44d6-b659-24d6464a2076-catalog-content\") pod \"87768529-d604-44d6-b659-24d6464a2076\" (UID: \"87768529-d604-44d6-b659-24d6464a2076\") " Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.781778 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87768529-d604-44d6-b659-24d6464a2076-utilities\") pod \"87768529-d604-44d6-b659-24d6464a2076\" (UID: \"87768529-d604-44d6-b659-24d6464a2076\") " Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.782103 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85kt8\" (UniqueName: \"kubernetes.io/projected/87768529-d604-44d6-b659-24d6464a2076-kube-api-access-85kt8\") pod \"87768529-d604-44d6-b659-24d6464a2076\" (UID: \"87768529-d604-44d6-b659-24d6464a2076\") " Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.784253 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87768529-d604-44d6-b659-24d6464a2076-utilities" (OuterVolumeSpecName: "utilities") pod "87768529-d604-44d6-b659-24d6464a2076" (UID: "87768529-d604-44d6-b659-24d6464a2076"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.805061 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87768529-d604-44d6-b659-24d6464a2076-kube-api-access-85kt8" (OuterVolumeSpecName: "kube-api-access-85kt8") pod "87768529-d604-44d6-b659-24d6464a2076" (UID: "87768529-d604-44d6-b659-24d6464a2076"). InnerVolumeSpecName "kube-api-access-85kt8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.851681 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87768529-d604-44d6-b659-24d6464a2076-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87768529-d604-44d6-b659-24d6464a2076" (UID: "87768529-d604-44d6-b659-24d6464a2076"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.884976 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85kt8\" (UniqueName: \"kubernetes.io/projected/87768529-d604-44d6-b659-24d6464a2076-kube-api-access-85kt8\") on node \"crc\" DevicePath \"\"" Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.885009 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87768529-d604-44d6-b659-24d6464a2076-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:58:02 crc kubenswrapper[4958]: I1008 08:58:02.885019 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87768529-d604-44d6-b659-24d6464a2076-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:58:03 crc kubenswrapper[4958]: I1008 08:58:03.587087 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-shghl" Oct 08 08:58:03 crc kubenswrapper[4958]: I1008 08:58:03.632518 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-shghl"] Oct 08 08:58:03 crc kubenswrapper[4958]: I1008 08:58:03.641436 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-shghl"] Oct 08 08:58:05 crc kubenswrapper[4958]: I1008 08:58:05.587034 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87768529-d604-44d6-b659-24d6464a2076" path="/var/lib/kubelet/pods/87768529-d604-44d6-b659-24d6464a2076/volumes" Oct 08 08:58:06 crc kubenswrapper[4958]: I1008 08:58:06.845238 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:58:06 crc kubenswrapper[4958]: I1008 08:58:06.845702 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:58:06 crc kubenswrapper[4958]: I1008 08:58:06.897370 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qzmxl" podUID="be45f3c7-1eef-4140-9d50-968dee9bd49f" containerName="registry-server" probeResult="failure" output=< Oct 08 08:58:06 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 08:58:06 crc kubenswrapper[4958]: > Oct 08 08:58:15 crc kubenswrapper[4958]: I1008 08:58:15.921296 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:58:16 crc kubenswrapper[4958]: I1008 08:58:16.005465 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:58:16 crc kubenswrapper[4958]: I1008 08:58:16.174721 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qzmxl"] Oct 08 08:58:17 crc kubenswrapper[4958]: I1008 08:58:17.789460 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qzmxl" podUID="be45f3c7-1eef-4140-9d50-968dee9bd49f" containerName="registry-server" containerID="cri-o://de02004e97b3df6e927d544816d5a5443a717ba059e2344f59471e2a80dff8b5" gracePeriod=2 Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.295997 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.463085 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be45f3c7-1eef-4140-9d50-968dee9bd49f-utilities\") pod \"be45f3c7-1eef-4140-9d50-968dee9bd49f\" (UID: \"be45f3c7-1eef-4140-9d50-968dee9bd49f\") " Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.463469 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be45f3c7-1eef-4140-9d50-968dee9bd49f-catalog-content\") pod \"be45f3c7-1eef-4140-9d50-968dee9bd49f\" (UID: \"be45f3c7-1eef-4140-9d50-968dee9bd49f\") " Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.463706 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwvwz\" (UniqueName: \"kubernetes.io/projected/be45f3c7-1eef-4140-9d50-968dee9bd49f-kube-api-access-hwvwz\") pod \"be45f3c7-1eef-4140-9d50-968dee9bd49f\" (UID: 
\"be45f3c7-1eef-4140-9d50-968dee9bd49f\") " Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.464104 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be45f3c7-1eef-4140-9d50-968dee9bd49f-utilities" (OuterVolumeSpecName: "utilities") pod "be45f3c7-1eef-4140-9d50-968dee9bd49f" (UID: "be45f3c7-1eef-4140-9d50-968dee9bd49f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.464846 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be45f3c7-1eef-4140-9d50-968dee9bd49f-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.470775 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be45f3c7-1eef-4140-9d50-968dee9bd49f-kube-api-access-hwvwz" (OuterVolumeSpecName: "kube-api-access-hwvwz") pod "be45f3c7-1eef-4140-9d50-968dee9bd49f" (UID: "be45f3c7-1eef-4140-9d50-968dee9bd49f"). InnerVolumeSpecName "kube-api-access-hwvwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.567671 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwvwz\" (UniqueName: \"kubernetes.io/projected/be45f3c7-1eef-4140-9d50-968dee9bd49f-kube-api-access-hwvwz\") on node \"crc\" DevicePath \"\"" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.600692 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be45f3c7-1eef-4140-9d50-968dee9bd49f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be45f3c7-1eef-4140-9d50-968dee9bd49f" (UID: "be45f3c7-1eef-4140-9d50-968dee9bd49f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.670279 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be45f3c7-1eef-4140-9d50-968dee9bd49f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.811182 4958 generic.go:334] "Generic (PLEG): container finished" podID="be45f3c7-1eef-4140-9d50-968dee9bd49f" containerID="de02004e97b3df6e927d544816d5a5443a717ba059e2344f59471e2a80dff8b5" exitCode=0 Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.811250 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzmxl" event={"ID":"be45f3c7-1eef-4140-9d50-968dee9bd49f","Type":"ContainerDied","Data":"de02004e97b3df6e927d544816d5a5443a717ba059e2344f59471e2a80dff8b5"} Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.811290 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzmxl" event={"ID":"be45f3c7-1eef-4140-9d50-968dee9bd49f","Type":"ContainerDied","Data":"26594d6e8ac40d5ac55b788b8777638ec3bb22b78ec881c136c07fce929aa7c7"} Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.811321 4958 scope.go:117] "RemoveContainer" containerID="de02004e97b3df6e927d544816d5a5443a717ba059e2344f59471e2a80dff8b5" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.811360 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qzmxl" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.863841 4958 scope.go:117] "RemoveContainer" containerID="91c0e0fd0b0096e9921067613fd0c745855098ac663ca7ff523010b7a59c4b1c" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.872063 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qzmxl"] Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.882729 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qzmxl"] Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.894216 4958 scope.go:117] "RemoveContainer" containerID="2968dc9370fde70cd24ff14fdc8b0dbd2ab9c94e0f9dffcaec21d62396ed8dbd" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.947683 4958 scope.go:117] "RemoveContainer" containerID="de02004e97b3df6e927d544816d5a5443a717ba059e2344f59471e2a80dff8b5" Oct 08 08:58:18 crc kubenswrapper[4958]: E1008 08:58:18.948280 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de02004e97b3df6e927d544816d5a5443a717ba059e2344f59471e2a80dff8b5\": container with ID starting with de02004e97b3df6e927d544816d5a5443a717ba059e2344f59471e2a80dff8b5 not found: ID does not exist" containerID="de02004e97b3df6e927d544816d5a5443a717ba059e2344f59471e2a80dff8b5" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.948315 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de02004e97b3df6e927d544816d5a5443a717ba059e2344f59471e2a80dff8b5"} err="failed to get container status \"de02004e97b3df6e927d544816d5a5443a717ba059e2344f59471e2a80dff8b5\": rpc error: code = NotFound desc = could not find container \"de02004e97b3df6e927d544816d5a5443a717ba059e2344f59471e2a80dff8b5\": container with ID starting with de02004e97b3df6e927d544816d5a5443a717ba059e2344f59471e2a80dff8b5 not found: ID does 
not exist" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.948338 4958 scope.go:117] "RemoveContainer" containerID="91c0e0fd0b0096e9921067613fd0c745855098ac663ca7ff523010b7a59c4b1c" Oct 08 08:58:18 crc kubenswrapper[4958]: E1008 08:58:18.948708 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91c0e0fd0b0096e9921067613fd0c745855098ac663ca7ff523010b7a59c4b1c\": container with ID starting with 91c0e0fd0b0096e9921067613fd0c745855098ac663ca7ff523010b7a59c4b1c not found: ID does not exist" containerID="91c0e0fd0b0096e9921067613fd0c745855098ac663ca7ff523010b7a59c4b1c" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.948763 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91c0e0fd0b0096e9921067613fd0c745855098ac663ca7ff523010b7a59c4b1c"} err="failed to get container status \"91c0e0fd0b0096e9921067613fd0c745855098ac663ca7ff523010b7a59c4b1c\": rpc error: code = NotFound desc = could not find container \"91c0e0fd0b0096e9921067613fd0c745855098ac663ca7ff523010b7a59c4b1c\": container with ID starting with 91c0e0fd0b0096e9921067613fd0c745855098ac663ca7ff523010b7a59c4b1c not found: ID does not exist" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.948796 4958 scope.go:117] "RemoveContainer" containerID="2968dc9370fde70cd24ff14fdc8b0dbd2ab9c94e0f9dffcaec21d62396ed8dbd" Oct 08 08:58:18 crc kubenswrapper[4958]: E1008 08:58:18.949172 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2968dc9370fde70cd24ff14fdc8b0dbd2ab9c94e0f9dffcaec21d62396ed8dbd\": container with ID starting with 2968dc9370fde70cd24ff14fdc8b0dbd2ab9c94e0f9dffcaec21d62396ed8dbd not found: ID does not exist" containerID="2968dc9370fde70cd24ff14fdc8b0dbd2ab9c94e0f9dffcaec21d62396ed8dbd" Oct 08 08:58:18 crc kubenswrapper[4958]: I1008 08:58:18.949191 4958 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2968dc9370fde70cd24ff14fdc8b0dbd2ab9c94e0f9dffcaec21d62396ed8dbd"} err="failed to get container status \"2968dc9370fde70cd24ff14fdc8b0dbd2ab9c94e0f9dffcaec21d62396ed8dbd\": rpc error: code = NotFound desc = could not find container \"2968dc9370fde70cd24ff14fdc8b0dbd2ab9c94e0f9dffcaec21d62396ed8dbd\": container with ID starting with 2968dc9370fde70cd24ff14fdc8b0dbd2ab9c94e0f9dffcaec21d62396ed8dbd not found: ID does not exist" Oct 08 08:58:19 crc kubenswrapper[4958]: I1008 08:58:19.595924 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be45f3c7-1eef-4140-9d50-968dee9bd49f" path="/var/lib/kubelet/pods/be45f3c7-1eef-4140-9d50-968dee9bd49f/volumes" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.416495 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ktm9k"] Oct 08 08:58:30 crc kubenswrapper[4958]: E1008 08:58:30.417809 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87768529-d604-44d6-b659-24d6464a2076" containerName="extract-utilities" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.417830 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="87768529-d604-44d6-b659-24d6464a2076" containerName="extract-utilities" Oct 08 08:58:30 crc kubenswrapper[4958]: E1008 08:58:30.417858 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be45f3c7-1eef-4140-9d50-968dee9bd49f" containerName="extract-content" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.417866 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="be45f3c7-1eef-4140-9d50-968dee9bd49f" containerName="extract-content" Oct 08 08:58:30 crc kubenswrapper[4958]: E1008 08:58:30.417886 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87768529-d604-44d6-b659-24d6464a2076" containerName="extract-content" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.417895 4958 
state_mem.go:107] "Deleted CPUSet assignment" podUID="87768529-d604-44d6-b659-24d6464a2076" containerName="extract-content" Oct 08 08:58:30 crc kubenswrapper[4958]: E1008 08:58:30.417921 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87768529-d604-44d6-b659-24d6464a2076" containerName="registry-server" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.417929 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="87768529-d604-44d6-b659-24d6464a2076" containerName="registry-server" Oct 08 08:58:30 crc kubenswrapper[4958]: E1008 08:58:30.418002 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be45f3c7-1eef-4140-9d50-968dee9bd49f" containerName="extract-utilities" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.418012 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="be45f3c7-1eef-4140-9d50-968dee9bd49f" containerName="extract-utilities" Oct 08 08:58:30 crc kubenswrapper[4958]: E1008 08:58:30.418056 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be45f3c7-1eef-4140-9d50-968dee9bd49f" containerName="registry-server" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.418065 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="be45f3c7-1eef-4140-9d50-968dee9bd49f" containerName="registry-server" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.418357 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="87768529-d604-44d6-b659-24d6464a2076" containerName="registry-server" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.418399 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="be45f3c7-1eef-4140-9d50-968dee9bd49f" containerName="registry-server" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.420670 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.433165 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ktm9k"] Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.586079 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-catalog-content\") pod \"redhat-marketplace-ktm9k\" (UID: \"2d792e4c-53ce-4d8e-a1a8-98d3fd875810\") " pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.586421 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-984t8\" (UniqueName: \"kubernetes.io/projected/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-kube-api-access-984t8\") pod \"redhat-marketplace-ktm9k\" (UID: \"2d792e4c-53ce-4d8e-a1a8-98d3fd875810\") " pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.586506 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-utilities\") pod \"redhat-marketplace-ktm9k\" (UID: \"2d792e4c-53ce-4d8e-a1a8-98d3fd875810\") " pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.688408 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-984t8\" (UniqueName: \"kubernetes.io/projected/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-kube-api-access-984t8\") pod \"redhat-marketplace-ktm9k\" (UID: \"2d792e4c-53ce-4d8e-a1a8-98d3fd875810\") " pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.688823 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-utilities\") pod \"redhat-marketplace-ktm9k\" (UID: \"2d792e4c-53ce-4d8e-a1a8-98d3fd875810\") " pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.689134 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-catalog-content\") pod \"redhat-marketplace-ktm9k\" (UID: \"2d792e4c-53ce-4d8e-a1a8-98d3fd875810\") " pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.689730 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-utilities\") pod \"redhat-marketplace-ktm9k\" (UID: \"2d792e4c-53ce-4d8e-a1a8-98d3fd875810\") " pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.690548 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-catalog-content\") pod \"redhat-marketplace-ktm9k\" (UID: \"2d792e4c-53ce-4d8e-a1a8-98d3fd875810\") " pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.715019 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-984t8\" (UniqueName: \"kubernetes.io/projected/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-kube-api-access-984t8\") pod \"redhat-marketplace-ktm9k\" (UID: \"2d792e4c-53ce-4d8e-a1a8-98d3fd875810\") " pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:30 crc kubenswrapper[4958]: I1008 08:58:30.748606 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:31 crc kubenswrapper[4958]: I1008 08:58:31.228240 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ktm9k"] Oct 08 08:58:31 crc kubenswrapper[4958]: I1008 08:58:31.983417 4958 generic.go:334] "Generic (PLEG): container finished" podID="2d792e4c-53ce-4d8e-a1a8-98d3fd875810" containerID="604842172cee9eaa19b2d0edf8f0645969ce85180a92f1146ec0b50713da83a8" exitCode=0 Oct 08 08:58:31 crc kubenswrapper[4958]: I1008 08:58:31.983544 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktm9k" event={"ID":"2d792e4c-53ce-4d8e-a1a8-98d3fd875810","Type":"ContainerDied","Data":"604842172cee9eaa19b2d0edf8f0645969ce85180a92f1146ec0b50713da83a8"} Oct 08 08:58:31 crc kubenswrapper[4958]: I1008 08:58:31.983909 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktm9k" event={"ID":"2d792e4c-53ce-4d8e-a1a8-98d3fd875810","Type":"ContainerStarted","Data":"fa86fd40782c1559de8883fcb2e07f726aa8244b4311a8f6727ad532de87266a"} Oct 08 08:58:33 crc kubenswrapper[4958]: I1008 08:58:33.006729 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktm9k" event={"ID":"2d792e4c-53ce-4d8e-a1a8-98d3fd875810","Type":"ContainerStarted","Data":"f5d2c833b990efe396f4f21e418f409f52079321180b8e9318fa1962ac2ee4ed"} Oct 08 08:58:34 crc kubenswrapper[4958]: I1008 08:58:34.024398 4958 generic.go:334] "Generic (PLEG): container finished" podID="2d792e4c-53ce-4d8e-a1a8-98d3fd875810" containerID="f5d2c833b990efe396f4f21e418f409f52079321180b8e9318fa1962ac2ee4ed" exitCode=0 Oct 08 08:58:34 crc kubenswrapper[4958]: I1008 08:58:34.024463 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktm9k" 
event={"ID":"2d792e4c-53ce-4d8e-a1a8-98d3fd875810","Type":"ContainerDied","Data":"f5d2c833b990efe396f4f21e418f409f52079321180b8e9318fa1962ac2ee4ed"} Oct 08 08:58:35 crc kubenswrapper[4958]: I1008 08:58:35.046113 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktm9k" event={"ID":"2d792e4c-53ce-4d8e-a1a8-98d3fd875810","Type":"ContainerStarted","Data":"e013210a44e8e59b59fe0bf939535d78829f39ca66f7afb73b17b54973a473e4"} Oct 08 08:58:35 crc kubenswrapper[4958]: I1008 08:58:35.073095 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ktm9k" podStartSLOduration=2.514606242 podStartE2EDuration="5.073075354s" podCreationTimestamp="2025-10-08 08:58:30 +0000 UTC" firstStartedPulling="2025-10-08 08:58:31.985527902 +0000 UTC m=+8655.115220523" lastFinishedPulling="2025-10-08 08:58:34.543997034 +0000 UTC m=+8657.673689635" observedRunningTime="2025-10-08 08:58:35.06258296 +0000 UTC m=+8658.192275571" watchObservedRunningTime="2025-10-08 08:58:35.073075354 +0000 UTC m=+8658.202767965" Oct 08 08:58:36 crc kubenswrapper[4958]: I1008 08:58:36.845580 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:58:36 crc kubenswrapper[4958]: I1008 08:58:36.845871 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:58:40 crc kubenswrapper[4958]: I1008 08:58:40.748985 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:40 crc kubenswrapper[4958]: I1008 08:58:40.749721 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:40 crc kubenswrapper[4958]: I1008 08:58:40.824209 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:41 crc kubenswrapper[4958]: I1008 08:58:41.228982 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:41 crc kubenswrapper[4958]: I1008 08:58:41.290636 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ktm9k"] Oct 08 08:58:43 crc kubenswrapper[4958]: I1008 08:58:43.183170 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ktm9k" podUID="2d792e4c-53ce-4d8e-a1a8-98d3fd875810" containerName="registry-server" containerID="cri-o://e013210a44e8e59b59fe0bf939535d78829f39ca66f7afb73b17b54973a473e4" gracePeriod=2 Oct 08 08:58:43 crc kubenswrapper[4958]: I1008 08:58:43.802859 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:43 crc kubenswrapper[4958]: I1008 08:58:43.856297 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-utilities\") pod \"2d792e4c-53ce-4d8e-a1a8-98d3fd875810\" (UID: \"2d792e4c-53ce-4d8e-a1a8-98d3fd875810\") " Oct 08 08:58:43 crc kubenswrapper[4958]: I1008 08:58:43.856502 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-catalog-content\") pod \"2d792e4c-53ce-4d8e-a1a8-98d3fd875810\" (UID: \"2d792e4c-53ce-4d8e-a1a8-98d3fd875810\") " Oct 08 08:58:43 crc kubenswrapper[4958]: I1008 08:58:43.856607 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-984t8\" (UniqueName: \"kubernetes.io/projected/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-kube-api-access-984t8\") pod \"2d792e4c-53ce-4d8e-a1a8-98d3fd875810\" (UID: \"2d792e4c-53ce-4d8e-a1a8-98d3fd875810\") " Oct 08 08:58:43 crc kubenswrapper[4958]: I1008 08:58:43.857896 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-utilities" (OuterVolumeSpecName: "utilities") pod "2d792e4c-53ce-4d8e-a1a8-98d3fd875810" (UID: "2d792e4c-53ce-4d8e-a1a8-98d3fd875810"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:58:43 crc kubenswrapper[4958]: I1008 08:58:43.862928 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-kube-api-access-984t8" (OuterVolumeSpecName: "kube-api-access-984t8") pod "2d792e4c-53ce-4d8e-a1a8-98d3fd875810" (UID: "2d792e4c-53ce-4d8e-a1a8-98d3fd875810"). InnerVolumeSpecName "kube-api-access-984t8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:58:43 crc kubenswrapper[4958]: I1008 08:58:43.876290 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d792e4c-53ce-4d8e-a1a8-98d3fd875810" (UID: "2d792e4c-53ce-4d8e-a1a8-98d3fd875810"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 08:58:43 crc kubenswrapper[4958]: I1008 08:58:43.958346 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 08:58:43 crc kubenswrapper[4958]: I1008 08:58:43.958689 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 08:58:43 crc kubenswrapper[4958]: I1008 08:58:43.958701 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-984t8\" (UniqueName: \"kubernetes.io/projected/2d792e4c-53ce-4d8e-a1a8-98d3fd875810-kube-api-access-984t8\") on node \"crc\" DevicePath \"\"" Oct 08 08:58:44 crc kubenswrapper[4958]: I1008 08:58:44.201486 4958 generic.go:334] "Generic (PLEG): container finished" podID="2d792e4c-53ce-4d8e-a1a8-98d3fd875810" containerID="e013210a44e8e59b59fe0bf939535d78829f39ca66f7afb73b17b54973a473e4" exitCode=0 Oct 08 08:58:44 crc kubenswrapper[4958]: I1008 08:58:44.201565 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktm9k" event={"ID":"2d792e4c-53ce-4d8e-a1a8-98d3fd875810","Type":"ContainerDied","Data":"e013210a44e8e59b59fe0bf939535d78829f39ca66f7afb73b17b54973a473e4"} Oct 08 08:58:44 crc kubenswrapper[4958]: I1008 08:58:44.201612 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-ktm9k" event={"ID":"2d792e4c-53ce-4d8e-a1a8-98d3fd875810","Type":"ContainerDied","Data":"fa86fd40782c1559de8883fcb2e07f726aa8244b4311a8f6727ad532de87266a"} Oct 08 08:58:44 crc kubenswrapper[4958]: I1008 08:58:44.201638 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ktm9k" Oct 08 08:58:44 crc kubenswrapper[4958]: I1008 08:58:44.201645 4958 scope.go:117] "RemoveContainer" containerID="e013210a44e8e59b59fe0bf939535d78829f39ca66f7afb73b17b54973a473e4" Oct 08 08:58:44 crc kubenswrapper[4958]: I1008 08:58:44.229495 4958 scope.go:117] "RemoveContainer" containerID="f5d2c833b990efe396f4f21e418f409f52079321180b8e9318fa1962ac2ee4ed" Oct 08 08:58:44 crc kubenswrapper[4958]: I1008 08:58:44.270824 4958 scope.go:117] "RemoveContainer" containerID="604842172cee9eaa19b2d0edf8f0645969ce85180a92f1146ec0b50713da83a8" Oct 08 08:58:44 crc kubenswrapper[4958]: I1008 08:58:44.271064 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ktm9k"] Oct 08 08:58:44 crc kubenswrapper[4958]: I1008 08:58:44.290749 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ktm9k"] Oct 08 08:58:44 crc kubenswrapper[4958]: I1008 08:58:44.354819 4958 scope.go:117] "RemoveContainer" containerID="e013210a44e8e59b59fe0bf939535d78829f39ca66f7afb73b17b54973a473e4" Oct 08 08:58:44 crc kubenswrapper[4958]: E1008 08:58:44.355500 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e013210a44e8e59b59fe0bf939535d78829f39ca66f7afb73b17b54973a473e4\": container with ID starting with e013210a44e8e59b59fe0bf939535d78829f39ca66f7afb73b17b54973a473e4 not found: ID does not exist" containerID="e013210a44e8e59b59fe0bf939535d78829f39ca66f7afb73b17b54973a473e4" Oct 08 08:58:44 crc kubenswrapper[4958]: I1008 08:58:44.355565 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e013210a44e8e59b59fe0bf939535d78829f39ca66f7afb73b17b54973a473e4"} err="failed to get container status \"e013210a44e8e59b59fe0bf939535d78829f39ca66f7afb73b17b54973a473e4\": rpc error: code = NotFound desc = could not find container \"e013210a44e8e59b59fe0bf939535d78829f39ca66f7afb73b17b54973a473e4\": container with ID starting with e013210a44e8e59b59fe0bf939535d78829f39ca66f7afb73b17b54973a473e4 not found: ID does not exist" Oct 08 08:58:44 crc kubenswrapper[4958]: I1008 08:58:44.355602 4958 scope.go:117] "RemoveContainer" containerID="f5d2c833b990efe396f4f21e418f409f52079321180b8e9318fa1962ac2ee4ed" Oct 08 08:58:44 crc kubenswrapper[4958]: E1008 08:58:44.356251 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5d2c833b990efe396f4f21e418f409f52079321180b8e9318fa1962ac2ee4ed\": container with ID starting with f5d2c833b990efe396f4f21e418f409f52079321180b8e9318fa1962ac2ee4ed not found: ID does not exist" containerID="f5d2c833b990efe396f4f21e418f409f52079321180b8e9318fa1962ac2ee4ed" Oct 08 08:58:44 crc kubenswrapper[4958]: I1008 08:58:44.356292 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5d2c833b990efe396f4f21e418f409f52079321180b8e9318fa1962ac2ee4ed"} err="failed to get container status \"f5d2c833b990efe396f4f21e418f409f52079321180b8e9318fa1962ac2ee4ed\": rpc error: code = NotFound desc = could not find container \"f5d2c833b990efe396f4f21e418f409f52079321180b8e9318fa1962ac2ee4ed\": container with ID starting with f5d2c833b990efe396f4f21e418f409f52079321180b8e9318fa1962ac2ee4ed not found: ID does not exist" Oct 08 08:58:44 crc kubenswrapper[4958]: I1008 08:58:44.356320 4958 scope.go:117] "RemoveContainer" containerID="604842172cee9eaa19b2d0edf8f0645969ce85180a92f1146ec0b50713da83a8" Oct 08 08:58:44 crc kubenswrapper[4958]: E1008 
08:58:44.356746 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"604842172cee9eaa19b2d0edf8f0645969ce85180a92f1146ec0b50713da83a8\": container with ID starting with 604842172cee9eaa19b2d0edf8f0645969ce85180a92f1146ec0b50713da83a8 not found: ID does not exist" containerID="604842172cee9eaa19b2d0edf8f0645969ce85180a92f1146ec0b50713da83a8" Oct 08 08:58:44 crc kubenswrapper[4958]: I1008 08:58:44.356807 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"604842172cee9eaa19b2d0edf8f0645969ce85180a92f1146ec0b50713da83a8"} err="failed to get container status \"604842172cee9eaa19b2d0edf8f0645969ce85180a92f1146ec0b50713da83a8\": rpc error: code = NotFound desc = could not find container \"604842172cee9eaa19b2d0edf8f0645969ce85180a92f1146ec0b50713da83a8\": container with ID starting with 604842172cee9eaa19b2d0edf8f0645969ce85180a92f1146ec0b50713da83a8 not found: ID does not exist" Oct 08 08:58:45 crc kubenswrapper[4958]: I1008 08:58:45.591834 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d792e4c-53ce-4d8e-a1a8-98d3fd875810" path="/var/lib/kubelet/pods/2d792e4c-53ce-4d8e-a1a8-98d3fd875810/volumes" Oct 08 08:58:48 crc kubenswrapper[4958]: I1008 08:58:48.960652 4958 scope.go:117] "RemoveContainer" containerID="63b23bb5bc14e7184d053260545fec042a8c3effa209d4e800c8f7c1a56092ca" Oct 08 08:58:49 crc kubenswrapper[4958]: I1008 08:58:49.001228 4958 scope.go:117] "RemoveContainer" containerID="4d5ad88b23cdad91b979874745420265d6b5ef69a410e76b718e2ba188026c0e" Oct 08 08:58:49 crc kubenswrapper[4958]: I1008 08:58:49.091767 4958 scope.go:117] "RemoveContainer" containerID="02f2f9489a649e1f06825fd5533b4e13ebc168a3a10b1fb79c64ddda79320549" Oct 08 08:59:06 crc kubenswrapper[4958]: I1008 08:59:06.845306 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 08:59:06 crc kubenswrapper[4958]: I1008 08:59:06.846243 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 08:59:06 crc kubenswrapper[4958]: I1008 08:59:06.846318 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 08:59:06 crc kubenswrapper[4958]: I1008 08:59:06.847679 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c096c5b6acc70b4eb4a1c3d42857165f0cb09d28f320bf0fd4b6ce81f76cc35e"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 08:59:06 crc kubenswrapper[4958]: I1008 08:59:06.847784 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://c096c5b6acc70b4eb4a1c3d42857165f0cb09d28f320bf0fd4b6ce81f76cc35e" gracePeriod=600 Oct 08 08:59:07 crc kubenswrapper[4958]: I1008 08:59:07.479383 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="c096c5b6acc70b4eb4a1c3d42857165f0cb09d28f320bf0fd4b6ce81f76cc35e" exitCode=0 Oct 08 08:59:07 crc kubenswrapper[4958]: I1008 08:59:07.479448 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"c096c5b6acc70b4eb4a1c3d42857165f0cb09d28f320bf0fd4b6ce81f76cc35e"} Oct 08 08:59:07 crc kubenswrapper[4958]: I1008 08:59:07.479755 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e"} Oct 08 08:59:07 crc kubenswrapper[4958]: I1008 08:59:07.479783 4958 scope.go:117] "RemoveContainer" containerID="8e2f0609b390ee9748aee28486cb12bc41f4dc419101d6b417f242a566654dbd" Oct 08 08:59:12 crc kubenswrapper[4958]: I1008 08:59:12.546037 4958 generic.go:334] "Generic (PLEG): container finished" podID="6698d0f1-ad89-4315-9970-fbe6b2b9a7e9" containerID="568f50dc2ccf06b5dbc9b295e014a7fc592f71216abbdde90f8dceeee6db8d23" exitCode=0 Oct 08 08:59:12 crc kubenswrapper[4958]: I1008 08:59:12.546115 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" event={"ID":"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9","Type":"ContainerDied","Data":"568f50dc2ccf06b5dbc9b295e014a7fc592f71216abbdde90f8dceeee6db8d23"} Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.059290 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.140796 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-inventory\") pod \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.140844 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-combined-ca-bundle\") pod \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.140976 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-migration-ssh-key-1\") pod \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.141050 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ksvt\" (UniqueName: \"kubernetes.io/projected/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-kube-api-access-7ksvt\") pod \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.141070 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-compute-config-1\") pod \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.141094 4958 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-compute-config-0\") pod \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.141154 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-migration-ssh-key-0\") pod \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.141193 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-ssh-key\") pod \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.141227 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cells-global-config-0\") pod \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\" (UID: \"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9\") " Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.147494 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9" (UID: "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.152161 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-kube-api-access-7ksvt" (OuterVolumeSpecName: "kube-api-access-7ksvt") pod "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9" (UID: "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9"). InnerVolumeSpecName "kube-api-access-7ksvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.174493 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9" (UID: "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.186047 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9" (UID: "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.187000 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9" (UID: "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.188773 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9" (UID: "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.189943 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9" (UID: "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.197397 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9" (UID: "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.201036 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-inventory" (OuterVolumeSpecName: "inventory") pod "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9" (UID: "6698d0f1-ad89-4315-9970-fbe6b2b9a7e9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.245134 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.245282 4958 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.245408 4958 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.245510 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ksvt\" (UniqueName: \"kubernetes.io/projected/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-kube-api-access-7ksvt\") on node \"crc\" DevicePath \"\"" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.245599 4958 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.245680 4958 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.245756 4958 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.245839 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.245969 4958 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/6698d0f1-ad89-4315-9970-fbe6b2b9a7e9-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.577038 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.577071 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-7qrs9" event={"ID":"6698d0f1-ad89-4315-9970-fbe6b2b9a7e9","Type":"ContainerDied","Data":"14c9f9648787d82e3fd79686d855f6dd187cfe6a484d9cc9150b75053b55e311"} Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.577688 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14c9f9648787d82e3fd79686d855f6dd187cfe6a484d9cc9150b75053b55e311" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.789332 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-5ggmb"] Oct 08 08:59:14 crc kubenswrapper[4958]: E1008 08:59:14.789816 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d792e4c-53ce-4d8e-a1a8-98d3fd875810" containerName="extract-content" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.789838 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d792e4c-53ce-4d8e-a1a8-98d3fd875810" containerName="extract-content" Oct 08 08:59:14 
crc kubenswrapper[4958]: E1008 08:59:14.789868 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d792e4c-53ce-4d8e-a1a8-98d3fd875810" containerName="registry-server" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.789877 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d792e4c-53ce-4d8e-a1a8-98d3fd875810" containerName="registry-server" Oct 08 08:59:14 crc kubenswrapper[4958]: E1008 08:59:14.789894 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6698d0f1-ad89-4315-9970-fbe6b2b9a7e9" containerName="nova-cell1-openstack-openstack-cell1" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.789903 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6698d0f1-ad89-4315-9970-fbe6b2b9a7e9" containerName="nova-cell1-openstack-openstack-cell1" Oct 08 08:59:14 crc kubenswrapper[4958]: E1008 08:59:14.792054 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d792e4c-53ce-4d8e-a1a8-98d3fd875810" containerName="extract-utilities" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.792100 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d792e4c-53ce-4d8e-a1a8-98d3fd875810" containerName="extract-utilities" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.792535 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6698d0f1-ad89-4315-9970-fbe6b2b9a7e9" containerName="nova-cell1-openstack-openstack-cell1" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.792574 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d792e4c-53ce-4d8e-a1a8-98d3fd875810" containerName="registry-server" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.793369 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.796181 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.796436 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.796592 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.796609 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.797164 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.811399 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-5ggmb"] Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.859443 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-inventory\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.859636 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " 
pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.859732 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rtw2\" (UniqueName: \"kubernetes.io/projected/37c235bc-99b8-41ad-a6e5-e735be206363-kube-api-access-4rtw2\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.859893 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.860020 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ssh-key\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.860099 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.860271 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.961789 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.961897 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.962057 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-inventory\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.962222 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: 
\"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.962685 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rtw2\" (UniqueName: \"kubernetes.io/projected/37c235bc-99b8-41ad-a6e5-e735be206363-kube-api-access-4rtw2\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.962734 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.962763 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ssh-key\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.966511 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.966940 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.967201 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-inventory\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.968262 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ssh-key\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.972579 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.978670 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:14 crc kubenswrapper[4958]: I1008 08:59:14.980400 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rtw2\" (UniqueName: \"kubernetes.io/projected/37c235bc-99b8-41ad-a6e5-e735be206363-kube-api-access-4rtw2\") pod \"telemetry-openstack-openstack-cell1-5ggmb\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:15 crc kubenswrapper[4958]: I1008 08:59:15.128761 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 08:59:15 crc kubenswrapper[4958]: I1008 08:59:15.761046 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-5ggmb"] Oct 08 08:59:15 crc kubenswrapper[4958]: W1008 08:59:15.762437 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37c235bc_99b8_41ad_a6e5_e735be206363.slice/crio-47377d195183f804d540b12784de834537029d6e8a81979100769434af81ad86 WatchSource:0}: Error finding container 47377d195183f804d540b12784de834537029d6e8a81979100769434af81ad86: Status 404 returned error can't find the container with id 47377d195183f804d540b12784de834537029d6e8a81979100769434af81ad86 Oct 08 08:59:16 crc kubenswrapper[4958]: I1008 08:59:16.619775 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" event={"ID":"37c235bc-99b8-41ad-a6e5-e735be206363","Type":"ContainerStarted","Data":"9626b9c7a9feb1a94e6e84a0e48cfd1c83d253933157537c3f9f8a54ac15e69c"} Oct 08 08:59:16 crc kubenswrapper[4958]: I1008 08:59:16.620276 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" event={"ID":"37c235bc-99b8-41ad-a6e5-e735be206363","Type":"ContainerStarted","Data":"47377d195183f804d540b12784de834537029d6e8a81979100769434af81ad86"} Oct 08 08:59:16 crc kubenswrapper[4958]: I1008 08:59:16.648879 4958 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" podStartSLOduration=2.250667881 podStartE2EDuration="2.648828693s" podCreationTimestamp="2025-10-08 08:59:14 +0000 UTC" firstStartedPulling="2025-10-08 08:59:15.765939953 +0000 UTC m=+8698.895632554" lastFinishedPulling="2025-10-08 08:59:16.164100765 +0000 UTC m=+8699.293793366" observedRunningTime="2025-10-08 08:59:16.641775581 +0000 UTC m=+8699.771468202" watchObservedRunningTime="2025-10-08 08:59:16.648828693 +0000 UTC m=+8699.778521314" Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.160201 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5"] Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.163152 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.165133 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.165453 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.176297 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5"] Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.207419 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-config-volume\") pod \"collect-profiles-29331900-pnxp5\" (UID: \"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.207615 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-secret-volume\") pod \"collect-profiles-29331900-pnxp5\" (UID: \"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.207701 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2qhg\" (UniqueName: \"kubernetes.io/projected/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-kube-api-access-g2qhg\") pod \"collect-profiles-29331900-pnxp5\" (UID: \"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.309734 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-config-volume\") pod \"collect-profiles-29331900-pnxp5\" (UID: \"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.309868 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-secret-volume\") pod \"collect-profiles-29331900-pnxp5\" (UID: \"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.309975 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2qhg\" (UniqueName: 
\"kubernetes.io/projected/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-kube-api-access-g2qhg\") pod \"collect-profiles-29331900-pnxp5\" (UID: \"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.311306 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-config-volume\") pod \"collect-profiles-29331900-pnxp5\" (UID: \"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.315808 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-secret-volume\") pod \"collect-profiles-29331900-pnxp5\" (UID: \"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.332073 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2qhg\" (UniqueName: \"kubernetes.io/projected/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-kube-api-access-g2qhg\") pod \"collect-profiles-29331900-pnxp5\" (UID: \"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.485886 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" Oct 08 09:00:00 crc kubenswrapper[4958]: W1008 09:00:00.960890 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71ab68b7_aff0_4762_9e6b_1abb5ea3e0ea.slice/crio-1dc52213ec7115d3fc1a90b848866f89821aa7085f49afb61ad0e5a0cd14628c WatchSource:0}: Error finding container 1dc52213ec7115d3fc1a90b848866f89821aa7085f49afb61ad0e5a0cd14628c: Status 404 returned error can't find the container with id 1dc52213ec7115d3fc1a90b848866f89821aa7085f49afb61ad0e5a0cd14628c Oct 08 09:00:00 crc kubenswrapper[4958]: I1008 09:00:00.961853 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5"] Oct 08 09:00:01 crc kubenswrapper[4958]: I1008 09:00:01.226240 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" event={"ID":"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea","Type":"ContainerStarted","Data":"11bc3a32d6a2e9301509358f9562c0a80ce696c4c0cee5cf291c7b763c403aec"} Oct 08 09:00:01 crc kubenswrapper[4958]: I1008 09:00:01.226279 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" event={"ID":"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea","Type":"ContainerStarted","Data":"1dc52213ec7115d3fc1a90b848866f89821aa7085f49afb61ad0e5a0cd14628c"} Oct 08 09:00:01 crc kubenswrapper[4958]: I1008 09:00:01.245361 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" podStartSLOduration=1.245341371 podStartE2EDuration="1.245341371s" podCreationTimestamp="2025-10-08 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 
09:00:01.242550935 +0000 UTC m=+8744.372243556" watchObservedRunningTime="2025-10-08 09:00:01.245341371 +0000 UTC m=+8744.375033972" Oct 08 09:00:02 crc kubenswrapper[4958]: I1008 09:00:02.245893 4958 generic.go:334] "Generic (PLEG): container finished" podID="71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea" containerID="11bc3a32d6a2e9301509358f9562c0a80ce696c4c0cee5cf291c7b763c403aec" exitCode=0 Oct 08 09:00:02 crc kubenswrapper[4958]: I1008 09:00:02.246119 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" event={"ID":"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea","Type":"ContainerDied","Data":"11bc3a32d6a2e9301509358f9562c0a80ce696c4c0cee5cf291c7b763c403aec"} Oct 08 09:00:03 crc kubenswrapper[4958]: I1008 09:00:03.609905 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" Oct 08 09:00:03 crc kubenswrapper[4958]: I1008 09:00:03.692350 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2qhg\" (UniqueName: \"kubernetes.io/projected/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-kube-api-access-g2qhg\") pod \"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea\" (UID: \"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea\") " Oct 08 09:00:03 crc kubenswrapper[4958]: I1008 09:00:03.692415 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-config-volume\") pod \"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea\" (UID: \"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea\") " Oct 08 09:00:03 crc kubenswrapper[4958]: I1008 09:00:03.692755 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-secret-volume\") pod \"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea\" (UID: 
\"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea\") " Oct 08 09:00:03 crc kubenswrapper[4958]: I1008 09:00:03.693892 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-config-volume" (OuterVolumeSpecName: "config-volume") pod "71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea" (UID: "71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 09:00:03 crc kubenswrapper[4958]: I1008 09:00:03.698509 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-kube-api-access-g2qhg" (OuterVolumeSpecName: "kube-api-access-g2qhg") pod "71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea" (UID: "71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea"). InnerVolumeSpecName "kube-api-access-g2qhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:00:03 crc kubenswrapper[4958]: I1008 09:00:03.700360 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea" (UID: "71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:00:03 crc kubenswrapper[4958]: I1008 09:00:03.795872 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 09:00:03 crc kubenswrapper[4958]: I1008 09:00:03.795915 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2qhg\" (UniqueName: \"kubernetes.io/projected/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-kube-api-access-g2qhg\") on node \"crc\" DevicePath \"\"" Oct 08 09:00:03 crc kubenswrapper[4958]: I1008 09:00:03.795928 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 09:00:04 crc kubenswrapper[4958]: I1008 09:00:04.276863 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" event={"ID":"71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea","Type":"ContainerDied","Data":"1dc52213ec7115d3fc1a90b848866f89821aa7085f49afb61ad0e5a0cd14628c"} Oct 08 09:00:04 crc kubenswrapper[4958]: I1008 09:00:04.276901 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dc52213ec7115d3fc1a90b848866f89821aa7085f49afb61ad0e5a0cd14628c" Oct 08 09:00:04 crc kubenswrapper[4958]: I1008 09:00:04.276917 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331900-pnxp5" Oct 08 09:00:04 crc kubenswrapper[4958]: I1008 09:00:04.350245 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4"] Oct 08 09:00:04 crc kubenswrapper[4958]: I1008 09:00:04.362374 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331855-tqcw4"] Oct 08 09:00:05 crc kubenswrapper[4958]: I1008 09:00:05.589572 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4f6f528-cc42-4c5b-a0b1-7440e503f188" path="/var/lib/kubelet/pods/d4f6f528-cc42-4c5b-a0b1-7440e503f188/volumes" Oct 08 09:00:49 crc kubenswrapper[4958]: I1008 09:00:49.269278 4958 scope.go:117] "RemoveContainer" containerID="3f7b08372b887a224468086332697419e8452a13dde277d3db15efc35bd3d50e" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.153994 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29331901-zk49x"] Oct 08 09:01:00 crc kubenswrapper[4958]: E1008 09:01:00.155061 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea" containerName="collect-profiles" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.155076 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea" containerName="collect-profiles" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.155326 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ab68b7-aff0-4762-9e6b-1abb5ea3e0ea" containerName="collect-profiles" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.156314 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.178497 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29331901-zk49x"] Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.222991 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-combined-ca-bundle\") pod \"keystone-cron-29331901-zk49x\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.223318 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn4wr\" (UniqueName: \"kubernetes.io/projected/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-kube-api-access-zn4wr\") pod \"keystone-cron-29331901-zk49x\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.223452 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-fernet-keys\") pod \"keystone-cron-29331901-zk49x\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.223761 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-config-data\") pod \"keystone-cron-29331901-zk49x\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.326140 4958 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-combined-ca-bundle\") pod \"keystone-cron-29331901-zk49x\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.326332 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn4wr\" (UniqueName: \"kubernetes.io/projected/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-kube-api-access-zn4wr\") pod \"keystone-cron-29331901-zk49x\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.326403 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-fernet-keys\") pod \"keystone-cron-29331901-zk49x\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.326522 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-config-data\") pod \"keystone-cron-29331901-zk49x\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.333837 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-combined-ca-bundle\") pod \"keystone-cron-29331901-zk49x\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.335172 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-config-data\") pod \"keystone-cron-29331901-zk49x\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.335583 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-fernet-keys\") pod \"keystone-cron-29331901-zk49x\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.357475 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn4wr\" (UniqueName: \"kubernetes.io/projected/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-kube-api-access-zn4wr\") pod \"keystone-cron-29331901-zk49x\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:00 crc kubenswrapper[4958]: I1008 09:01:00.488203 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:01 crc kubenswrapper[4958]: I1008 09:01:01.059017 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29331901-zk49x"] Oct 08 09:01:01 crc kubenswrapper[4958]: W1008 09:01:01.070794 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode230b221_1e6c_4ce3_8bfa_ed276364b4f9.slice/crio-b46b8db148a00956d2d8e6f1da5308e2783b01c8ad2933cb4c2a917103fe5527 WatchSource:0}: Error finding container b46b8db148a00956d2d8e6f1da5308e2783b01c8ad2933cb4c2a917103fe5527: Status 404 returned error can't find the container with id b46b8db148a00956d2d8e6f1da5308e2783b01c8ad2933cb4c2a917103fe5527 Oct 08 09:01:02 crc kubenswrapper[4958]: I1008 09:01:02.033773 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29331901-zk49x" event={"ID":"e230b221-1e6c-4ce3-8bfa-ed276364b4f9","Type":"ContainerStarted","Data":"06a9dcff2621e3197c512647f8fc1bba09624095a0e22104d5c187fe530d212c"} Oct 08 09:01:02 crc kubenswrapper[4958]: I1008 09:01:02.034200 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29331901-zk49x" event={"ID":"e230b221-1e6c-4ce3-8bfa-ed276364b4f9","Type":"ContainerStarted","Data":"b46b8db148a00956d2d8e6f1da5308e2783b01c8ad2933cb4c2a917103fe5527"} Oct 08 09:01:02 crc kubenswrapper[4958]: I1008 09:01:02.070299 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29331901-zk49x" podStartSLOduration=2.070268091 podStartE2EDuration="2.070268091s" podCreationTimestamp="2025-10-08 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 09:01:02.060675041 +0000 UTC m=+8805.190367682" watchObservedRunningTime="2025-10-08 09:01:02.070268091 +0000 UTC m=+8805.199960722" Oct 08 09:01:05 crc 
kubenswrapper[4958]: I1008 09:01:05.067855 4958 generic.go:334] "Generic (PLEG): container finished" podID="e230b221-1e6c-4ce3-8bfa-ed276364b4f9" containerID="06a9dcff2621e3197c512647f8fc1bba09624095a0e22104d5c187fe530d212c" exitCode=0 Oct 08 09:01:05 crc kubenswrapper[4958]: I1008 09:01:05.068009 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29331901-zk49x" event={"ID":"e230b221-1e6c-4ce3-8bfa-ed276364b4f9","Type":"ContainerDied","Data":"06a9dcff2621e3197c512647f8fc1bba09624095a0e22104d5c187fe530d212c"} Oct 08 09:01:06 crc kubenswrapper[4958]: I1008 09:01:06.504569 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:06 crc kubenswrapper[4958]: I1008 09:01:06.679186 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-fernet-keys\") pod \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " Oct 08 09:01:06 crc kubenswrapper[4958]: I1008 09:01:06.679336 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-combined-ca-bundle\") pod \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " Oct 08 09:01:06 crc kubenswrapper[4958]: I1008 09:01:06.679392 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-config-data\") pod \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " Oct 08 09:01:06 crc kubenswrapper[4958]: I1008 09:01:06.679425 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn4wr\" (UniqueName: 
\"kubernetes.io/projected/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-kube-api-access-zn4wr\") pod \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\" (UID: \"e230b221-1e6c-4ce3-8bfa-ed276364b4f9\") " Oct 08 09:01:06 crc kubenswrapper[4958]: I1008 09:01:06.687087 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-kube-api-access-zn4wr" (OuterVolumeSpecName: "kube-api-access-zn4wr") pod "e230b221-1e6c-4ce3-8bfa-ed276364b4f9" (UID: "e230b221-1e6c-4ce3-8bfa-ed276364b4f9"). InnerVolumeSpecName "kube-api-access-zn4wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:01:06 crc kubenswrapper[4958]: I1008 09:01:06.697124 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e230b221-1e6c-4ce3-8bfa-ed276364b4f9" (UID: "e230b221-1e6c-4ce3-8bfa-ed276364b4f9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:01:06 crc kubenswrapper[4958]: I1008 09:01:06.733322 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e230b221-1e6c-4ce3-8bfa-ed276364b4f9" (UID: "e230b221-1e6c-4ce3-8bfa-ed276364b4f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:01:06 crc kubenswrapper[4958]: I1008 09:01:06.765158 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-config-data" (OuterVolumeSpecName: "config-data") pod "e230b221-1e6c-4ce3-8bfa-ed276364b4f9" (UID: "e230b221-1e6c-4ce3-8bfa-ed276364b4f9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:01:06 crc kubenswrapper[4958]: I1008 09:01:06.781924 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 09:01:06 crc kubenswrapper[4958]: I1008 09:01:06.781978 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn4wr\" (UniqueName: \"kubernetes.io/projected/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-kube-api-access-zn4wr\") on node \"crc\" DevicePath \"\"" Oct 08 09:01:06 crc kubenswrapper[4958]: I1008 09:01:06.781994 4958 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 08 09:01:06 crc kubenswrapper[4958]: I1008 09:01:06.782006 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e230b221-1e6c-4ce3-8bfa-ed276364b4f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 09:01:07 crc kubenswrapper[4958]: I1008 09:01:07.105253 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29331901-zk49x" event={"ID":"e230b221-1e6c-4ce3-8bfa-ed276364b4f9","Type":"ContainerDied","Data":"b46b8db148a00956d2d8e6f1da5308e2783b01c8ad2933cb4c2a917103fe5527"} Oct 08 09:01:07 crc kubenswrapper[4958]: I1008 09:01:07.105706 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b46b8db148a00956d2d8e6f1da5308e2783b01c8ad2933cb4c2a917103fe5527" Oct 08 09:01:07 crc kubenswrapper[4958]: I1008 09:01:07.105807 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29331901-zk49x" Oct 08 09:01:36 crc kubenswrapper[4958]: I1008 09:01:36.845538 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:01:36 crc kubenswrapper[4958]: I1008 09:01:36.846211 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:02:06 crc kubenswrapper[4958]: I1008 09:02:06.845062 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:02:06 crc kubenswrapper[4958]: I1008 09:02:06.845807 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:02:36 crc kubenswrapper[4958]: I1008 09:02:36.846043 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:02:36 crc kubenswrapper[4958]: I1008 09:02:36.846838 4958 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:02:36 crc kubenswrapper[4958]: I1008 09:02:36.848801 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 09:02:36 crc kubenswrapper[4958]: I1008 09:02:36.850357 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 09:02:36 crc kubenswrapper[4958]: I1008 09:02:36.850445 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" gracePeriod=600 Oct 08 09:02:36 crc kubenswrapper[4958]: E1008 09:02:36.981015 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:02:37 crc kubenswrapper[4958]: I1008 09:02:37.214119 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" exitCode=0 Oct 08 09:02:37 crc kubenswrapper[4958]: I1008 09:02:37.214176 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e"} Oct 08 09:02:37 crc kubenswrapper[4958]: I1008 09:02:37.214224 4958 scope.go:117] "RemoveContainer" containerID="c096c5b6acc70b4eb4a1c3d42857165f0cb09d28f320bf0fd4b6ce81f76cc35e" Oct 08 09:02:37 crc kubenswrapper[4958]: I1008 09:02:37.215189 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:02:37 crc kubenswrapper[4958]: E1008 09:02:37.215855 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:02:52 crc kubenswrapper[4958]: I1008 09:02:52.576661 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:02:52 crc kubenswrapper[4958]: E1008 09:02:52.577419 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 
09:03:03 crc kubenswrapper[4958]: I1008 09:03:03.578886 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:03:03 crc kubenswrapper[4958]: E1008 09:03:03.580116 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:03:06 crc kubenswrapper[4958]: I1008 09:03:06.240736 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pz67b"] Oct 08 09:03:06 crc kubenswrapper[4958]: E1008 09:03:06.241877 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e230b221-1e6c-4ce3-8bfa-ed276364b4f9" containerName="keystone-cron" Oct 08 09:03:06 crc kubenswrapper[4958]: I1008 09:03:06.241902 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e230b221-1e6c-4ce3-8bfa-ed276364b4f9" containerName="keystone-cron" Oct 08 09:03:06 crc kubenswrapper[4958]: I1008 09:03:06.242238 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e230b221-1e6c-4ce3-8bfa-ed276364b4f9" containerName="keystone-cron" Oct 08 09:03:06 crc kubenswrapper[4958]: I1008 09:03:06.244866 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:06 crc kubenswrapper[4958]: I1008 09:03:06.260141 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pz67b"] Oct 08 09:03:06 crc kubenswrapper[4958]: I1008 09:03:06.333060 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ccbc73-3440-4a6a-8a83-ddf88681f24e-catalog-content\") pod \"community-operators-pz67b\" (UID: \"83ccbc73-3440-4a6a-8a83-ddf88681f24e\") " pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:06 crc kubenswrapper[4958]: I1008 09:03:06.333122 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ccbc73-3440-4a6a-8a83-ddf88681f24e-utilities\") pod \"community-operators-pz67b\" (UID: \"83ccbc73-3440-4a6a-8a83-ddf88681f24e\") " pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:06 crc kubenswrapper[4958]: I1008 09:03:06.333558 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6d44\" (UniqueName: \"kubernetes.io/projected/83ccbc73-3440-4a6a-8a83-ddf88681f24e-kube-api-access-z6d44\") pod \"community-operators-pz67b\" (UID: \"83ccbc73-3440-4a6a-8a83-ddf88681f24e\") " pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:06 crc kubenswrapper[4958]: I1008 09:03:06.435838 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ccbc73-3440-4a6a-8a83-ddf88681f24e-catalog-content\") pod \"community-operators-pz67b\" (UID: \"83ccbc73-3440-4a6a-8a83-ddf88681f24e\") " pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:06 crc kubenswrapper[4958]: I1008 09:03:06.435880 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ccbc73-3440-4a6a-8a83-ddf88681f24e-utilities\") pod \"community-operators-pz67b\" (UID: \"83ccbc73-3440-4a6a-8a83-ddf88681f24e\") " pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:06 crc kubenswrapper[4958]: I1008 09:03:06.436005 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6d44\" (UniqueName: \"kubernetes.io/projected/83ccbc73-3440-4a6a-8a83-ddf88681f24e-kube-api-access-z6d44\") pod \"community-operators-pz67b\" (UID: \"83ccbc73-3440-4a6a-8a83-ddf88681f24e\") " pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:06 crc kubenswrapper[4958]: I1008 09:03:06.436742 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ccbc73-3440-4a6a-8a83-ddf88681f24e-utilities\") pod \"community-operators-pz67b\" (UID: \"83ccbc73-3440-4a6a-8a83-ddf88681f24e\") " pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:06 crc kubenswrapper[4958]: I1008 09:03:06.436784 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ccbc73-3440-4a6a-8a83-ddf88681f24e-catalog-content\") pod \"community-operators-pz67b\" (UID: \"83ccbc73-3440-4a6a-8a83-ddf88681f24e\") " pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:06 crc kubenswrapper[4958]: I1008 09:03:06.469059 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6d44\" (UniqueName: \"kubernetes.io/projected/83ccbc73-3440-4a6a-8a83-ddf88681f24e-kube-api-access-z6d44\") pod \"community-operators-pz67b\" (UID: \"83ccbc73-3440-4a6a-8a83-ddf88681f24e\") " pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:06 crc kubenswrapper[4958]: I1008 09:03:06.584704 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:07 crc kubenswrapper[4958]: I1008 09:03:07.122601 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pz67b"] Oct 08 09:03:07 crc kubenswrapper[4958]: W1008 09:03:07.129387 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83ccbc73_3440_4a6a_8a83_ddf88681f24e.slice/crio-fb7c83472dea45fb1bf672ab5542dd564ef0e2a59dd4036dde84ee9157e9abe4 WatchSource:0}: Error finding container fb7c83472dea45fb1bf672ab5542dd564ef0e2a59dd4036dde84ee9157e9abe4: Status 404 returned error can't find the container with id fb7c83472dea45fb1bf672ab5542dd564ef0e2a59dd4036dde84ee9157e9abe4 Oct 08 09:03:07 crc kubenswrapper[4958]: I1008 09:03:07.619562 4958 generic.go:334] "Generic (PLEG): container finished" podID="83ccbc73-3440-4a6a-8a83-ddf88681f24e" containerID="a355eef78955d510d0e344228a0dc4b7f9bb63a23556bc96c58980b97212c5d4" exitCode=0 Oct 08 09:03:07 crc kubenswrapper[4958]: I1008 09:03:07.619653 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pz67b" event={"ID":"83ccbc73-3440-4a6a-8a83-ddf88681f24e","Type":"ContainerDied","Data":"a355eef78955d510d0e344228a0dc4b7f9bb63a23556bc96c58980b97212c5d4"} Oct 08 09:03:07 crc kubenswrapper[4958]: I1008 09:03:07.619907 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pz67b" event={"ID":"83ccbc73-3440-4a6a-8a83-ddf88681f24e","Type":"ContainerStarted","Data":"fb7c83472dea45fb1bf672ab5542dd564ef0e2a59dd4036dde84ee9157e9abe4"} Oct 08 09:03:07 crc kubenswrapper[4958]: I1008 09:03:07.625670 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 09:03:09 crc kubenswrapper[4958]: I1008 09:03:09.648170 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-pz67b" event={"ID":"83ccbc73-3440-4a6a-8a83-ddf88681f24e","Type":"ContainerStarted","Data":"5be857d632067b037a751c4971a8f69639ce1c31863104bcc7f31038db4477e2"} Oct 08 09:03:10 crc kubenswrapper[4958]: I1008 09:03:10.659778 4958 generic.go:334] "Generic (PLEG): container finished" podID="83ccbc73-3440-4a6a-8a83-ddf88681f24e" containerID="5be857d632067b037a751c4971a8f69639ce1c31863104bcc7f31038db4477e2" exitCode=0 Oct 08 09:03:10 crc kubenswrapper[4958]: I1008 09:03:10.659869 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pz67b" event={"ID":"83ccbc73-3440-4a6a-8a83-ddf88681f24e","Type":"ContainerDied","Data":"5be857d632067b037a751c4971a8f69639ce1c31863104bcc7f31038db4477e2"} Oct 08 09:03:11 crc kubenswrapper[4958]: I1008 09:03:11.670767 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pz67b" event={"ID":"83ccbc73-3440-4a6a-8a83-ddf88681f24e","Type":"ContainerStarted","Data":"7a525597193c0e90f87a126d2dee912da5ae8b373a85e89a0fe1bfaeb8fbafd2"} Oct 08 09:03:11 crc kubenswrapper[4958]: I1008 09:03:11.696443 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pz67b" podStartSLOduration=2.265392612 podStartE2EDuration="5.696420253s" podCreationTimestamp="2025-10-08 09:03:06 +0000 UTC" firstStartedPulling="2025-10-08 09:03:07.625031836 +0000 UTC m=+8930.754724457" lastFinishedPulling="2025-10-08 09:03:11.056059497 +0000 UTC m=+8934.185752098" observedRunningTime="2025-10-08 09:03:11.688008155 +0000 UTC m=+8934.817700756" watchObservedRunningTime="2025-10-08 09:03:11.696420253 +0000 UTC m=+8934.826112854" Oct 08 09:03:16 crc kubenswrapper[4958]: I1008 09:03:16.579907 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:03:16 crc kubenswrapper[4958]: E1008 09:03:16.581002 4958 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:03:16 crc kubenswrapper[4958]: I1008 09:03:16.585279 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:16 crc kubenswrapper[4958]: I1008 09:03:16.585428 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:16 crc kubenswrapper[4958]: I1008 09:03:16.663103 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:16 crc kubenswrapper[4958]: I1008 09:03:16.784791 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:16 crc kubenswrapper[4958]: I1008 09:03:16.908183 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pz67b"] Oct 08 09:03:18 crc kubenswrapper[4958]: I1008 09:03:18.744806 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pz67b" podUID="83ccbc73-3440-4a6a-8a83-ddf88681f24e" containerName="registry-server" containerID="cri-o://7a525597193c0e90f87a126d2dee912da5ae8b373a85e89a0fe1bfaeb8fbafd2" gracePeriod=2 Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.369593 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.454532 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6d44\" (UniqueName: \"kubernetes.io/projected/83ccbc73-3440-4a6a-8a83-ddf88681f24e-kube-api-access-z6d44\") pod \"83ccbc73-3440-4a6a-8a83-ddf88681f24e\" (UID: \"83ccbc73-3440-4a6a-8a83-ddf88681f24e\") " Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.454646 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ccbc73-3440-4a6a-8a83-ddf88681f24e-catalog-content\") pod \"83ccbc73-3440-4a6a-8a83-ddf88681f24e\" (UID: \"83ccbc73-3440-4a6a-8a83-ddf88681f24e\") " Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.454731 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ccbc73-3440-4a6a-8a83-ddf88681f24e-utilities\") pod \"83ccbc73-3440-4a6a-8a83-ddf88681f24e\" (UID: \"83ccbc73-3440-4a6a-8a83-ddf88681f24e\") " Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.455832 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ccbc73-3440-4a6a-8a83-ddf88681f24e-utilities" (OuterVolumeSpecName: "utilities") pod "83ccbc73-3440-4a6a-8a83-ddf88681f24e" (UID: "83ccbc73-3440-4a6a-8a83-ddf88681f24e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.462910 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ccbc73-3440-4a6a-8a83-ddf88681f24e-kube-api-access-z6d44" (OuterVolumeSpecName: "kube-api-access-z6d44") pod "83ccbc73-3440-4a6a-8a83-ddf88681f24e" (UID: "83ccbc73-3440-4a6a-8a83-ddf88681f24e"). InnerVolumeSpecName "kube-api-access-z6d44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.557838 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6d44\" (UniqueName: \"kubernetes.io/projected/83ccbc73-3440-4a6a-8a83-ddf88681f24e-kube-api-access-z6d44\") on node \"crc\" DevicePath \"\"" Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.557883 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ccbc73-3440-4a6a-8a83-ddf88681f24e-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.756618 4958 generic.go:334] "Generic (PLEG): container finished" podID="83ccbc73-3440-4a6a-8a83-ddf88681f24e" containerID="7a525597193c0e90f87a126d2dee912da5ae8b373a85e89a0fe1bfaeb8fbafd2" exitCode=0 Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.756666 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pz67b" event={"ID":"83ccbc73-3440-4a6a-8a83-ddf88681f24e","Type":"ContainerDied","Data":"7a525597193c0e90f87a126d2dee912da5ae8b373a85e89a0fe1bfaeb8fbafd2"} Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.756696 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pz67b" event={"ID":"83ccbc73-3440-4a6a-8a83-ddf88681f24e","Type":"ContainerDied","Data":"fb7c83472dea45fb1bf672ab5542dd564ef0e2a59dd4036dde84ee9157e9abe4"} Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.756719 4958 scope.go:117] "RemoveContainer" containerID="7a525597193c0e90f87a126d2dee912da5ae8b373a85e89a0fe1bfaeb8fbafd2" Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.756911 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pz67b" Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.784376 4958 scope.go:117] "RemoveContainer" containerID="5be857d632067b037a751c4971a8f69639ce1c31863104bcc7f31038db4477e2" Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.806075 4958 scope.go:117] "RemoveContainer" containerID="a355eef78955d510d0e344228a0dc4b7f9bb63a23556bc96c58980b97212c5d4" Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.855358 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ccbc73-3440-4a6a-8a83-ddf88681f24e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83ccbc73-3440-4a6a-8a83-ddf88681f24e" (UID: "83ccbc73-3440-4a6a-8a83-ddf88681f24e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:03:19 crc kubenswrapper[4958]: I1008 09:03:19.866266 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ccbc73-3440-4a6a-8a83-ddf88681f24e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 09:03:20 crc kubenswrapper[4958]: I1008 09:03:20.101298 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pz67b"] Oct 08 09:03:20 crc kubenswrapper[4958]: I1008 09:03:20.114673 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pz67b"] Oct 08 09:03:20 crc kubenswrapper[4958]: I1008 09:03:20.595290 4958 scope.go:117] "RemoveContainer" containerID="7a525597193c0e90f87a126d2dee912da5ae8b373a85e89a0fe1bfaeb8fbafd2" Oct 08 09:03:20 crc kubenswrapper[4958]: E1008 09:03:20.596123 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a525597193c0e90f87a126d2dee912da5ae8b373a85e89a0fe1bfaeb8fbafd2\": container with ID starting with 
7a525597193c0e90f87a126d2dee912da5ae8b373a85e89a0fe1bfaeb8fbafd2 not found: ID does not exist" containerID="7a525597193c0e90f87a126d2dee912da5ae8b373a85e89a0fe1bfaeb8fbafd2" Oct 08 09:03:20 crc kubenswrapper[4958]: I1008 09:03:20.596192 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a525597193c0e90f87a126d2dee912da5ae8b373a85e89a0fe1bfaeb8fbafd2"} err="failed to get container status \"7a525597193c0e90f87a126d2dee912da5ae8b373a85e89a0fe1bfaeb8fbafd2\": rpc error: code = NotFound desc = could not find container \"7a525597193c0e90f87a126d2dee912da5ae8b373a85e89a0fe1bfaeb8fbafd2\": container with ID starting with 7a525597193c0e90f87a126d2dee912da5ae8b373a85e89a0fe1bfaeb8fbafd2 not found: ID does not exist" Oct 08 09:03:20 crc kubenswrapper[4958]: I1008 09:03:20.596219 4958 scope.go:117] "RemoveContainer" containerID="5be857d632067b037a751c4971a8f69639ce1c31863104bcc7f31038db4477e2" Oct 08 09:03:20 crc kubenswrapper[4958]: E1008 09:03:20.596602 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5be857d632067b037a751c4971a8f69639ce1c31863104bcc7f31038db4477e2\": container with ID starting with 5be857d632067b037a751c4971a8f69639ce1c31863104bcc7f31038db4477e2 not found: ID does not exist" containerID="5be857d632067b037a751c4971a8f69639ce1c31863104bcc7f31038db4477e2" Oct 08 09:03:20 crc kubenswrapper[4958]: I1008 09:03:20.596631 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5be857d632067b037a751c4971a8f69639ce1c31863104bcc7f31038db4477e2"} err="failed to get container status \"5be857d632067b037a751c4971a8f69639ce1c31863104bcc7f31038db4477e2\": rpc error: code = NotFound desc = could not find container \"5be857d632067b037a751c4971a8f69639ce1c31863104bcc7f31038db4477e2\": container with ID starting with 5be857d632067b037a751c4971a8f69639ce1c31863104bcc7f31038db4477e2 not found: ID does not 
exist" Oct 08 09:03:20 crc kubenswrapper[4958]: I1008 09:03:20.596659 4958 scope.go:117] "RemoveContainer" containerID="a355eef78955d510d0e344228a0dc4b7f9bb63a23556bc96c58980b97212c5d4" Oct 08 09:03:20 crc kubenswrapper[4958]: E1008 09:03:20.597181 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a355eef78955d510d0e344228a0dc4b7f9bb63a23556bc96c58980b97212c5d4\": container with ID starting with a355eef78955d510d0e344228a0dc4b7f9bb63a23556bc96c58980b97212c5d4 not found: ID does not exist" containerID="a355eef78955d510d0e344228a0dc4b7f9bb63a23556bc96c58980b97212c5d4" Oct 08 09:03:20 crc kubenswrapper[4958]: I1008 09:03:20.597211 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a355eef78955d510d0e344228a0dc4b7f9bb63a23556bc96c58980b97212c5d4"} err="failed to get container status \"a355eef78955d510d0e344228a0dc4b7f9bb63a23556bc96c58980b97212c5d4\": rpc error: code = NotFound desc = could not find container \"a355eef78955d510d0e344228a0dc4b7f9bb63a23556bc96c58980b97212c5d4\": container with ID starting with a355eef78955d510d0e344228a0dc4b7f9bb63a23556bc96c58980b97212c5d4 not found: ID does not exist" Oct 08 09:03:21 crc kubenswrapper[4958]: I1008 09:03:21.596986 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ccbc73-3440-4a6a-8a83-ddf88681f24e" path="/var/lib/kubelet/pods/83ccbc73-3440-4a6a-8a83-ddf88681f24e/volumes" Oct 08 09:03:31 crc kubenswrapper[4958]: I1008 09:03:31.577653 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:03:31 crc kubenswrapper[4958]: E1008 09:03:31.578441 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:03:45 crc kubenswrapper[4958]: I1008 09:03:45.577393 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:03:45 crc kubenswrapper[4958]: E1008 09:03:45.580277 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:03:48 crc kubenswrapper[4958]: I1008 09:03:48.145372 4958 generic.go:334] "Generic (PLEG): container finished" podID="37c235bc-99b8-41ad-a6e5-e735be206363" containerID="9626b9c7a9feb1a94e6e84a0e48cfd1c83d253933157537c3f9f8a54ac15e69c" exitCode=0 Oct 08 09:03:48 crc kubenswrapper[4958]: I1008 09:03:48.145533 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" event={"ID":"37c235bc-99b8-41ad-a6e5-e735be206363","Type":"ContainerDied","Data":"9626b9c7a9feb1a94e6e84a0e48cfd1c83d253933157537c3f9f8a54ac15e69c"} Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.670210 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.701013 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-2\") pod \"37c235bc-99b8-41ad-a6e5-e735be206363\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.701134 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ssh-key\") pod \"37c235bc-99b8-41ad-a6e5-e735be206363\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.701163 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-1\") pod \"37c235bc-99b8-41ad-a6e5-e735be206363\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.701255 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-inventory\") pod \"37c235bc-99b8-41ad-a6e5-e735be206363\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.701340 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rtw2\" (UniqueName: \"kubernetes.io/projected/37c235bc-99b8-41ad-a6e5-e735be206363-kube-api-access-4rtw2\") pod \"37c235bc-99b8-41ad-a6e5-e735be206363\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.701370 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-0\") pod \"37c235bc-99b8-41ad-a6e5-e735be206363\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.701422 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-telemetry-combined-ca-bundle\") pod \"37c235bc-99b8-41ad-a6e5-e735be206363\" (UID: \"37c235bc-99b8-41ad-a6e5-e735be206363\") " Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.708408 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c235bc-99b8-41ad-a6e5-e735be206363-kube-api-access-4rtw2" (OuterVolumeSpecName: "kube-api-access-4rtw2") pod "37c235bc-99b8-41ad-a6e5-e735be206363" (UID: "37c235bc-99b8-41ad-a6e5-e735be206363"). InnerVolumeSpecName "kube-api-access-4rtw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.708747 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "37c235bc-99b8-41ad-a6e5-e735be206363" (UID: "37c235bc-99b8-41ad-a6e5-e735be206363"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.737188 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-inventory" (OuterVolumeSpecName: "inventory") pod "37c235bc-99b8-41ad-a6e5-e735be206363" (UID: "37c235bc-99b8-41ad-a6e5-e735be206363"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.737255 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "37c235bc-99b8-41ad-a6e5-e735be206363" (UID: "37c235bc-99b8-41ad-a6e5-e735be206363"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.746834 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "37c235bc-99b8-41ad-a6e5-e735be206363" (UID: "37c235bc-99b8-41ad-a6e5-e735be206363"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.753741 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "37c235bc-99b8-41ad-a6e5-e735be206363" (UID: "37c235bc-99b8-41ad-a6e5-e735be206363"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.755198 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "37c235bc-99b8-41ad-a6e5-e735be206363" (UID: "37c235bc-99b8-41ad-a6e5-e735be206363"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.804147 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.804192 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rtw2\" (UniqueName: \"kubernetes.io/projected/37c235bc-99b8-41ad-a6e5-e735be206363-kube-api-access-4rtw2\") on node \"crc\" DevicePath \"\"" Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.804214 4958 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.804230 4958 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.804243 4958 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.804256 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 09:03:49 crc kubenswrapper[4958]: I1008 09:03:49.804272 4958 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/37c235bc-99b8-41ad-a6e5-e735be206363-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.175931 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" event={"ID":"37c235bc-99b8-41ad-a6e5-e735be206363","Type":"ContainerDied","Data":"47377d195183f804d540b12784de834537029d6e8a81979100769434af81ad86"} Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.176458 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47377d195183f804d540b12784de834537029d6e8a81979100769434af81ad86" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.176212 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-5ggmb" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.393400 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-nzcg9"] Oct 08 09:03:50 crc kubenswrapper[4958]: E1008 09:03:50.394024 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ccbc73-3440-4a6a-8a83-ddf88681f24e" containerName="extract-utilities" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.394044 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ccbc73-3440-4a6a-8a83-ddf88681f24e" containerName="extract-utilities" Oct 08 09:03:50 crc kubenswrapper[4958]: E1008 09:03:50.394064 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ccbc73-3440-4a6a-8a83-ddf88681f24e" containerName="registry-server" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.394076 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ccbc73-3440-4a6a-8a83-ddf88681f24e" containerName="registry-server" Oct 08 09:03:50 crc kubenswrapper[4958]: E1008 09:03:50.394112 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="37c235bc-99b8-41ad-a6e5-e735be206363" containerName="telemetry-openstack-openstack-cell1" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.394121 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c235bc-99b8-41ad-a6e5-e735be206363" containerName="telemetry-openstack-openstack-cell1" Oct 08 09:03:50 crc kubenswrapper[4958]: E1008 09:03:50.394140 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ccbc73-3440-4a6a-8a83-ddf88681f24e" containerName="extract-content" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.394148 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ccbc73-3440-4a6a-8a83-ddf88681f24e" containerName="extract-content" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.394406 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c235bc-99b8-41ad-a6e5-e735be206363" containerName="telemetry-openstack-openstack-cell1" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.394442 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ccbc73-3440-4a6a-8a83-ddf88681f24e" containerName="registry-server" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.395417 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.397599 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.398782 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.399030 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.399196 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.400534 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.408520 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-nzcg9"] Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.420383 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-nzcg9\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.420690 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-nzcg9\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " 
pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.421364 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-nzcg9\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.421452 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-nzcg9\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.421625 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swj28\" (UniqueName: \"kubernetes.io/projected/e671c885-ccd0-4167-ab0e-434aea504b10-kube-api-access-swj28\") pod \"neutron-sriov-openstack-openstack-cell1-nzcg9\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.522413 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-nzcg9\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.522505 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-nzcg9\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.522605 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-nzcg9\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.522637 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-nzcg9\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.522718 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swj28\" (UniqueName: \"kubernetes.io/projected/e671c885-ccd0-4167-ab0e-434aea504b10-kube-api-access-swj28\") pod \"neutron-sriov-openstack-openstack-cell1-nzcg9\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.528257 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-nzcg9\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " 
pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.529081 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-nzcg9\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.531231 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-nzcg9\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.539880 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-nzcg9\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.552207 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swj28\" (UniqueName: \"kubernetes.io/projected/e671c885-ccd0-4167-ab0e-434aea504b10-kube-api-access-swj28\") pod \"neutron-sriov-openstack-openstack-cell1-nzcg9\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:50 crc kubenswrapper[4958]: I1008 09:03:50.724636 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:03:51 crc kubenswrapper[4958]: I1008 09:03:51.325803 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-nzcg9"] Oct 08 09:03:52 crc kubenswrapper[4958]: I1008 09:03:52.228861 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" event={"ID":"e671c885-ccd0-4167-ab0e-434aea504b10","Type":"ContainerStarted","Data":"0e92e5c56d81e3fe0f902a7ebe35f290922cb0af347473323d730f473a6d7b32"} Oct 08 09:03:52 crc kubenswrapper[4958]: I1008 09:03:52.229212 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" event={"ID":"e671c885-ccd0-4167-ab0e-434aea504b10","Type":"ContainerStarted","Data":"ce833bf918afe3f7670bc2be4c00e9f88fd619dc362e434d596a54a77ba1ebae"} Oct 08 09:03:52 crc kubenswrapper[4958]: I1008 09:03:52.246298 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" podStartSLOduration=1.738171852 podStartE2EDuration="2.246281124s" podCreationTimestamp="2025-10-08 09:03:50 +0000 UTC" firstStartedPulling="2025-10-08 09:03:51.329215808 +0000 UTC m=+8974.458908409" lastFinishedPulling="2025-10-08 09:03:51.83732504 +0000 UTC m=+8974.967017681" observedRunningTime="2025-10-08 09:03:52.244227978 +0000 UTC m=+8975.373920579" watchObservedRunningTime="2025-10-08 09:03:52.246281124 +0000 UTC m=+8975.375973725" Oct 08 09:04:00 crc kubenswrapper[4958]: I1008 09:04:00.576404 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:04:00 crc kubenswrapper[4958]: E1008 09:04:00.576989 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:04:15 crc kubenswrapper[4958]: I1008 09:04:15.577383 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:04:15 crc kubenswrapper[4958]: E1008 09:04:15.578203 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:04:28 crc kubenswrapper[4958]: I1008 09:04:28.578633 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:04:28 crc kubenswrapper[4958]: E1008 09:04:28.579834 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:04:43 crc kubenswrapper[4958]: I1008 09:04:43.578608 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:04:43 crc kubenswrapper[4958]: E1008 09:04:43.580007 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:04:58 crc kubenswrapper[4958]: I1008 09:04:58.576890 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:04:58 crc kubenswrapper[4958]: E1008 09:04:58.577699 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:05:10 crc kubenswrapper[4958]: I1008 09:05:10.578134 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:05:10 crc kubenswrapper[4958]: E1008 09:05:10.579624 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:05:24 crc kubenswrapper[4958]: I1008 09:05:24.577329 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:05:24 crc kubenswrapper[4958]: E1008 09:05:24.578709 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:05:38 crc kubenswrapper[4958]: I1008 09:05:38.578200 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:05:38 crc kubenswrapper[4958]: E1008 09:05:38.579741 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:05:49 crc kubenswrapper[4958]: I1008 09:05:49.582386 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:05:49 crc kubenswrapper[4958]: E1008 09:05:49.583767 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:06:01 crc kubenswrapper[4958]: I1008 09:06:01.578112 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:06:01 crc kubenswrapper[4958]: E1008 09:06:01.578818 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:06:16 crc kubenswrapper[4958]: I1008 09:06:16.576790 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:06:16 crc kubenswrapper[4958]: E1008 09:06:16.577568 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:06:28 crc kubenswrapper[4958]: I1008 09:06:28.577466 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:06:28 crc kubenswrapper[4958]: E1008 09:06:28.580038 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:06:42 crc kubenswrapper[4958]: I1008 09:06:42.577324 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:06:42 crc kubenswrapper[4958]: E1008 09:06:42.578631 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:06:54 crc kubenswrapper[4958]: I1008 09:06:54.579589 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:06:54 crc kubenswrapper[4958]: E1008 09:06:54.582592 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:07:05 crc kubenswrapper[4958]: I1008 09:07:05.580501 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:07:05 crc kubenswrapper[4958]: E1008 09:07:05.582526 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:07:20 crc kubenswrapper[4958]: I1008 09:07:20.577658 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:07:20 crc kubenswrapper[4958]: E1008 09:07:20.579114 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:07:30 crc kubenswrapper[4958]: I1008 09:07:30.095842 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f5n79"] Oct 08 09:07:30 crc kubenswrapper[4958]: I1008 09:07:30.099375 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:30 crc kubenswrapper[4958]: I1008 09:07:30.122612 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5n79"] Oct 08 09:07:30 crc kubenswrapper[4958]: I1008 09:07:30.164442 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a067a2e4-fa20-4d23-b437-18c11709342c-utilities\") pod \"certified-operators-f5n79\" (UID: \"a067a2e4-fa20-4d23-b437-18c11709342c\") " pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:30 crc kubenswrapper[4958]: I1008 09:07:30.164505 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a067a2e4-fa20-4d23-b437-18c11709342c-catalog-content\") pod \"certified-operators-f5n79\" (UID: \"a067a2e4-fa20-4d23-b437-18c11709342c\") " pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:30 crc kubenswrapper[4958]: I1008 09:07:30.164541 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct89f\" (UniqueName: 
\"kubernetes.io/projected/a067a2e4-fa20-4d23-b437-18c11709342c-kube-api-access-ct89f\") pod \"certified-operators-f5n79\" (UID: \"a067a2e4-fa20-4d23-b437-18c11709342c\") " pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:30 crc kubenswrapper[4958]: I1008 09:07:30.267048 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a067a2e4-fa20-4d23-b437-18c11709342c-utilities\") pod \"certified-operators-f5n79\" (UID: \"a067a2e4-fa20-4d23-b437-18c11709342c\") " pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:30 crc kubenswrapper[4958]: I1008 09:07:30.267114 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a067a2e4-fa20-4d23-b437-18c11709342c-catalog-content\") pod \"certified-operators-f5n79\" (UID: \"a067a2e4-fa20-4d23-b437-18c11709342c\") " pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:30 crc kubenswrapper[4958]: I1008 09:07:30.267150 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct89f\" (UniqueName: \"kubernetes.io/projected/a067a2e4-fa20-4d23-b437-18c11709342c-kube-api-access-ct89f\") pod \"certified-operators-f5n79\" (UID: \"a067a2e4-fa20-4d23-b437-18c11709342c\") " pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:30 crc kubenswrapper[4958]: I1008 09:07:30.267833 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a067a2e4-fa20-4d23-b437-18c11709342c-catalog-content\") pod \"certified-operators-f5n79\" (UID: \"a067a2e4-fa20-4d23-b437-18c11709342c\") " pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:30 crc kubenswrapper[4958]: I1008 09:07:30.267870 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a067a2e4-fa20-4d23-b437-18c11709342c-utilities\") pod \"certified-operators-f5n79\" (UID: \"a067a2e4-fa20-4d23-b437-18c11709342c\") " pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:30 crc kubenswrapper[4958]: I1008 09:07:30.835343 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct89f\" (UniqueName: \"kubernetes.io/projected/a067a2e4-fa20-4d23-b437-18c11709342c-kube-api-access-ct89f\") pod \"certified-operators-f5n79\" (UID: \"a067a2e4-fa20-4d23-b437-18c11709342c\") " pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:31 crc kubenswrapper[4958]: I1008 09:07:31.024092 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:31 crc kubenswrapper[4958]: I1008 09:07:31.490037 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5n79"] Oct 08 09:07:31 crc kubenswrapper[4958]: I1008 09:07:31.988719 4958 generic.go:334] "Generic (PLEG): container finished" podID="a067a2e4-fa20-4d23-b437-18c11709342c" containerID="c1ef6d9317aa32e94b7469b1396507bce202e0f1e9c1be5feaa8e73b68e88e8e" exitCode=0 Oct 08 09:07:31 crc kubenswrapper[4958]: I1008 09:07:31.988803 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5n79" event={"ID":"a067a2e4-fa20-4d23-b437-18c11709342c","Type":"ContainerDied","Data":"c1ef6d9317aa32e94b7469b1396507bce202e0f1e9c1be5feaa8e73b68e88e8e"} Oct 08 09:07:31 crc kubenswrapper[4958]: I1008 09:07:31.988883 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5n79" event={"ID":"a067a2e4-fa20-4d23-b437-18c11709342c","Type":"ContainerStarted","Data":"2974640aa9739b0e1551ac3d704af9536f421e1f0b2846364b70b9eade9a19f2"} Oct 08 09:07:34 crc kubenswrapper[4958]: I1008 09:07:34.027374 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-f5n79" event={"ID":"a067a2e4-fa20-4d23-b437-18c11709342c","Type":"ContainerStarted","Data":"541cf951f82b45c81e8fbf1dea7d2fc0d643ebe37e713f0b8fd2018e9057b6d9"} Oct 08 09:07:35 crc kubenswrapper[4958]: I1008 09:07:35.040686 4958 generic.go:334] "Generic (PLEG): container finished" podID="a067a2e4-fa20-4d23-b437-18c11709342c" containerID="541cf951f82b45c81e8fbf1dea7d2fc0d643ebe37e713f0b8fd2018e9057b6d9" exitCode=0 Oct 08 09:07:35 crc kubenswrapper[4958]: I1008 09:07:35.040804 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5n79" event={"ID":"a067a2e4-fa20-4d23-b437-18c11709342c","Type":"ContainerDied","Data":"541cf951f82b45c81e8fbf1dea7d2fc0d643ebe37e713f0b8fd2018e9057b6d9"} Oct 08 09:07:35 crc kubenswrapper[4958]: I1008 09:07:35.578265 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:07:35 crc kubenswrapper[4958]: E1008 09:07:35.579395 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:07:36 crc kubenswrapper[4958]: I1008 09:07:36.056630 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5n79" event={"ID":"a067a2e4-fa20-4d23-b437-18c11709342c","Type":"ContainerStarted","Data":"a778336617c55d6f01a58d18845bf2c14802b4e79210aef15539207b4f2c7fdc"} Oct 08 09:07:36 crc kubenswrapper[4958]: I1008 09:07:36.090902 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f5n79" 
podStartSLOduration=2.479661046 podStartE2EDuration="6.090878361s" podCreationTimestamp="2025-10-08 09:07:30 +0000 UTC" firstStartedPulling="2025-10-08 09:07:31.991898846 +0000 UTC m=+9195.121591487" lastFinishedPulling="2025-10-08 09:07:35.603116161 +0000 UTC m=+9198.732808802" observedRunningTime="2025-10-08 09:07:36.085375032 +0000 UTC m=+9199.215067633" watchObservedRunningTime="2025-10-08 09:07:36.090878361 +0000 UTC m=+9199.220570972" Oct 08 09:07:41 crc kubenswrapper[4958]: I1008 09:07:41.024222 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:41 crc kubenswrapper[4958]: I1008 09:07:41.025419 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:41 crc kubenswrapper[4958]: I1008 09:07:41.089330 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:41 crc kubenswrapper[4958]: I1008 09:07:41.210412 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:41 crc kubenswrapper[4958]: I1008 09:07:41.336323 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f5n79"] Oct 08 09:07:43 crc kubenswrapper[4958]: I1008 09:07:43.182083 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f5n79" podUID="a067a2e4-fa20-4d23-b437-18c11709342c" containerName="registry-server" containerID="cri-o://a778336617c55d6f01a58d18845bf2c14802b4e79210aef15539207b4f2c7fdc" gracePeriod=2 Oct 08 09:07:43 crc kubenswrapper[4958]: I1008 09:07:43.741568 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:43 crc kubenswrapper[4958]: I1008 09:07:43.813339 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct89f\" (UniqueName: \"kubernetes.io/projected/a067a2e4-fa20-4d23-b437-18c11709342c-kube-api-access-ct89f\") pod \"a067a2e4-fa20-4d23-b437-18c11709342c\" (UID: \"a067a2e4-fa20-4d23-b437-18c11709342c\") " Oct 08 09:07:43 crc kubenswrapper[4958]: I1008 09:07:43.813646 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a067a2e4-fa20-4d23-b437-18c11709342c-utilities\") pod \"a067a2e4-fa20-4d23-b437-18c11709342c\" (UID: \"a067a2e4-fa20-4d23-b437-18c11709342c\") " Oct 08 09:07:43 crc kubenswrapper[4958]: I1008 09:07:43.813879 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a067a2e4-fa20-4d23-b437-18c11709342c-catalog-content\") pod \"a067a2e4-fa20-4d23-b437-18c11709342c\" (UID: \"a067a2e4-fa20-4d23-b437-18c11709342c\") " Oct 08 09:07:43 crc kubenswrapper[4958]: I1008 09:07:43.816087 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a067a2e4-fa20-4d23-b437-18c11709342c-utilities" (OuterVolumeSpecName: "utilities") pod "a067a2e4-fa20-4d23-b437-18c11709342c" (UID: "a067a2e4-fa20-4d23-b437-18c11709342c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:07:43 crc kubenswrapper[4958]: I1008 09:07:43.823293 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a067a2e4-fa20-4d23-b437-18c11709342c-kube-api-access-ct89f" (OuterVolumeSpecName: "kube-api-access-ct89f") pod "a067a2e4-fa20-4d23-b437-18c11709342c" (UID: "a067a2e4-fa20-4d23-b437-18c11709342c"). InnerVolumeSpecName "kube-api-access-ct89f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:07:43 crc kubenswrapper[4958]: I1008 09:07:43.900771 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a067a2e4-fa20-4d23-b437-18c11709342c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a067a2e4-fa20-4d23-b437-18c11709342c" (UID: "a067a2e4-fa20-4d23-b437-18c11709342c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:07:43 crc kubenswrapper[4958]: I1008 09:07:43.917062 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a067a2e4-fa20-4d23-b437-18c11709342c-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 09:07:43 crc kubenswrapper[4958]: I1008 09:07:43.917109 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a067a2e4-fa20-4d23-b437-18c11709342c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 09:07:43 crc kubenswrapper[4958]: I1008 09:07:43.917127 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct89f\" (UniqueName: \"kubernetes.io/projected/a067a2e4-fa20-4d23-b437-18c11709342c-kube-api-access-ct89f\") on node \"crc\" DevicePath \"\"" Oct 08 09:07:44 crc kubenswrapper[4958]: I1008 09:07:44.196772 4958 generic.go:334] "Generic (PLEG): container finished" podID="a067a2e4-fa20-4d23-b437-18c11709342c" containerID="a778336617c55d6f01a58d18845bf2c14802b4e79210aef15539207b4f2c7fdc" exitCode=0 Oct 08 09:07:44 crc kubenswrapper[4958]: I1008 09:07:44.196818 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5n79" event={"ID":"a067a2e4-fa20-4d23-b437-18c11709342c","Type":"ContainerDied","Data":"a778336617c55d6f01a58d18845bf2c14802b4e79210aef15539207b4f2c7fdc"} Oct 08 09:07:44 crc kubenswrapper[4958]: I1008 09:07:44.196860 4958 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-f5n79" event={"ID":"a067a2e4-fa20-4d23-b437-18c11709342c","Type":"ContainerDied","Data":"2974640aa9739b0e1551ac3d704af9536f421e1f0b2846364b70b9eade9a19f2"} Oct 08 09:07:44 crc kubenswrapper[4958]: I1008 09:07:44.196881 4958 scope.go:117] "RemoveContainer" containerID="a778336617c55d6f01a58d18845bf2c14802b4e79210aef15539207b4f2c7fdc" Oct 08 09:07:44 crc kubenswrapper[4958]: I1008 09:07:44.196945 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5n79" Oct 08 09:07:44 crc kubenswrapper[4958]: I1008 09:07:44.246691 4958 scope.go:117] "RemoveContainer" containerID="541cf951f82b45c81e8fbf1dea7d2fc0d643ebe37e713f0b8fd2018e9057b6d9" Oct 08 09:07:44 crc kubenswrapper[4958]: I1008 09:07:44.262103 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f5n79"] Oct 08 09:07:44 crc kubenswrapper[4958]: I1008 09:07:44.276445 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f5n79"] Oct 08 09:07:44 crc kubenswrapper[4958]: I1008 09:07:44.281154 4958 scope.go:117] "RemoveContainer" containerID="c1ef6d9317aa32e94b7469b1396507bce202e0f1e9c1be5feaa8e73b68e88e8e" Oct 08 09:07:44 crc kubenswrapper[4958]: I1008 09:07:44.372712 4958 scope.go:117] "RemoveContainer" containerID="a778336617c55d6f01a58d18845bf2c14802b4e79210aef15539207b4f2c7fdc" Oct 08 09:07:44 crc kubenswrapper[4958]: E1008 09:07:44.373273 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a778336617c55d6f01a58d18845bf2c14802b4e79210aef15539207b4f2c7fdc\": container with ID starting with a778336617c55d6f01a58d18845bf2c14802b4e79210aef15539207b4f2c7fdc not found: ID does not exist" containerID="a778336617c55d6f01a58d18845bf2c14802b4e79210aef15539207b4f2c7fdc" Oct 08 09:07:44 crc kubenswrapper[4958]: I1008 
09:07:44.373365 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a778336617c55d6f01a58d18845bf2c14802b4e79210aef15539207b4f2c7fdc"} err="failed to get container status \"a778336617c55d6f01a58d18845bf2c14802b4e79210aef15539207b4f2c7fdc\": rpc error: code = NotFound desc = could not find container \"a778336617c55d6f01a58d18845bf2c14802b4e79210aef15539207b4f2c7fdc\": container with ID starting with a778336617c55d6f01a58d18845bf2c14802b4e79210aef15539207b4f2c7fdc not found: ID does not exist" Oct 08 09:07:44 crc kubenswrapper[4958]: I1008 09:07:44.373421 4958 scope.go:117] "RemoveContainer" containerID="541cf951f82b45c81e8fbf1dea7d2fc0d643ebe37e713f0b8fd2018e9057b6d9" Oct 08 09:07:44 crc kubenswrapper[4958]: E1008 09:07:44.374086 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"541cf951f82b45c81e8fbf1dea7d2fc0d643ebe37e713f0b8fd2018e9057b6d9\": container with ID starting with 541cf951f82b45c81e8fbf1dea7d2fc0d643ebe37e713f0b8fd2018e9057b6d9 not found: ID does not exist" containerID="541cf951f82b45c81e8fbf1dea7d2fc0d643ebe37e713f0b8fd2018e9057b6d9" Oct 08 09:07:44 crc kubenswrapper[4958]: I1008 09:07:44.374159 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"541cf951f82b45c81e8fbf1dea7d2fc0d643ebe37e713f0b8fd2018e9057b6d9"} err="failed to get container status \"541cf951f82b45c81e8fbf1dea7d2fc0d643ebe37e713f0b8fd2018e9057b6d9\": rpc error: code = NotFound desc = could not find container \"541cf951f82b45c81e8fbf1dea7d2fc0d643ebe37e713f0b8fd2018e9057b6d9\": container with ID starting with 541cf951f82b45c81e8fbf1dea7d2fc0d643ebe37e713f0b8fd2018e9057b6d9 not found: ID does not exist" Oct 08 09:07:44 crc kubenswrapper[4958]: I1008 09:07:44.374198 4958 scope.go:117] "RemoveContainer" containerID="c1ef6d9317aa32e94b7469b1396507bce202e0f1e9c1be5feaa8e73b68e88e8e" Oct 08 09:07:44 crc 
kubenswrapper[4958]: E1008 09:07:44.374576 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ef6d9317aa32e94b7469b1396507bce202e0f1e9c1be5feaa8e73b68e88e8e\": container with ID starting with c1ef6d9317aa32e94b7469b1396507bce202e0f1e9c1be5feaa8e73b68e88e8e not found: ID does not exist" containerID="c1ef6d9317aa32e94b7469b1396507bce202e0f1e9c1be5feaa8e73b68e88e8e" Oct 08 09:07:44 crc kubenswrapper[4958]: I1008 09:07:44.374647 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ef6d9317aa32e94b7469b1396507bce202e0f1e9c1be5feaa8e73b68e88e8e"} err="failed to get container status \"c1ef6d9317aa32e94b7469b1396507bce202e0f1e9c1be5feaa8e73b68e88e8e\": rpc error: code = NotFound desc = could not find container \"c1ef6d9317aa32e94b7469b1396507bce202e0f1e9c1be5feaa8e73b68e88e8e\": container with ID starting with c1ef6d9317aa32e94b7469b1396507bce202e0f1e9c1be5feaa8e73b68e88e8e not found: ID does not exist" Oct 08 09:07:45 crc kubenswrapper[4958]: I1008 09:07:45.598634 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a067a2e4-fa20-4d23-b437-18c11709342c" path="/var/lib/kubelet/pods/a067a2e4-fa20-4d23-b437-18c11709342c/volumes" Oct 08 09:07:49 crc kubenswrapper[4958]: I1008 09:07:49.577368 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:07:50 crc kubenswrapper[4958]: I1008 09:07:50.288201 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"7c26b4d67a6b0ba10f3f0000e8595e7097309485ee9d91bb9a9c7136fe40513d"} Oct 08 09:08:01 crc kubenswrapper[4958]: I1008 09:08:01.993581 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jl7tq"] Oct 08 09:08:01 crc 
kubenswrapper[4958]: E1008 09:08:01.995264 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a067a2e4-fa20-4d23-b437-18c11709342c" containerName="registry-server" Oct 08 09:08:01 crc kubenswrapper[4958]: I1008 09:08:01.995300 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a067a2e4-fa20-4d23-b437-18c11709342c" containerName="registry-server" Oct 08 09:08:01 crc kubenswrapper[4958]: E1008 09:08:01.995337 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a067a2e4-fa20-4d23-b437-18c11709342c" containerName="extract-content" Oct 08 09:08:01 crc kubenswrapper[4958]: I1008 09:08:01.995355 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a067a2e4-fa20-4d23-b437-18c11709342c" containerName="extract-content" Oct 08 09:08:01 crc kubenswrapper[4958]: E1008 09:08:01.995392 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a067a2e4-fa20-4d23-b437-18c11709342c" containerName="extract-utilities" Oct 08 09:08:01 crc kubenswrapper[4958]: I1008 09:08:01.995409 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a067a2e4-fa20-4d23-b437-18c11709342c" containerName="extract-utilities" Oct 08 09:08:01 crc kubenswrapper[4958]: I1008 09:08:01.995993 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a067a2e4-fa20-4d23-b437-18c11709342c" containerName="registry-server" Oct 08 09:08:01 crc kubenswrapper[4958]: I1008 09:08:01.999548 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:02 crc kubenswrapper[4958]: I1008 09:08:02.019668 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jl7tq"] Oct 08 09:08:02 crc kubenswrapper[4958]: I1008 09:08:02.090286 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbdbaab7-5edf-4262-97be-669c38a3af3a-utilities\") pod \"redhat-operators-jl7tq\" (UID: \"dbdbaab7-5edf-4262-97be-669c38a3af3a\") " pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:02 crc kubenswrapper[4958]: I1008 09:08:02.090371 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgv2q\" (UniqueName: \"kubernetes.io/projected/dbdbaab7-5edf-4262-97be-669c38a3af3a-kube-api-access-qgv2q\") pod \"redhat-operators-jl7tq\" (UID: \"dbdbaab7-5edf-4262-97be-669c38a3af3a\") " pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:02 crc kubenswrapper[4958]: I1008 09:08:02.090610 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbdbaab7-5edf-4262-97be-669c38a3af3a-catalog-content\") pod \"redhat-operators-jl7tq\" (UID: \"dbdbaab7-5edf-4262-97be-669c38a3af3a\") " pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:02 crc kubenswrapper[4958]: I1008 09:08:02.191990 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbdbaab7-5edf-4262-97be-669c38a3af3a-catalog-content\") pod \"redhat-operators-jl7tq\" (UID: \"dbdbaab7-5edf-4262-97be-669c38a3af3a\") " pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:02 crc kubenswrapper[4958]: I1008 09:08:02.192272 4958 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbdbaab7-5edf-4262-97be-669c38a3af3a-utilities\") pod \"redhat-operators-jl7tq\" (UID: \"dbdbaab7-5edf-4262-97be-669c38a3af3a\") " pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:02 crc kubenswrapper[4958]: I1008 09:08:02.192370 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgv2q\" (UniqueName: \"kubernetes.io/projected/dbdbaab7-5edf-4262-97be-669c38a3af3a-kube-api-access-qgv2q\") pod \"redhat-operators-jl7tq\" (UID: \"dbdbaab7-5edf-4262-97be-669c38a3af3a\") " pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:02 crc kubenswrapper[4958]: I1008 09:08:02.192517 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbdbaab7-5edf-4262-97be-669c38a3af3a-catalog-content\") pod \"redhat-operators-jl7tq\" (UID: \"dbdbaab7-5edf-4262-97be-669c38a3af3a\") " pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:02 crc kubenswrapper[4958]: I1008 09:08:02.192865 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbdbaab7-5edf-4262-97be-669c38a3af3a-utilities\") pod \"redhat-operators-jl7tq\" (UID: \"dbdbaab7-5edf-4262-97be-669c38a3af3a\") " pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:02 crc kubenswrapper[4958]: I1008 09:08:02.217872 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgv2q\" (UniqueName: \"kubernetes.io/projected/dbdbaab7-5edf-4262-97be-669c38a3af3a-kube-api-access-qgv2q\") pod \"redhat-operators-jl7tq\" (UID: \"dbdbaab7-5edf-4262-97be-669c38a3af3a\") " pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:02 crc kubenswrapper[4958]: I1008 09:08:02.356614 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:02 crc kubenswrapper[4958]: I1008 09:08:02.829638 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jl7tq"] Oct 08 09:08:03 crc kubenswrapper[4958]: I1008 09:08:03.450468 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl7tq" event={"ID":"dbdbaab7-5edf-4262-97be-669c38a3af3a","Type":"ContainerStarted","Data":"07260f3fddd098c2d24c2a09c60c8ab35ce1af1d029001045fc171e3e88164db"} Oct 08 09:08:04 crc kubenswrapper[4958]: I1008 09:08:04.468839 4958 generic.go:334] "Generic (PLEG): container finished" podID="dbdbaab7-5edf-4262-97be-669c38a3af3a" containerID="798da4a39fe5eef1f60d1eb16bbe486a26fbe30172901d1eaa95d7dc838fb672" exitCode=0 Oct 08 09:08:04 crc kubenswrapper[4958]: I1008 09:08:04.468921 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl7tq" event={"ID":"dbdbaab7-5edf-4262-97be-669c38a3af3a","Type":"ContainerDied","Data":"798da4a39fe5eef1f60d1eb16bbe486a26fbe30172901d1eaa95d7dc838fb672"} Oct 08 09:08:06 crc kubenswrapper[4958]: I1008 09:08:06.490854 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl7tq" event={"ID":"dbdbaab7-5edf-4262-97be-669c38a3af3a","Type":"ContainerStarted","Data":"a4df44e3bb24ca1a8caea49a843d0acf1335e4357c2c5a0bc2d24677d1a8ff28"} Oct 08 09:08:10 crc kubenswrapper[4958]: I1008 09:08:10.557526 4958 generic.go:334] "Generic (PLEG): container finished" podID="dbdbaab7-5edf-4262-97be-669c38a3af3a" containerID="a4df44e3bb24ca1a8caea49a843d0acf1335e4357c2c5a0bc2d24677d1a8ff28" exitCode=0 Oct 08 09:08:10 crc kubenswrapper[4958]: I1008 09:08:10.558131 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl7tq" 
event={"ID":"dbdbaab7-5edf-4262-97be-669c38a3af3a","Type":"ContainerDied","Data":"a4df44e3bb24ca1a8caea49a843d0acf1335e4357c2c5a0bc2d24677d1a8ff28"} Oct 08 09:08:10 crc kubenswrapper[4958]: I1008 09:08:10.562688 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 09:08:11 crc kubenswrapper[4958]: I1008 09:08:11.594816 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl7tq" event={"ID":"dbdbaab7-5edf-4262-97be-669c38a3af3a","Type":"ContainerStarted","Data":"c829ee826e893c03ab9d4b424937e8cd84dc40a666d7773d8f5f58aedd079534"} Oct 08 09:08:11 crc kubenswrapper[4958]: I1008 09:08:11.615353 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jl7tq" podStartSLOduration=4.072272712 podStartE2EDuration="10.615332659s" podCreationTimestamp="2025-10-08 09:08:01 +0000 UTC" firstStartedPulling="2025-10-08 09:08:04.47413201 +0000 UTC m=+9227.603824661" lastFinishedPulling="2025-10-08 09:08:11.017191997 +0000 UTC m=+9234.146884608" observedRunningTime="2025-10-08 09:08:11.613061387 +0000 UTC m=+9234.742753988" watchObservedRunningTime="2025-10-08 09:08:11.615332659 +0000 UTC m=+9234.745025260" Oct 08 09:08:12 crc kubenswrapper[4958]: I1008 09:08:12.357115 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:12 crc kubenswrapper[4958]: I1008 09:08:12.357162 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:13 crc kubenswrapper[4958]: I1008 09:08:13.491275 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jl7tq" podUID="dbdbaab7-5edf-4262-97be-669c38a3af3a" containerName="registry-server" probeResult="failure" output=< Oct 08 09:08:13 crc kubenswrapper[4958]: timeout: failed to connect service 
":50051" within 1s Oct 08 09:08:13 crc kubenswrapper[4958]: > Oct 08 09:08:22 crc kubenswrapper[4958]: I1008 09:08:22.411381 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:22 crc kubenswrapper[4958]: I1008 09:08:22.469396 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:22 crc kubenswrapper[4958]: I1008 09:08:22.646091 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jl7tq"] Oct 08 09:08:23 crc kubenswrapper[4958]: I1008 09:08:23.773448 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jl7tq" podUID="dbdbaab7-5edf-4262-97be-669c38a3af3a" containerName="registry-server" containerID="cri-o://c829ee826e893c03ab9d4b424937e8cd84dc40a666d7773d8f5f58aedd079534" gracePeriod=2 Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.425277 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.522991 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbdbaab7-5edf-4262-97be-669c38a3af3a-utilities\") pod \"dbdbaab7-5edf-4262-97be-669c38a3af3a\" (UID: \"dbdbaab7-5edf-4262-97be-669c38a3af3a\") " Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.523154 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbdbaab7-5edf-4262-97be-669c38a3af3a-catalog-content\") pod \"dbdbaab7-5edf-4262-97be-669c38a3af3a\" (UID: \"dbdbaab7-5edf-4262-97be-669c38a3af3a\") " Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.523272 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgv2q\" (UniqueName: \"kubernetes.io/projected/dbdbaab7-5edf-4262-97be-669c38a3af3a-kube-api-access-qgv2q\") pod \"dbdbaab7-5edf-4262-97be-669c38a3af3a\" (UID: \"dbdbaab7-5edf-4262-97be-669c38a3af3a\") " Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.524035 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbdbaab7-5edf-4262-97be-669c38a3af3a-utilities" (OuterVolumeSpecName: "utilities") pod "dbdbaab7-5edf-4262-97be-669c38a3af3a" (UID: "dbdbaab7-5edf-4262-97be-669c38a3af3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.529176 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbdbaab7-5edf-4262-97be-669c38a3af3a-kube-api-access-qgv2q" (OuterVolumeSpecName: "kube-api-access-qgv2q") pod "dbdbaab7-5edf-4262-97be-669c38a3af3a" (UID: "dbdbaab7-5edf-4262-97be-669c38a3af3a"). InnerVolumeSpecName "kube-api-access-qgv2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.614541 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbdbaab7-5edf-4262-97be-669c38a3af3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbdbaab7-5edf-4262-97be-669c38a3af3a" (UID: "dbdbaab7-5edf-4262-97be-669c38a3af3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.626035 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbdbaab7-5edf-4262-97be-669c38a3af3a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.626074 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgv2q\" (UniqueName: \"kubernetes.io/projected/dbdbaab7-5edf-4262-97be-669c38a3af3a-kube-api-access-qgv2q\") on node \"crc\" DevicePath \"\"" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.626086 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbdbaab7-5edf-4262-97be-669c38a3af3a-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.788442 4958 generic.go:334] "Generic (PLEG): container finished" podID="dbdbaab7-5edf-4262-97be-669c38a3af3a" containerID="c829ee826e893c03ab9d4b424937e8cd84dc40a666d7773d8f5f58aedd079534" exitCode=0 Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.788479 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl7tq" event={"ID":"dbdbaab7-5edf-4262-97be-669c38a3af3a","Type":"ContainerDied","Data":"c829ee826e893c03ab9d4b424937e8cd84dc40a666d7773d8f5f58aedd079534"} Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.788512 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-jl7tq" event={"ID":"dbdbaab7-5edf-4262-97be-669c38a3af3a","Type":"ContainerDied","Data":"07260f3fddd098c2d24c2a09c60c8ab35ce1af1d029001045fc171e3e88164db"} Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.788522 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jl7tq" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.788531 4958 scope.go:117] "RemoveContainer" containerID="c829ee826e893c03ab9d4b424937e8cd84dc40a666d7773d8f5f58aedd079534" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.832140 4958 scope.go:117] "RemoveContainer" containerID="a4df44e3bb24ca1a8caea49a843d0acf1335e4357c2c5a0bc2d24677d1a8ff28" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.855586 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jl7tq"] Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.865617 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jl7tq"] Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.865745 4958 scope.go:117] "RemoveContainer" containerID="798da4a39fe5eef1f60d1eb16bbe486a26fbe30172901d1eaa95d7dc838fb672" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.921504 4958 scope.go:117] "RemoveContainer" containerID="c829ee826e893c03ab9d4b424937e8cd84dc40a666d7773d8f5f58aedd079534" Oct 08 09:08:24 crc kubenswrapper[4958]: E1008 09:08:24.922133 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c829ee826e893c03ab9d4b424937e8cd84dc40a666d7773d8f5f58aedd079534\": container with ID starting with c829ee826e893c03ab9d4b424937e8cd84dc40a666d7773d8f5f58aedd079534 not found: ID does not exist" containerID="c829ee826e893c03ab9d4b424937e8cd84dc40a666d7773d8f5f58aedd079534" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.922181 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c829ee826e893c03ab9d4b424937e8cd84dc40a666d7773d8f5f58aedd079534"} err="failed to get container status \"c829ee826e893c03ab9d4b424937e8cd84dc40a666d7773d8f5f58aedd079534\": rpc error: code = NotFound desc = could not find container \"c829ee826e893c03ab9d4b424937e8cd84dc40a666d7773d8f5f58aedd079534\": container with ID starting with c829ee826e893c03ab9d4b424937e8cd84dc40a666d7773d8f5f58aedd079534 not found: ID does not exist" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.922209 4958 scope.go:117] "RemoveContainer" containerID="a4df44e3bb24ca1a8caea49a843d0acf1335e4357c2c5a0bc2d24677d1a8ff28" Oct 08 09:08:24 crc kubenswrapper[4958]: E1008 09:08:24.922627 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4df44e3bb24ca1a8caea49a843d0acf1335e4357c2c5a0bc2d24677d1a8ff28\": container with ID starting with a4df44e3bb24ca1a8caea49a843d0acf1335e4357c2c5a0bc2d24677d1a8ff28 not found: ID does not exist" containerID="a4df44e3bb24ca1a8caea49a843d0acf1335e4357c2c5a0bc2d24677d1a8ff28" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.922683 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4df44e3bb24ca1a8caea49a843d0acf1335e4357c2c5a0bc2d24677d1a8ff28"} err="failed to get container status \"a4df44e3bb24ca1a8caea49a843d0acf1335e4357c2c5a0bc2d24677d1a8ff28\": rpc error: code = NotFound desc = could not find container \"a4df44e3bb24ca1a8caea49a843d0acf1335e4357c2c5a0bc2d24677d1a8ff28\": container with ID starting with a4df44e3bb24ca1a8caea49a843d0acf1335e4357c2c5a0bc2d24677d1a8ff28 not found: ID does not exist" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.922723 4958 scope.go:117] "RemoveContainer" containerID="798da4a39fe5eef1f60d1eb16bbe486a26fbe30172901d1eaa95d7dc838fb672" Oct 08 09:08:24 crc kubenswrapper[4958]: E1008 
09:08:24.923166 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"798da4a39fe5eef1f60d1eb16bbe486a26fbe30172901d1eaa95d7dc838fb672\": container with ID starting with 798da4a39fe5eef1f60d1eb16bbe486a26fbe30172901d1eaa95d7dc838fb672 not found: ID does not exist" containerID="798da4a39fe5eef1f60d1eb16bbe486a26fbe30172901d1eaa95d7dc838fb672" Oct 08 09:08:24 crc kubenswrapper[4958]: I1008 09:08:24.923193 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"798da4a39fe5eef1f60d1eb16bbe486a26fbe30172901d1eaa95d7dc838fb672"} err="failed to get container status \"798da4a39fe5eef1f60d1eb16bbe486a26fbe30172901d1eaa95d7dc838fb672\": rpc error: code = NotFound desc = could not find container \"798da4a39fe5eef1f60d1eb16bbe486a26fbe30172901d1eaa95d7dc838fb672\": container with ID starting with 798da4a39fe5eef1f60d1eb16bbe486a26fbe30172901d1eaa95d7dc838fb672 not found: ID does not exist" Oct 08 09:08:25 crc kubenswrapper[4958]: I1008 09:08:25.602931 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbdbaab7-5edf-4262-97be-669c38a3af3a" path="/var/lib/kubelet/pods/dbdbaab7-5edf-4262-97be-669c38a3af3a/volumes" Oct 08 09:08:45 crc kubenswrapper[4958]: I1008 09:08:45.859862 4958 trace.go:236] Trace[1157790863]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-1" (08-Oct-2025 09:08:44.845) (total time: 1014ms): Oct 08 09:08:45 crc kubenswrapper[4958]: Trace[1157790863]: [1.014031474s] [1.014031474s] END Oct 08 09:09:28 crc kubenswrapper[4958]: I1008 09:09:28.705050 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jhj7w"] Oct 08 09:09:28 crc kubenswrapper[4958]: E1008 09:09:28.706216 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbdbaab7-5edf-4262-97be-669c38a3af3a" containerName="extract-content" Oct 08 09:09:28 crc 
kubenswrapper[4958]: I1008 09:09:28.706231 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbdbaab7-5edf-4262-97be-669c38a3af3a" containerName="extract-content" Oct 08 09:09:28 crc kubenswrapper[4958]: E1008 09:09:28.706255 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbdbaab7-5edf-4262-97be-669c38a3af3a" containerName="extract-utilities" Oct 08 09:09:28 crc kubenswrapper[4958]: I1008 09:09:28.706264 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbdbaab7-5edf-4262-97be-669c38a3af3a" containerName="extract-utilities" Oct 08 09:09:28 crc kubenswrapper[4958]: E1008 09:09:28.706276 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbdbaab7-5edf-4262-97be-669c38a3af3a" containerName="registry-server" Oct 08 09:09:28 crc kubenswrapper[4958]: I1008 09:09:28.706283 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbdbaab7-5edf-4262-97be-669c38a3af3a" containerName="registry-server" Oct 08 09:09:28 crc kubenswrapper[4958]: I1008 09:09:28.706526 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbdbaab7-5edf-4262-97be-669c38a3af3a" containerName="registry-server" Oct 08 09:09:28 crc kubenswrapper[4958]: I1008 09:09:28.708101 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:28 crc kubenswrapper[4958]: I1008 09:09:28.722309 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jhj7w"] Oct 08 09:09:28 crc kubenswrapper[4958]: I1008 09:09:28.772719 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss5fh\" (UniqueName: \"kubernetes.io/projected/914caf43-5fa9-4384-970b-78abeb92dd74-kube-api-access-ss5fh\") pod \"redhat-marketplace-jhj7w\" (UID: \"914caf43-5fa9-4384-970b-78abeb92dd74\") " pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:28 crc kubenswrapper[4958]: I1008 09:09:28.772813 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914caf43-5fa9-4384-970b-78abeb92dd74-utilities\") pod \"redhat-marketplace-jhj7w\" (UID: \"914caf43-5fa9-4384-970b-78abeb92dd74\") " pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:28 crc kubenswrapper[4958]: I1008 09:09:28.772884 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914caf43-5fa9-4384-970b-78abeb92dd74-catalog-content\") pod \"redhat-marketplace-jhj7w\" (UID: \"914caf43-5fa9-4384-970b-78abeb92dd74\") " pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:28 crc kubenswrapper[4958]: I1008 09:09:28.874658 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss5fh\" (UniqueName: \"kubernetes.io/projected/914caf43-5fa9-4384-970b-78abeb92dd74-kube-api-access-ss5fh\") pod \"redhat-marketplace-jhj7w\" (UID: \"914caf43-5fa9-4384-970b-78abeb92dd74\") " pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:28 crc kubenswrapper[4958]: I1008 09:09:28.874768 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914caf43-5fa9-4384-970b-78abeb92dd74-utilities\") pod \"redhat-marketplace-jhj7w\" (UID: \"914caf43-5fa9-4384-970b-78abeb92dd74\") " pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:28 crc kubenswrapper[4958]: I1008 09:09:28.874853 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914caf43-5fa9-4384-970b-78abeb92dd74-catalog-content\") pod \"redhat-marketplace-jhj7w\" (UID: \"914caf43-5fa9-4384-970b-78abeb92dd74\") " pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:28 crc kubenswrapper[4958]: I1008 09:09:28.875456 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914caf43-5fa9-4384-970b-78abeb92dd74-catalog-content\") pod \"redhat-marketplace-jhj7w\" (UID: \"914caf43-5fa9-4384-970b-78abeb92dd74\") " pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:28 crc kubenswrapper[4958]: I1008 09:09:28.875801 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914caf43-5fa9-4384-970b-78abeb92dd74-utilities\") pod \"redhat-marketplace-jhj7w\" (UID: \"914caf43-5fa9-4384-970b-78abeb92dd74\") " pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:28 crc kubenswrapper[4958]: I1008 09:09:28.905075 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss5fh\" (UniqueName: \"kubernetes.io/projected/914caf43-5fa9-4384-970b-78abeb92dd74-kube-api-access-ss5fh\") pod \"redhat-marketplace-jhj7w\" (UID: \"914caf43-5fa9-4384-970b-78abeb92dd74\") " pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:29 crc kubenswrapper[4958]: I1008 09:09:29.047107 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:29 crc kubenswrapper[4958]: I1008 09:09:29.630844 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jhj7w"] Oct 08 09:09:30 crc kubenswrapper[4958]: I1008 09:09:30.657798 4958 generic.go:334] "Generic (PLEG): container finished" podID="914caf43-5fa9-4384-970b-78abeb92dd74" containerID="c8074ef648cdf436a25c785e4e6def3eeb3a73cd2f31b22e79ed64486868d499" exitCode=0 Oct 08 09:09:30 crc kubenswrapper[4958]: I1008 09:09:30.657887 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jhj7w" event={"ID":"914caf43-5fa9-4384-970b-78abeb92dd74","Type":"ContainerDied","Data":"c8074ef648cdf436a25c785e4e6def3eeb3a73cd2f31b22e79ed64486868d499"} Oct 08 09:09:30 crc kubenswrapper[4958]: I1008 09:09:30.658483 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jhj7w" event={"ID":"914caf43-5fa9-4384-970b-78abeb92dd74","Type":"ContainerStarted","Data":"95b5065172cbd8a5efeca5d0043ca82c20e1291aec3f2cf8fb08b5b44655b4aa"} Oct 08 09:09:32 crc kubenswrapper[4958]: I1008 09:09:32.685974 4958 generic.go:334] "Generic (PLEG): container finished" podID="914caf43-5fa9-4384-970b-78abeb92dd74" containerID="89f71759edba0a0b4c8106cbcae6d736aa4471cc18aa02fafcc7ad9591e83700" exitCode=0 Oct 08 09:09:32 crc kubenswrapper[4958]: I1008 09:09:32.686082 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jhj7w" event={"ID":"914caf43-5fa9-4384-970b-78abeb92dd74","Type":"ContainerDied","Data":"89f71759edba0a0b4c8106cbcae6d736aa4471cc18aa02fafcc7ad9591e83700"} Oct 08 09:09:33 crc kubenswrapper[4958]: I1008 09:09:33.703644 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jhj7w" 
event={"ID":"914caf43-5fa9-4384-970b-78abeb92dd74","Type":"ContainerStarted","Data":"b7fefd46583db0d9ee7bf84db197dcb76e2cd4840a245fc926b7996e91b45bb0"} Oct 08 09:09:33 crc kubenswrapper[4958]: I1008 09:09:33.728618 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jhj7w" podStartSLOduration=3.182526947 podStartE2EDuration="5.728584203s" podCreationTimestamp="2025-10-08 09:09:28 +0000 UTC" firstStartedPulling="2025-10-08 09:09:30.660851928 +0000 UTC m=+9313.790544549" lastFinishedPulling="2025-10-08 09:09:33.206909194 +0000 UTC m=+9316.336601805" observedRunningTime="2025-10-08 09:09:33.726017344 +0000 UTC m=+9316.855709985" watchObservedRunningTime="2025-10-08 09:09:33.728584203 +0000 UTC m=+9316.858276804" Oct 08 09:09:39 crc kubenswrapper[4958]: I1008 09:09:39.047830 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:39 crc kubenswrapper[4958]: I1008 09:09:39.048306 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:39 crc kubenswrapper[4958]: I1008 09:09:39.110146 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:39 crc kubenswrapper[4958]: I1008 09:09:39.857980 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:39 crc kubenswrapper[4958]: I1008 09:09:39.914376 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jhj7w"] Oct 08 09:09:41 crc kubenswrapper[4958]: I1008 09:09:41.786161 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jhj7w" podUID="914caf43-5fa9-4384-970b-78abeb92dd74" containerName="registry-server" 
containerID="cri-o://b7fefd46583db0d9ee7bf84db197dcb76e2cd4840a245fc926b7996e91b45bb0" gracePeriod=2 Oct 08 09:09:42 crc kubenswrapper[4958]: I1008 09:09:42.798341 4958 generic.go:334] "Generic (PLEG): container finished" podID="914caf43-5fa9-4384-970b-78abeb92dd74" containerID="b7fefd46583db0d9ee7bf84db197dcb76e2cd4840a245fc926b7996e91b45bb0" exitCode=0 Oct 08 09:09:42 crc kubenswrapper[4958]: I1008 09:09:42.798423 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jhj7w" event={"ID":"914caf43-5fa9-4384-970b-78abeb92dd74","Type":"ContainerDied","Data":"b7fefd46583db0d9ee7bf84db197dcb76e2cd4840a245fc926b7996e91b45bb0"} Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 09:09:43.020356 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 09:09:43.145281 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914caf43-5fa9-4384-970b-78abeb92dd74-catalog-content\") pod \"914caf43-5fa9-4384-970b-78abeb92dd74\" (UID: \"914caf43-5fa9-4384-970b-78abeb92dd74\") " Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 09:09:43.145385 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914caf43-5fa9-4384-970b-78abeb92dd74-utilities\") pod \"914caf43-5fa9-4384-970b-78abeb92dd74\" (UID: \"914caf43-5fa9-4384-970b-78abeb92dd74\") " Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 09:09:43.145626 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss5fh\" (UniqueName: \"kubernetes.io/projected/914caf43-5fa9-4384-970b-78abeb92dd74-kube-api-access-ss5fh\") pod \"914caf43-5fa9-4384-970b-78abeb92dd74\" (UID: \"914caf43-5fa9-4384-970b-78abeb92dd74\") " Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 
09:09:43.147456 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914caf43-5fa9-4384-970b-78abeb92dd74-utilities" (OuterVolumeSpecName: "utilities") pod "914caf43-5fa9-4384-970b-78abeb92dd74" (UID: "914caf43-5fa9-4384-970b-78abeb92dd74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 09:09:43.154905 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914caf43-5fa9-4384-970b-78abeb92dd74-kube-api-access-ss5fh" (OuterVolumeSpecName: "kube-api-access-ss5fh") pod "914caf43-5fa9-4384-970b-78abeb92dd74" (UID: "914caf43-5fa9-4384-970b-78abeb92dd74"). InnerVolumeSpecName "kube-api-access-ss5fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 09:09:43.158929 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914caf43-5fa9-4384-970b-78abeb92dd74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "914caf43-5fa9-4384-970b-78abeb92dd74" (UID: "914caf43-5fa9-4384-970b-78abeb92dd74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 09:09:43.248165 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss5fh\" (UniqueName: \"kubernetes.io/projected/914caf43-5fa9-4384-970b-78abeb92dd74-kube-api-access-ss5fh\") on node \"crc\" DevicePath \"\"" Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 09:09:43.248208 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914caf43-5fa9-4384-970b-78abeb92dd74-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 09:09:43.248227 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914caf43-5fa9-4384-970b-78abeb92dd74-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 09:09:43.820172 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jhj7w" event={"ID":"914caf43-5fa9-4384-970b-78abeb92dd74","Type":"ContainerDied","Data":"95b5065172cbd8a5efeca5d0043ca82c20e1291aec3f2cf8fb08b5b44655b4aa"} Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 09:09:43.820560 4958 scope.go:117] "RemoveContainer" containerID="b7fefd46583db0d9ee7bf84db197dcb76e2cd4840a245fc926b7996e91b45bb0" Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 09:09:43.820418 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jhj7w" Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 09:09:43.868570 4958 scope.go:117] "RemoveContainer" containerID="89f71759edba0a0b4c8106cbcae6d736aa4471cc18aa02fafcc7ad9591e83700" Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 09:09:43.875105 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jhj7w"] Oct 08 09:09:43 crc kubenswrapper[4958]: I1008 09:09:43.890200 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jhj7w"] Oct 08 09:09:44 crc kubenswrapper[4958]: I1008 09:09:44.059000 4958 scope.go:117] "RemoveContainer" containerID="c8074ef648cdf436a25c785e4e6def3eeb3a73cd2f31b22e79ed64486868d499" Oct 08 09:09:45 crc kubenswrapper[4958]: I1008 09:09:45.595915 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="914caf43-5fa9-4384-970b-78abeb92dd74" path="/var/lib/kubelet/pods/914caf43-5fa9-4384-970b-78abeb92dd74/volumes" Oct 08 09:10:06 crc kubenswrapper[4958]: I1008 09:10:06.844712 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:10:06 crc kubenswrapper[4958]: I1008 09:10:06.845393 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:10:09 crc kubenswrapper[4958]: I1008 09:10:09.165464 4958 generic.go:334] "Generic (PLEG): container finished" podID="e671c885-ccd0-4167-ab0e-434aea504b10" 
containerID="0e92e5c56d81e3fe0f902a7ebe35f290922cb0af347473323d730f473a6d7b32" exitCode=0 Oct 08 09:10:09 crc kubenswrapper[4958]: I1008 09:10:09.165626 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" event={"ID":"e671c885-ccd0-4167-ab0e-434aea504b10","Type":"ContainerDied","Data":"0e92e5c56d81e3fe0f902a7ebe35f290922cb0af347473323d730f473a6d7b32"} Oct 08 09:10:10 crc kubenswrapper[4958]: I1008 09:10:10.851696 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:10:10 crc kubenswrapper[4958]: I1008 09:10:10.938135 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-neutron-sriov-agent-neutron-config-0\") pod \"e671c885-ccd0-4167-ab0e-434aea504b10\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " Oct 08 09:10:10 crc kubenswrapper[4958]: I1008 09:10:10.938222 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swj28\" (UniqueName: \"kubernetes.io/projected/e671c885-ccd0-4167-ab0e-434aea504b10-kube-api-access-swj28\") pod \"e671c885-ccd0-4167-ab0e-434aea504b10\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " Oct 08 09:10:10 crc kubenswrapper[4958]: I1008 09:10:10.938282 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-inventory\") pod \"e671c885-ccd0-4167-ab0e-434aea504b10\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " Oct 08 09:10:10 crc kubenswrapper[4958]: I1008 09:10:10.938309 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-ssh-key\") pod 
\"e671c885-ccd0-4167-ab0e-434aea504b10\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " Oct 08 09:10:10 crc kubenswrapper[4958]: I1008 09:10:10.938406 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-neutron-sriov-combined-ca-bundle\") pod \"e671c885-ccd0-4167-ab0e-434aea504b10\" (UID: \"e671c885-ccd0-4167-ab0e-434aea504b10\") " Oct 08 09:10:10 crc kubenswrapper[4958]: I1008 09:10:10.943920 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e671c885-ccd0-4167-ab0e-434aea504b10-kube-api-access-swj28" (OuterVolumeSpecName: "kube-api-access-swj28") pod "e671c885-ccd0-4167-ab0e-434aea504b10" (UID: "e671c885-ccd0-4167-ab0e-434aea504b10"). InnerVolumeSpecName "kube-api-access-swj28". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:10:10 crc kubenswrapper[4958]: I1008 09:10:10.946156 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "e671c885-ccd0-4167-ab0e-434aea504b10" (UID: "e671c885-ccd0-4167-ab0e-434aea504b10"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:10:10 crc kubenswrapper[4958]: I1008 09:10:10.986833 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e671c885-ccd0-4167-ab0e-434aea504b10" (UID: "e671c885-ccd0-4167-ab0e-434aea504b10"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:10:10 crc kubenswrapper[4958]: I1008 09:10:10.991990 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-inventory" (OuterVolumeSpecName: "inventory") pod "e671c885-ccd0-4167-ab0e-434aea504b10" (UID: "e671c885-ccd0-4167-ab0e-434aea504b10"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:10:10 crc kubenswrapper[4958]: I1008 09:10:10.996597 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "e671c885-ccd0-4167-ab0e-434aea504b10" (UID: "e671c885-ccd0-4167-ab0e-434aea504b10"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.041276 4958 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.041319 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swj28\" (UniqueName: \"kubernetes.io/projected/e671c885-ccd0-4167-ab0e-434aea504b10-kube-api-access-swj28\") on node \"crc\" DevicePath \"\"" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.041337 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.041349 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.041362 4958 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e671c885-ccd0-4167-ab0e-434aea504b10-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.190155 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" event={"ID":"e671c885-ccd0-4167-ab0e-434aea504b10","Type":"ContainerDied","Data":"ce833bf918afe3f7670bc2be4c00e9f88fd619dc362e434d596a54a77ba1ebae"} Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.190202 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce833bf918afe3f7670bc2be4c00e9f88fd619dc362e434d596a54a77ba1ebae" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.190267 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-nzcg9" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.346791 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr"] Oct 08 09:10:11 crc kubenswrapper[4958]: E1008 09:10:11.347556 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e671c885-ccd0-4167-ab0e-434aea504b10" containerName="neutron-sriov-openstack-openstack-cell1" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.347574 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e671c885-ccd0-4167-ab0e-434aea504b10" containerName="neutron-sriov-openstack-openstack-cell1" Oct 08 09:10:11 crc kubenswrapper[4958]: E1008 09:10:11.347595 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914caf43-5fa9-4384-970b-78abeb92dd74" containerName="extract-utilities" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.347602 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="914caf43-5fa9-4384-970b-78abeb92dd74" containerName="extract-utilities" Oct 08 09:10:11 crc kubenswrapper[4958]: E1008 09:10:11.347622 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914caf43-5fa9-4384-970b-78abeb92dd74" containerName="registry-server" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.347628 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="914caf43-5fa9-4384-970b-78abeb92dd74" containerName="registry-server" Oct 08 09:10:11 crc kubenswrapper[4958]: E1008 09:10:11.347644 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914caf43-5fa9-4384-970b-78abeb92dd74" containerName="extract-content" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.347650 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="914caf43-5fa9-4384-970b-78abeb92dd74" containerName="extract-content" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.347879 4958 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e671c885-ccd0-4167-ab0e-434aea504b10" containerName="neutron-sriov-openstack-openstack-cell1" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.347898 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="914caf43-5fa9-4384-970b-78abeb92dd74" containerName="registry-server" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.348704 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.351614 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.351859 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.352127 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.352300 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.354565 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.363661 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr"] Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.450348 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fm6m\" (UniqueName: \"kubernetes.io/projected/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-kube-api-access-5fm6m\") pod \"neutron-dhcp-openstack-openstack-cell1-xsjwr\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " 
pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.450415 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-xsjwr\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.450496 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-xsjwr\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.450580 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-xsjwr\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.450703 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-xsjwr\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.552835 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-5fm6m\" (UniqueName: \"kubernetes.io/projected/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-kube-api-access-5fm6m\") pod \"neutron-dhcp-openstack-openstack-cell1-xsjwr\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.552918 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-xsjwr\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.553039 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-xsjwr\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.553097 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-xsjwr\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.553240 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-xsjwr\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " 
pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.560151 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-xsjwr\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.560584 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-xsjwr\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.560604 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-xsjwr\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.567868 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-xsjwr\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.575782 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fm6m\" (UniqueName: 
\"kubernetes.io/projected/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-kube-api-access-5fm6m\") pod \"neutron-dhcp-openstack-openstack-cell1-xsjwr\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:11 crc kubenswrapper[4958]: I1008 09:10:11.676430 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:10:12 crc kubenswrapper[4958]: I1008 09:10:12.305803 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr"] Oct 08 09:10:12 crc kubenswrapper[4958]: W1008 09:10:12.315186 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbda6e9bf_819f_47bc_b19c_06c5aea5f9d4.slice/crio-8f7e1a983fb79607ec8d6be399c9d5fb72dbdde43b2c3849c248caa4677a610f WatchSource:0}: Error finding container 8f7e1a983fb79607ec8d6be399c9d5fb72dbdde43b2c3849c248caa4677a610f: Status 404 returned error can't find the container with id 8f7e1a983fb79607ec8d6be399c9d5fb72dbdde43b2c3849c248caa4677a610f Oct 08 09:10:13 crc kubenswrapper[4958]: I1008 09:10:13.214058 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" event={"ID":"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4","Type":"ContainerStarted","Data":"802609d13650914bde61400ad2634f2fb1df38d0235492de07d7814b70dcecdc"} Oct 08 09:10:13 crc kubenswrapper[4958]: I1008 09:10:13.214846 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" event={"ID":"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4","Type":"ContainerStarted","Data":"8f7e1a983fb79607ec8d6be399c9d5fb72dbdde43b2c3849c248caa4677a610f"} Oct 08 09:10:13 crc kubenswrapper[4958]: I1008 09:10:13.249211 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" podStartSLOduration=1.6964207660000001 podStartE2EDuration="2.249192118s" podCreationTimestamp="2025-10-08 09:10:11 +0000 UTC" firstStartedPulling="2025-10-08 09:10:12.321511455 +0000 UTC m=+9355.451204076" lastFinishedPulling="2025-10-08 09:10:12.874282827 +0000 UTC m=+9356.003975428" observedRunningTime="2025-10-08 09:10:13.24371782 +0000 UTC m=+9356.373410461" watchObservedRunningTime="2025-10-08 09:10:13.249192118 +0000 UTC m=+9356.378884719" Oct 08 09:10:36 crc kubenswrapper[4958]: I1008 09:10:36.845567 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:10:36 crc kubenswrapper[4958]: I1008 09:10:36.846356 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:11:06 crc kubenswrapper[4958]: I1008 09:11:06.844978 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:11:06 crc kubenswrapper[4958]: I1008 09:11:06.845678 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Oct 08 09:11:06 crc kubenswrapper[4958]: I1008 09:11:06.845733 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 09:11:06 crc kubenswrapper[4958]: I1008 09:11:06.846436 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c26b4d67a6b0ba10f3f0000e8595e7097309485ee9d91bb9a9c7136fe40513d"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 09:11:06 crc kubenswrapper[4958]: I1008 09:11:06.846500 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://7c26b4d67a6b0ba10f3f0000e8595e7097309485ee9d91bb9a9c7136fe40513d" gracePeriod=600 Oct 08 09:11:07 crc kubenswrapper[4958]: I1008 09:11:07.887140 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="7c26b4d67a6b0ba10f3f0000e8595e7097309485ee9d91bb9a9c7136fe40513d" exitCode=0 Oct 08 09:11:07 crc kubenswrapper[4958]: I1008 09:11:07.887248 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"7c26b4d67a6b0ba10f3f0000e8595e7097309485ee9d91bb9a9c7136fe40513d"} Oct 08 09:11:07 crc kubenswrapper[4958]: I1008 09:11:07.887671 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985"} Oct 08 09:11:07 crc 
kubenswrapper[4958]: I1008 09:11:07.887694 4958 scope.go:117] "RemoveContainer" containerID="84b81309842c4074f47693d67410854eb39e423b9394c2061260621487cff79e" Oct 08 09:13:36 crc kubenswrapper[4958]: I1008 09:13:36.844782 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:13:36 crc kubenswrapper[4958]: I1008 09:13:36.845574 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:14:06 crc kubenswrapper[4958]: I1008 09:14:06.845744 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:14:06 crc kubenswrapper[4958]: I1008 09:14:06.847193 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:14:14 crc kubenswrapper[4958]: I1008 09:14:14.215677 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xdzsj"] Oct 08 09:14:14 crc kubenswrapper[4958]: I1008 09:14:14.238574 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:14 crc kubenswrapper[4958]: I1008 09:14:14.264666 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xdzsj"] Oct 08 09:14:14 crc kubenswrapper[4958]: I1008 09:14:14.382672 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-catalog-content\") pod \"community-operators-xdzsj\" (UID: \"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968\") " pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:14 crc kubenswrapper[4958]: I1008 09:14:14.382804 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d94mb\" (UniqueName: \"kubernetes.io/projected/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-kube-api-access-d94mb\") pod \"community-operators-xdzsj\" (UID: \"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968\") " pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:14 crc kubenswrapper[4958]: I1008 09:14:14.383079 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-utilities\") pod \"community-operators-xdzsj\" (UID: \"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968\") " pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:14 crc kubenswrapper[4958]: I1008 09:14:14.484923 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-utilities\") pod \"community-operators-xdzsj\" (UID: \"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968\") " pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:14 crc kubenswrapper[4958]: I1008 09:14:14.485064 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-catalog-content\") pod \"community-operators-xdzsj\" (UID: \"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968\") " pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:14 crc kubenswrapper[4958]: I1008 09:14:14.485491 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-utilities\") pod \"community-operators-xdzsj\" (UID: \"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968\") " pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:14 crc kubenswrapper[4958]: I1008 09:14:14.485577 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-catalog-content\") pod \"community-operators-xdzsj\" (UID: \"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968\") " pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:14 crc kubenswrapper[4958]: I1008 09:14:14.485106 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d94mb\" (UniqueName: \"kubernetes.io/projected/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-kube-api-access-d94mb\") pod \"community-operators-xdzsj\" (UID: \"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968\") " pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:14 crc kubenswrapper[4958]: I1008 09:14:14.505549 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d94mb\" (UniqueName: \"kubernetes.io/projected/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-kube-api-access-d94mb\") pod \"community-operators-xdzsj\" (UID: \"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968\") " pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:14 crc kubenswrapper[4958]: I1008 09:14:14.568498 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:15 crc kubenswrapper[4958]: I1008 09:14:15.158922 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xdzsj"] Oct 08 09:14:15 crc kubenswrapper[4958]: I1008 09:14:15.300633 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdzsj" event={"ID":"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968","Type":"ContainerStarted","Data":"806b54290655ecb41c4cf5832b36fd48e12d404283836d951271a06400ad6b11"} Oct 08 09:14:16 crc kubenswrapper[4958]: I1008 09:14:16.319891 4958 generic.go:334] "Generic (PLEG): container finished" podID="05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968" containerID="a9db23b02bd1e70fabdc75f066809761026ed4619663e3b50e341215d17d1e9a" exitCode=0 Oct 08 09:14:16 crc kubenswrapper[4958]: I1008 09:14:16.320322 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdzsj" event={"ID":"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968","Type":"ContainerDied","Data":"a9db23b02bd1e70fabdc75f066809761026ed4619663e3b50e341215d17d1e9a"} Oct 08 09:14:16 crc kubenswrapper[4958]: I1008 09:14:16.323668 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 09:14:17 crc kubenswrapper[4958]: I1008 09:14:17.374859 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdzsj" event={"ID":"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968","Type":"ContainerStarted","Data":"06da0f182ae92e9eabebafd37ebbf9cec79934b3047721f9cd0622ec5c323753"} Oct 08 09:14:19 crc kubenswrapper[4958]: I1008 09:14:19.400594 4958 generic.go:334] "Generic (PLEG): container finished" podID="05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968" containerID="06da0f182ae92e9eabebafd37ebbf9cec79934b3047721f9cd0622ec5c323753" exitCode=0 Oct 08 09:14:19 crc kubenswrapper[4958]: I1008 09:14:19.400652 4958 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-xdzsj" event={"ID":"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968","Type":"ContainerDied","Data":"06da0f182ae92e9eabebafd37ebbf9cec79934b3047721f9cd0622ec5c323753"} Oct 08 09:14:21 crc kubenswrapper[4958]: I1008 09:14:21.425814 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdzsj" event={"ID":"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968","Type":"ContainerStarted","Data":"f86682ae723576035dcead354c4310eba6b9c50e0569b7b47f1c8cc74ed92693"} Oct 08 09:14:21 crc kubenswrapper[4958]: I1008 09:14:21.458830 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xdzsj" podStartSLOduration=3.876492029 podStartE2EDuration="7.458806281s" podCreationTimestamp="2025-10-08 09:14:14 +0000 UTC" firstStartedPulling="2025-10-08 09:14:16.323368094 +0000 UTC m=+9599.453060715" lastFinishedPulling="2025-10-08 09:14:19.905682326 +0000 UTC m=+9603.035374967" observedRunningTime="2025-10-08 09:14:21.44696402 +0000 UTC m=+9604.576656641" watchObservedRunningTime="2025-10-08 09:14:21.458806281 +0000 UTC m=+9604.588498892" Oct 08 09:14:24 crc kubenswrapper[4958]: I1008 09:14:24.570094 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:24 crc kubenswrapper[4958]: I1008 09:14:24.570837 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:24 crc kubenswrapper[4958]: I1008 09:14:24.626791 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:25 crc kubenswrapper[4958]: I1008 09:14:25.518916 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:25 crc kubenswrapper[4958]: I1008 09:14:25.600476 
4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xdzsj"] Oct 08 09:14:27 crc kubenswrapper[4958]: I1008 09:14:27.503784 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xdzsj" podUID="05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968" containerName="registry-server" containerID="cri-o://f86682ae723576035dcead354c4310eba6b9c50e0569b7b47f1c8cc74ed92693" gracePeriod=2 Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.058864 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.116678 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-catalog-content\") pod \"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968\" (UID: \"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968\") " Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.117170 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d94mb\" (UniqueName: \"kubernetes.io/projected/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-kube-api-access-d94mb\") pod \"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968\" (UID: \"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968\") " Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.117288 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-utilities\") pod \"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968\" (UID: \"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968\") " Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.119062 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-utilities" (OuterVolumeSpecName: "utilities") pod 
"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968" (UID: "05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.125234 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-kube-api-access-d94mb" (OuterVolumeSpecName: "kube-api-access-d94mb") pod "05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968" (UID: "05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968"). InnerVolumeSpecName "kube-api-access-d94mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.172391 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968" (UID: "05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.219743 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.219785 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d94mb\" (UniqueName: \"kubernetes.io/projected/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-kube-api-access-d94mb\") on node \"crc\" DevicePath \"\"" Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.219798 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.519779 4958 generic.go:334] "Generic (PLEG): container finished" podID="05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968" containerID="f86682ae723576035dcead354c4310eba6b9c50e0569b7b47f1c8cc74ed92693" exitCode=0 Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.519815 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdzsj" event={"ID":"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968","Type":"ContainerDied","Data":"f86682ae723576035dcead354c4310eba6b9c50e0569b7b47f1c8cc74ed92693"} Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.519838 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdzsj" event={"ID":"05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968","Type":"ContainerDied","Data":"806b54290655ecb41c4cf5832b36fd48e12d404283836d951271a06400ad6b11"} Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.519857 4958 scope.go:117] "RemoveContainer" containerID="f86682ae723576035dcead354c4310eba6b9c50e0569b7b47f1c8cc74ed92693" Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 
09:14:28.519855 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xdzsj" Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.572367 4958 scope.go:117] "RemoveContainer" containerID="06da0f182ae92e9eabebafd37ebbf9cec79934b3047721f9cd0622ec5c323753" Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.585411 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xdzsj"] Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.602576 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xdzsj"] Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.618815 4958 scope.go:117] "RemoveContainer" containerID="a9db23b02bd1e70fabdc75f066809761026ed4619663e3b50e341215d17d1e9a" Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.680739 4958 scope.go:117] "RemoveContainer" containerID="f86682ae723576035dcead354c4310eba6b9c50e0569b7b47f1c8cc74ed92693" Oct 08 09:14:28 crc kubenswrapper[4958]: E1008 09:14:28.685726 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f86682ae723576035dcead354c4310eba6b9c50e0569b7b47f1c8cc74ed92693\": container with ID starting with f86682ae723576035dcead354c4310eba6b9c50e0569b7b47f1c8cc74ed92693 not found: ID does not exist" containerID="f86682ae723576035dcead354c4310eba6b9c50e0569b7b47f1c8cc74ed92693" Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.685800 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86682ae723576035dcead354c4310eba6b9c50e0569b7b47f1c8cc74ed92693"} err="failed to get container status \"f86682ae723576035dcead354c4310eba6b9c50e0569b7b47f1c8cc74ed92693\": rpc error: code = NotFound desc = could not find container \"f86682ae723576035dcead354c4310eba6b9c50e0569b7b47f1c8cc74ed92693\": container with ID starting with 
f86682ae723576035dcead354c4310eba6b9c50e0569b7b47f1c8cc74ed92693 not found: ID does not exist" Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.685842 4958 scope.go:117] "RemoveContainer" containerID="06da0f182ae92e9eabebafd37ebbf9cec79934b3047721f9cd0622ec5c323753" Oct 08 09:14:28 crc kubenswrapper[4958]: E1008 09:14:28.686539 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06da0f182ae92e9eabebafd37ebbf9cec79934b3047721f9cd0622ec5c323753\": container with ID starting with 06da0f182ae92e9eabebafd37ebbf9cec79934b3047721f9cd0622ec5c323753 not found: ID does not exist" containerID="06da0f182ae92e9eabebafd37ebbf9cec79934b3047721f9cd0622ec5c323753" Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.686580 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06da0f182ae92e9eabebafd37ebbf9cec79934b3047721f9cd0622ec5c323753"} err="failed to get container status \"06da0f182ae92e9eabebafd37ebbf9cec79934b3047721f9cd0622ec5c323753\": rpc error: code = NotFound desc = could not find container \"06da0f182ae92e9eabebafd37ebbf9cec79934b3047721f9cd0622ec5c323753\": container with ID starting with 06da0f182ae92e9eabebafd37ebbf9cec79934b3047721f9cd0622ec5c323753 not found: ID does not exist" Oct 08 09:14:28 crc kubenswrapper[4958]: I1008 09:14:28.686608 4958 scope.go:117] "RemoveContainer" containerID="a9db23b02bd1e70fabdc75f066809761026ed4619663e3b50e341215d17d1e9a" Oct 08 09:14:28 crc kubenswrapper[4958]: E1008 09:14:28.687276 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9db23b02bd1e70fabdc75f066809761026ed4619663e3b50e341215d17d1e9a\": container with ID starting with a9db23b02bd1e70fabdc75f066809761026ed4619663e3b50e341215d17d1e9a not found: ID does not exist" containerID="a9db23b02bd1e70fabdc75f066809761026ed4619663e3b50e341215d17d1e9a" Oct 08 09:14:28 crc 
kubenswrapper[4958]: I1008 09:14:28.687336 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9db23b02bd1e70fabdc75f066809761026ed4619663e3b50e341215d17d1e9a"} err="failed to get container status \"a9db23b02bd1e70fabdc75f066809761026ed4619663e3b50e341215d17d1e9a\": rpc error: code = NotFound desc = could not find container \"a9db23b02bd1e70fabdc75f066809761026ed4619663e3b50e341215d17d1e9a\": container with ID starting with a9db23b02bd1e70fabdc75f066809761026ed4619663e3b50e341215d17d1e9a not found: ID does not exist" Oct 08 09:14:29 crc kubenswrapper[4958]: I1008 09:14:29.596721 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968" path="/var/lib/kubelet/pods/05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968/volumes" Oct 08 09:14:36 crc kubenswrapper[4958]: I1008 09:14:36.844989 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:14:36 crc kubenswrapper[4958]: I1008 09:14:36.845603 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:14:36 crc kubenswrapper[4958]: I1008 09:14:36.845661 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 09:14:36 crc kubenswrapper[4958]: I1008 09:14:36.846761 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 09:14:36 crc kubenswrapper[4958]: I1008 09:14:36.846854 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" gracePeriod=600 Oct 08 09:14:37 crc kubenswrapper[4958]: E1008 09:14:37.010588 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:14:37 crc kubenswrapper[4958]: I1008 09:14:37.676118 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" exitCode=0 Oct 08 09:14:37 crc kubenswrapper[4958]: I1008 09:14:37.676228 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985"} Oct 08 09:14:37 crc kubenswrapper[4958]: I1008 09:14:37.676443 4958 scope.go:117] "RemoveContainer" containerID="7c26b4d67a6b0ba10f3f0000e8595e7097309485ee9d91bb9a9c7136fe40513d" Oct 08 09:14:37 crc kubenswrapper[4958]: I1008 09:14:37.678606 4958 
scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:14:37 crc kubenswrapper[4958]: E1008 09:14:37.679008 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:14:48 crc kubenswrapper[4958]: I1008 09:14:48.577580 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:14:48 crc kubenswrapper[4958]: E1008 09:14:48.578468 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.153655 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8"] Oct 08 09:15:00 crc kubenswrapper[4958]: E1008 09:15:00.155109 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968" containerName="extract-content" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.155135 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968" containerName="extract-content" Oct 08 09:15:00 crc kubenswrapper[4958]: E1008 09:15:00.155172 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968" containerName="extract-utilities" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.155186 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968" containerName="extract-utilities" Oct 08 09:15:00 crc kubenswrapper[4958]: E1008 09:15:00.155219 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968" containerName="registry-server" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.155231 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968" containerName="registry-server" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.155611 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="05a3d6dc-c3e8-4fe7-a389-5ae1e5fb9968" containerName="registry-server" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.157019 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.159995 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.159995 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.164173 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8"] Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.277937 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26f2c1cd-1df9-433a-acf7-7077abda1346-config-volume\") pod \"collect-profiles-29331915-59bz8\" 
(UID: \"26f2c1cd-1df9-433a-acf7-7077abda1346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.278183 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26f2c1cd-1df9-433a-acf7-7077abda1346-secret-volume\") pod \"collect-profiles-29331915-59bz8\" (UID: \"26f2c1cd-1df9-433a-acf7-7077abda1346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.278414 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4fxs\" (UniqueName: \"kubernetes.io/projected/26f2c1cd-1df9-433a-acf7-7077abda1346-kube-api-access-q4fxs\") pod \"collect-profiles-29331915-59bz8\" (UID: \"26f2c1cd-1df9-433a-acf7-7077abda1346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.380808 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4fxs\" (UniqueName: \"kubernetes.io/projected/26f2c1cd-1df9-433a-acf7-7077abda1346-kube-api-access-q4fxs\") pod \"collect-profiles-29331915-59bz8\" (UID: \"26f2c1cd-1df9-433a-acf7-7077abda1346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.380973 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26f2c1cd-1df9-433a-acf7-7077abda1346-config-volume\") pod \"collect-profiles-29331915-59bz8\" (UID: \"26f2c1cd-1df9-433a-acf7-7077abda1346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.381079 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26f2c1cd-1df9-433a-acf7-7077abda1346-secret-volume\") pod \"collect-profiles-29331915-59bz8\" (UID: \"26f2c1cd-1df9-433a-acf7-7077abda1346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.383114 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26f2c1cd-1df9-433a-acf7-7077abda1346-config-volume\") pod \"collect-profiles-29331915-59bz8\" (UID: \"26f2c1cd-1df9-433a-acf7-7077abda1346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.391132 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26f2c1cd-1df9-433a-acf7-7077abda1346-secret-volume\") pod \"collect-profiles-29331915-59bz8\" (UID: \"26f2c1cd-1df9-433a-acf7-7077abda1346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.397899 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4fxs\" (UniqueName: \"kubernetes.io/projected/26f2c1cd-1df9-433a-acf7-7077abda1346-kube-api-access-q4fxs\") pod \"collect-profiles-29331915-59bz8\" (UID: \"26f2c1cd-1df9-433a-acf7-7077abda1346\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" Oct 08 09:15:00 crc kubenswrapper[4958]: I1008 09:15:00.493882 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" Oct 08 09:15:01 crc kubenswrapper[4958]: I1008 09:15:01.011520 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8"] Oct 08 09:15:01 crc kubenswrapper[4958]: W1008 09:15:01.643284 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26f2c1cd_1df9_433a_acf7_7077abda1346.slice/crio-e5cec58e436423c4dfe2cdf67115574af2cceef47c79d486c6f929deed398c99 WatchSource:0}: Error finding container e5cec58e436423c4dfe2cdf67115574af2cceef47c79d486c6f929deed398c99: Status 404 returned error can't find the container with id e5cec58e436423c4dfe2cdf67115574af2cceef47c79d486c6f929deed398c99 Oct 08 09:15:01 crc kubenswrapper[4958]: I1008 09:15:01.978250 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" event={"ID":"26f2c1cd-1df9-433a-acf7-7077abda1346","Type":"ContainerStarted","Data":"e5cec58e436423c4dfe2cdf67115574af2cceef47c79d486c6f929deed398c99"} Oct 08 09:15:01 crc kubenswrapper[4958]: I1008 09:15:01.987903 4958 trace.go:236] Trace[735542532]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (08-Oct-2025 09:15:00.982) (total time: 1005ms): Oct 08 09:15:01 crc kubenswrapper[4958]: Trace[735542532]: [1.00543683s] [1.00543683s] END Oct 08 09:15:02 crc kubenswrapper[4958]: I1008 09:15:02.577046 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:15:02 crc kubenswrapper[4958]: E1008 09:15:02.577615 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:15:02 crc kubenswrapper[4958]: I1008 09:15:02.993384 4958 generic.go:334] "Generic (PLEG): container finished" podID="26f2c1cd-1df9-433a-acf7-7077abda1346" containerID="2b636fc70fa2b1a3ba01a1e34228179f3950b56566d10f6767cd1bc3f7590553" exitCode=0 Oct 08 09:15:02 crc kubenswrapper[4958]: I1008 09:15:02.993426 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" event={"ID":"26f2c1cd-1df9-433a-acf7-7077abda1346","Type":"ContainerDied","Data":"2b636fc70fa2b1a3ba01a1e34228179f3950b56566d10f6767cd1bc3f7590553"} Oct 08 09:15:04 crc kubenswrapper[4958]: I1008 09:15:04.378379 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" Oct 08 09:15:04 crc kubenswrapper[4958]: I1008 09:15:04.479795 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4fxs\" (UniqueName: \"kubernetes.io/projected/26f2c1cd-1df9-433a-acf7-7077abda1346-kube-api-access-q4fxs\") pod \"26f2c1cd-1df9-433a-acf7-7077abda1346\" (UID: \"26f2c1cd-1df9-433a-acf7-7077abda1346\") " Oct 08 09:15:04 crc kubenswrapper[4958]: I1008 09:15:04.479886 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26f2c1cd-1df9-433a-acf7-7077abda1346-config-volume\") pod \"26f2c1cd-1df9-433a-acf7-7077abda1346\" (UID: \"26f2c1cd-1df9-433a-acf7-7077abda1346\") " Oct 08 09:15:04 crc kubenswrapper[4958]: I1008 09:15:04.480091 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/26f2c1cd-1df9-433a-acf7-7077abda1346-secret-volume\") pod \"26f2c1cd-1df9-433a-acf7-7077abda1346\" (UID: \"26f2c1cd-1df9-433a-acf7-7077abda1346\") " Oct 08 09:15:04 crc kubenswrapper[4958]: I1008 09:15:04.483358 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26f2c1cd-1df9-433a-acf7-7077abda1346-config-volume" (OuterVolumeSpecName: "config-volume") pod "26f2c1cd-1df9-433a-acf7-7077abda1346" (UID: "26f2c1cd-1df9-433a-acf7-7077abda1346"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 09:15:04 crc kubenswrapper[4958]: I1008 09:15:04.488572 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f2c1cd-1df9-433a-acf7-7077abda1346-kube-api-access-q4fxs" (OuterVolumeSpecName: "kube-api-access-q4fxs") pod "26f2c1cd-1df9-433a-acf7-7077abda1346" (UID: "26f2c1cd-1df9-433a-acf7-7077abda1346"). InnerVolumeSpecName "kube-api-access-q4fxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:15:04 crc kubenswrapper[4958]: I1008 09:15:04.491078 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f2c1cd-1df9-433a-acf7-7077abda1346-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "26f2c1cd-1df9-433a-acf7-7077abda1346" (UID: "26f2c1cd-1df9-433a-acf7-7077abda1346"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:15:04 crc kubenswrapper[4958]: I1008 09:15:04.583706 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4fxs\" (UniqueName: \"kubernetes.io/projected/26f2c1cd-1df9-433a-acf7-7077abda1346-kube-api-access-q4fxs\") on node \"crc\" DevicePath \"\"" Oct 08 09:15:04 crc kubenswrapper[4958]: I1008 09:15:04.583748 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26f2c1cd-1df9-433a-acf7-7077abda1346-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 09:15:04 crc kubenswrapper[4958]: I1008 09:15:04.583765 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26f2c1cd-1df9-433a-acf7-7077abda1346-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 09:15:05 crc kubenswrapper[4958]: I1008 09:15:05.019769 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" event={"ID":"26f2c1cd-1df9-433a-acf7-7077abda1346","Type":"ContainerDied","Data":"e5cec58e436423c4dfe2cdf67115574af2cceef47c79d486c6f929deed398c99"} Oct 08 09:15:05 crc kubenswrapper[4958]: I1008 09:15:05.019819 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5cec58e436423c4dfe2cdf67115574af2cceef47c79d486c6f929deed398c99" Oct 08 09:15:05 crc kubenswrapper[4958]: I1008 09:15:05.019863 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331915-59bz8" Oct 08 09:15:05 crc kubenswrapper[4958]: I1008 09:15:05.484242 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk"] Oct 08 09:15:05 crc kubenswrapper[4958]: I1008 09:15:05.492928 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331870-c2gtk"] Oct 08 09:15:05 crc kubenswrapper[4958]: I1008 09:15:05.605003 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="324381e9-d522-445c-a944-9a69916fb000" path="/var/lib/kubelet/pods/324381e9-d522-445c-a944-9a69916fb000/volumes" Oct 08 09:15:15 crc kubenswrapper[4958]: I1008 09:15:15.577420 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:15:15 crc kubenswrapper[4958]: E1008 09:15:15.578642 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:15:30 crc kubenswrapper[4958]: I1008 09:15:30.576719 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:15:30 crc kubenswrapper[4958]: E1008 09:15:30.577555 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:15:45 crc kubenswrapper[4958]: I1008 09:15:45.576534 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:15:45 crc kubenswrapper[4958]: E1008 09:15:45.577899 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:15:49 crc kubenswrapper[4958]: I1008 09:15:49.860691 4958 scope.go:117] "RemoveContainer" containerID="5a737a9e373621adecae6251c5639882b1040324a9390aa11a37240a510146c3" Oct 08 09:16:00 crc kubenswrapper[4958]: I1008 09:16:00.576531 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:16:00 crc kubenswrapper[4958]: E1008 09:16:00.578080 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:16:14 crc kubenswrapper[4958]: I1008 09:16:14.577012 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:16:14 crc kubenswrapper[4958]: E1008 09:16:14.577845 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:16:18 crc kubenswrapper[4958]: I1008 09:16:18.968021 4958 generic.go:334] "Generic (PLEG): container finished" podID="bda6e9bf-819f-47bc-b19c-06c5aea5f9d4" containerID="802609d13650914bde61400ad2634f2fb1df38d0235492de07d7814b70dcecdc" exitCode=0 Oct 08 09:16:18 crc kubenswrapper[4958]: I1008 09:16:18.968156 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" event={"ID":"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4","Type":"ContainerDied","Data":"802609d13650914bde61400ad2634f2fb1df38d0235492de07d7814b70dcecdc"} Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.479101 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.604610 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-ssh-key\") pod \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.604716 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-neutron-dhcp-agent-neutron-config-0\") pod \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.604786 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fm6m\" (UniqueName: \"kubernetes.io/projected/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-kube-api-access-5fm6m\") pod \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.604898 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-inventory\") pod \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.605022 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-neutron-dhcp-combined-ca-bundle\") pod \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\" (UID: \"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4\") " Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.610471 4958 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-kube-api-access-5fm6m" (OuterVolumeSpecName: "kube-api-access-5fm6m") pod "bda6e9bf-819f-47bc-b19c-06c5aea5f9d4" (UID: "bda6e9bf-819f-47bc-b19c-06c5aea5f9d4"). InnerVolumeSpecName "kube-api-access-5fm6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.612348 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "bda6e9bf-819f-47bc-b19c-06c5aea5f9d4" (UID: "bda6e9bf-819f-47bc-b19c-06c5aea5f9d4"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.644598 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-inventory" (OuterVolumeSpecName: "inventory") pod "bda6e9bf-819f-47bc-b19c-06c5aea5f9d4" (UID: "bda6e9bf-819f-47bc-b19c-06c5aea5f9d4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.648089 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "bda6e9bf-819f-47bc-b19c-06c5aea5f9d4" (UID: "bda6e9bf-819f-47bc-b19c-06c5aea5f9d4"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.649442 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bda6e9bf-819f-47bc-b19c-06c5aea5f9d4" (UID: "bda6e9bf-819f-47bc-b19c-06c5aea5f9d4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.707666 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.707714 4958 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.707729 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fm6m\" (UniqueName: \"kubernetes.io/projected/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-kube-api-access-5fm6m\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.707741 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.707755 4958 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda6e9bf-819f-47bc-b19c-06c5aea5f9d4-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.997075 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" event={"ID":"bda6e9bf-819f-47bc-b19c-06c5aea5f9d4","Type":"ContainerDied","Data":"8f7e1a983fb79607ec8d6be399c9d5fb72dbdde43b2c3849c248caa4677a610f"} Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.997495 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f7e1a983fb79607ec8d6be399c9d5fb72dbdde43b2c3849c248caa4677a610f" Oct 08 09:16:20 crc kubenswrapper[4958]: I1008 09:16:20.997653 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xsjwr" Oct 08 09:16:29 crc kubenswrapper[4958]: I1008 09:16:29.577637 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:16:29 crc kubenswrapper[4958]: E1008 09:16:29.578571 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:16:44 crc kubenswrapper[4958]: I1008 09:16:44.577188 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:16:44 crc kubenswrapper[4958]: E1008 09:16:44.577991 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:16:48 crc 
kubenswrapper[4958]: I1008 09:16:48.655574 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 09:16:48 crc kubenswrapper[4958]: I1008 09:16:48.656467 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="e9c47799-3381-43c2-85bd-d8e7c454560e" containerName="nova-cell0-conductor-conductor" containerID="cri-o://7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8" gracePeriod=30 Oct 08 09:16:48 crc kubenswrapper[4958]: I1008 09:16:48.699621 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 09:16:48 crc kubenswrapper[4958]: I1008 09:16:48.699865 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="873281d6-2822-4443-9128-6db80f5440b1" containerName="nova-cell1-conductor-conductor" containerID="cri-o://1fa7eab594c1ddecd70837bcb0733788d3612c6549c8917a5986f50902eba99b" gracePeriod=30 Oct 08 09:16:49 crc kubenswrapper[4958]: I1008 09:16:49.377383 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 09:16:49 crc kubenswrapper[4958]: I1008 09:16:49.377898 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="db1ceb64-6b38-468e-947a-77dfd6d79194" containerName="nova-api-log" containerID="cri-o://6c2bd6d62428e0f02518d8f61d0cc5461ff70e202877fef55a1c964bdfe0607b" gracePeriod=30 Oct 08 09:16:49 crc kubenswrapper[4958]: I1008 09:16:49.377988 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="db1ceb64-6b38-468e-947a-77dfd6d79194" containerName="nova-api-api" containerID="cri-o://a771664f578cd8a5189ef2b9541adf0b97613f1b9c7f52da4270ef3bb98aca34" gracePeriod=30 Oct 08 09:16:49 crc kubenswrapper[4958]: I1008 09:16:49.395746 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-scheduler-0"] Oct 08 09:16:49 crc kubenswrapper[4958]: I1008 09:16:49.395978 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b4d1d4c1-82c0-4fd0-b97c-4fff7269e473" containerName="nova-scheduler-scheduler" containerID="cri-o://995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b" gracePeriod=30 Oct 08 09:16:49 crc kubenswrapper[4958]: I1008 09:16:49.419161 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 09:16:49 crc kubenswrapper[4958]: I1008 09:16:49.419396 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="65f973da-e0ee-44b0-8a30-4ae4540d1642" containerName="nova-metadata-log" containerID="cri-o://fcb570eb52f626f2f6b6bf7efa28480231191670e8a574f4e187eb8d05973309" gracePeriod=30 Oct 08 09:16:49 crc kubenswrapper[4958]: I1008 09:16:49.419445 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="65f973da-e0ee-44b0-8a30-4ae4540d1642" containerName="nova-metadata-metadata" containerID="cri-o://c8fb2047104a4b75db431fb5cc9924b0d5720b75915ef0ac326799e5e137b4b6" gracePeriod=30 Oct 08 09:16:49 crc kubenswrapper[4958]: E1008 09:16:49.648812 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 09:16:49 crc kubenswrapper[4958]: E1008 09:16:49.650229 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 09:16:49 crc kubenswrapper[4958]: E1008 09:16:49.652146 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Oct 08 09:16:49 crc kubenswrapper[4958]: E1008 09:16:49.652192 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="e9c47799-3381-43c2-85bd-d8e7c454560e" containerName="nova-cell0-conductor-conductor" Oct 08 09:16:50 crc kubenswrapper[4958]: I1008 09:16:50.346081 4958 generic.go:334] "Generic (PLEG): container finished" podID="65f973da-e0ee-44b0-8a30-4ae4540d1642" containerID="fcb570eb52f626f2f6b6bf7efa28480231191670e8a574f4e187eb8d05973309" exitCode=143 Oct 08 09:16:50 crc kubenswrapper[4958]: I1008 09:16:50.346161 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65f973da-e0ee-44b0-8a30-4ae4540d1642","Type":"ContainerDied","Data":"fcb570eb52f626f2f6b6bf7efa28480231191670e8a574f4e187eb8d05973309"} Oct 08 09:16:50 crc kubenswrapper[4958]: I1008 09:16:50.349382 4958 generic.go:334] "Generic (PLEG): container finished" podID="db1ceb64-6b38-468e-947a-77dfd6d79194" containerID="6c2bd6d62428e0f02518d8f61d0cc5461ff70e202877fef55a1c964bdfe0607b" exitCode=143 Oct 08 09:16:50 crc kubenswrapper[4958]: I1008 09:16:50.349424 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db1ceb64-6b38-468e-947a-77dfd6d79194","Type":"ContainerDied","Data":"6c2bd6d62428e0f02518d8f61d0cc5461ff70e202877fef55a1c964bdfe0607b"} Oct 08 09:16:50 crc kubenswrapper[4958]: E1008 
09:16:50.460094 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 09:16:50 crc kubenswrapper[4958]: E1008 09:16:50.462086 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 09:16:50 crc kubenswrapper[4958]: E1008 09:16:50.465905 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 08 09:16:50 crc kubenswrapper[4958]: E1008 09:16:50.465984 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b4d1d4c1-82c0-4fd0-b97c-4fff7269e473" containerName="nova-scheduler-scheduler" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.257022 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.358341 4958 generic.go:334] "Generic (PLEG): container finished" podID="873281d6-2822-4443-9128-6db80f5440b1" containerID="1fa7eab594c1ddecd70837bcb0733788d3612c6549c8917a5986f50902eba99b" exitCode=0 Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.358378 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"873281d6-2822-4443-9128-6db80f5440b1","Type":"ContainerDied","Data":"1fa7eab594c1ddecd70837bcb0733788d3612c6549c8917a5986f50902eba99b"} Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.358400 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"873281d6-2822-4443-9128-6db80f5440b1","Type":"ContainerDied","Data":"b2ad372ea1469b5d4661083d9efef77d7590b3805197a3ccb7ef9771a349fcc6"} Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.358415 4958 scope.go:117] "RemoveContainer" containerID="1fa7eab594c1ddecd70837bcb0733788d3612c6549c8917a5986f50902eba99b" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.358524 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.382692 4958 scope.go:117] "RemoveContainer" containerID="1fa7eab594c1ddecd70837bcb0733788d3612c6549c8917a5986f50902eba99b" Oct 08 09:16:51 crc kubenswrapper[4958]: E1008 09:16:51.383208 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fa7eab594c1ddecd70837bcb0733788d3612c6549c8917a5986f50902eba99b\": container with ID starting with 1fa7eab594c1ddecd70837bcb0733788d3612c6549c8917a5986f50902eba99b not found: ID does not exist" containerID="1fa7eab594c1ddecd70837bcb0733788d3612c6549c8917a5986f50902eba99b" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.383245 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fa7eab594c1ddecd70837bcb0733788d3612c6549c8917a5986f50902eba99b"} err="failed to get container status \"1fa7eab594c1ddecd70837bcb0733788d3612c6549c8917a5986f50902eba99b\": rpc error: code = NotFound desc = could not find container \"1fa7eab594c1ddecd70837bcb0733788d3612c6549c8917a5986f50902eba99b\": container with ID starting with 1fa7eab594c1ddecd70837bcb0733788d3612c6549c8917a5986f50902eba99b not found: ID does not exist" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.386485 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873281d6-2822-4443-9128-6db80f5440b1-combined-ca-bundle\") pod \"873281d6-2822-4443-9128-6db80f5440b1\" (UID: \"873281d6-2822-4443-9128-6db80f5440b1\") " Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.386602 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx2cn\" (UniqueName: \"kubernetes.io/projected/873281d6-2822-4443-9128-6db80f5440b1-kube-api-access-sx2cn\") pod \"873281d6-2822-4443-9128-6db80f5440b1\" (UID: 
\"873281d6-2822-4443-9128-6db80f5440b1\") " Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.386667 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873281d6-2822-4443-9128-6db80f5440b1-config-data\") pod \"873281d6-2822-4443-9128-6db80f5440b1\" (UID: \"873281d6-2822-4443-9128-6db80f5440b1\") " Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.392829 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/873281d6-2822-4443-9128-6db80f5440b1-kube-api-access-sx2cn" (OuterVolumeSpecName: "kube-api-access-sx2cn") pod "873281d6-2822-4443-9128-6db80f5440b1" (UID: "873281d6-2822-4443-9128-6db80f5440b1"). InnerVolumeSpecName "kube-api-access-sx2cn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.421793 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873281d6-2822-4443-9128-6db80f5440b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "873281d6-2822-4443-9128-6db80f5440b1" (UID: "873281d6-2822-4443-9128-6db80f5440b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.436252 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873281d6-2822-4443-9128-6db80f5440b1-config-data" (OuterVolumeSpecName: "config-data") pod "873281d6-2822-4443-9128-6db80f5440b1" (UID: "873281d6-2822-4443-9128-6db80f5440b1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.489338 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873281d6-2822-4443-9128-6db80f5440b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.489371 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx2cn\" (UniqueName: \"kubernetes.io/projected/873281d6-2822-4443-9128-6db80f5440b1-kube-api-access-sx2cn\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.489382 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873281d6-2822-4443-9128-6db80f5440b1-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.678610 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.686742 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.698321 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 09:16:51 crc kubenswrapper[4958]: E1008 09:16:51.698712 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda6e9bf-819f-47bc-b19c-06c5aea5f9d4" containerName="neutron-dhcp-openstack-openstack-cell1" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.698728 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda6e9bf-819f-47bc-b19c-06c5aea5f9d4" containerName="neutron-dhcp-openstack-openstack-cell1" Oct 08 09:16:51 crc kubenswrapper[4958]: E1008 09:16:51.698757 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26f2c1cd-1df9-433a-acf7-7077abda1346" containerName="collect-profiles" Oct 08 
09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.698763 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="26f2c1cd-1df9-433a-acf7-7077abda1346" containerName="collect-profiles" Oct 08 09:16:51 crc kubenswrapper[4958]: E1008 09:16:51.698786 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873281d6-2822-4443-9128-6db80f5440b1" containerName="nova-cell1-conductor-conductor" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.698793 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="873281d6-2822-4443-9128-6db80f5440b1" containerName="nova-cell1-conductor-conductor" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.698987 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="873281d6-2822-4443-9128-6db80f5440b1" containerName="nova-cell1-conductor-conductor" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.699010 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bda6e9bf-819f-47bc-b19c-06c5aea5f9d4" containerName="neutron-dhcp-openstack-openstack-cell1" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.699020 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="26f2c1cd-1df9-433a-acf7-7077abda1346" containerName="collect-profiles" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.699710 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.702936 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.731470 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.795424 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d09715-fb73-466c-af3c-ca0d1aca203f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"92d09715-fb73-466c-af3c-ca0d1aca203f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.795574 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swzrn\" (UniqueName: \"kubernetes.io/projected/92d09715-fb73-466c-af3c-ca0d1aca203f-kube-api-access-swzrn\") pod \"nova-cell1-conductor-0\" (UID: \"92d09715-fb73-466c-af3c-ca0d1aca203f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.795604 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d09715-fb73-466c-af3c-ca0d1aca203f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"92d09715-fb73-466c-af3c-ca0d1aca203f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.897497 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d09715-fb73-466c-af3c-ca0d1aca203f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"92d09715-fb73-466c-af3c-ca0d1aca203f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 09:16:51 crc 
kubenswrapper[4958]: I1008 09:16:51.897898 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swzrn\" (UniqueName: \"kubernetes.io/projected/92d09715-fb73-466c-af3c-ca0d1aca203f-kube-api-access-swzrn\") pod \"nova-cell1-conductor-0\" (UID: \"92d09715-fb73-466c-af3c-ca0d1aca203f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.897926 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d09715-fb73-466c-af3c-ca0d1aca203f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"92d09715-fb73-466c-af3c-ca0d1aca203f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.902467 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d09715-fb73-466c-af3c-ca0d1aca203f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"92d09715-fb73-466c-af3c-ca0d1aca203f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.905563 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d09715-fb73-466c-af3c-ca0d1aca203f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"92d09715-fb73-466c-af3c-ca0d1aca203f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 09:16:51 crc kubenswrapper[4958]: I1008 09:16:51.916178 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swzrn\" (UniqueName: \"kubernetes.io/projected/92d09715-fb73-466c-af3c-ca0d1aca203f-kube-api-access-swzrn\") pod \"nova-cell1-conductor-0\" (UID: \"92d09715-fb73-466c-af3c-ca0d1aca203f\") " pod="openstack/nova-cell1-conductor-0" Oct 08 09:16:52 crc kubenswrapper[4958]: I1008 09:16:52.014450 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 08 09:16:52 crc kubenswrapper[4958]: I1008 09:16:52.564395 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 08 09:16:52 crc kubenswrapper[4958]: I1008 09:16:52.577851 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="65f973da-e0ee-44b0-8a30-4ae4540d1642" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.95:8775/\": read tcp 10.217.0.2:43374->10.217.1.95:8775: read: connection reset by peer" Oct 08 09:16:52 crc kubenswrapper[4958]: I1008 09:16:52.577932 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="65f973da-e0ee-44b0-8a30-4ae4540d1642" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.95:8775/\": read tcp 10.217.0.2:43360->10.217.1.95:8775: read: connection reset by peer" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.389825 4958 generic.go:334] "Generic (PLEG): container finished" podID="db1ceb64-6b38-468e-947a-77dfd6d79194" containerID="a771664f578cd8a5189ef2b9541adf0b97613f1b9c7f52da4270ef3bb98aca34" exitCode=0 Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.390746 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db1ceb64-6b38-468e-947a-77dfd6d79194","Type":"ContainerDied","Data":"a771664f578cd8a5189ef2b9541adf0b97613f1b9c7f52da4270ef3bb98aca34"} Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.391861 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"92d09715-fb73-466c-af3c-ca0d1aca203f","Type":"ContainerStarted","Data":"be450ce4ecb905f8d5b5a08ca78d1f7f6337d13a7601d0d46b4b3fdbfe56d9fa"} Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.393449 4958 generic.go:334] "Generic (PLEG): container finished" podID="65f973da-e0ee-44b0-8a30-4ae4540d1642" 
containerID="c8fb2047104a4b75db431fb5cc9924b0d5720b75915ef0ac326799e5e137b4b6" exitCode=0 Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.393487 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65f973da-e0ee-44b0-8a30-4ae4540d1642","Type":"ContainerDied","Data":"c8fb2047104a4b75db431fb5cc9924b0d5720b75915ef0ac326799e5e137b4b6"} Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.591808 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="873281d6-2822-4443-9128-6db80f5440b1" path="/var/lib/kubelet/pods/873281d6-2822-4443-9128-6db80f5440b1/volumes" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.734346 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.743050 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.854517 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcrcm\" (UniqueName: \"kubernetes.io/projected/65f973da-e0ee-44b0-8a30-4ae4540d1642-kube-api-access-lcrcm\") pod \"65f973da-e0ee-44b0-8a30-4ae4540d1642\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.854600 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-config-data\") pod \"65f973da-e0ee-44b0-8a30-4ae4540d1642\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.854645 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-public-tls-certs\") pod 
\"db1ceb64-6b38-468e-947a-77dfd6d79194\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.854703 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-config-data\") pod \"db1ceb64-6b38-468e-947a-77dfd6d79194\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.854747 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-nova-metadata-tls-certs\") pod \"65f973da-e0ee-44b0-8a30-4ae4540d1642\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.854773 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db1ceb64-6b38-468e-947a-77dfd6d79194-logs\") pod \"db1ceb64-6b38-468e-947a-77dfd6d79194\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.854838 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tm6s\" (UniqueName: \"kubernetes.io/projected/db1ceb64-6b38-468e-947a-77dfd6d79194-kube-api-access-2tm6s\") pod \"db1ceb64-6b38-468e-947a-77dfd6d79194\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.854892 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-internal-tls-certs\") pod \"db1ceb64-6b38-468e-947a-77dfd6d79194\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.854966 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-combined-ca-bundle\") pod \"db1ceb64-6b38-468e-947a-77dfd6d79194\" (UID: \"db1ceb64-6b38-468e-947a-77dfd6d79194\") " Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.855002 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-combined-ca-bundle\") pod \"65f973da-e0ee-44b0-8a30-4ae4540d1642\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.855017 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65f973da-e0ee-44b0-8a30-4ae4540d1642-logs\") pod \"65f973da-e0ee-44b0-8a30-4ae4540d1642\" (UID: \"65f973da-e0ee-44b0-8a30-4ae4540d1642\") " Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.856826 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f973da-e0ee-44b0-8a30-4ae4540d1642-logs" (OuterVolumeSpecName: "logs") pod "65f973da-e0ee-44b0-8a30-4ae4540d1642" (UID: "65f973da-e0ee-44b0-8a30-4ae4540d1642"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.857082 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db1ceb64-6b38-468e-947a-77dfd6d79194-logs" (OuterVolumeSpecName: "logs") pod "db1ceb64-6b38-468e-947a-77dfd6d79194" (UID: "db1ceb64-6b38-468e-947a-77dfd6d79194"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.862413 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f973da-e0ee-44b0-8a30-4ae4540d1642-kube-api-access-lcrcm" (OuterVolumeSpecName: "kube-api-access-lcrcm") pod "65f973da-e0ee-44b0-8a30-4ae4540d1642" (UID: "65f973da-e0ee-44b0-8a30-4ae4540d1642"). InnerVolumeSpecName "kube-api-access-lcrcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.865217 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db1ceb64-6b38-468e-947a-77dfd6d79194-kube-api-access-2tm6s" (OuterVolumeSpecName: "kube-api-access-2tm6s") pod "db1ceb64-6b38-468e-947a-77dfd6d79194" (UID: "db1ceb64-6b38-468e-947a-77dfd6d79194"). InnerVolumeSpecName "kube-api-access-2tm6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.919050 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-config-data" (OuterVolumeSpecName: "config-data") pod "65f973da-e0ee-44b0-8a30-4ae4540d1642" (UID: "65f973da-e0ee-44b0-8a30-4ae4540d1642"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.958777 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.958805 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db1ceb64-6b38-468e-947a-77dfd6d79194-logs\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.958815 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tm6s\" (UniqueName: \"kubernetes.io/projected/db1ceb64-6b38-468e-947a-77dfd6d79194-kube-api-access-2tm6s\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.958823 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65f973da-e0ee-44b0-8a30-4ae4540d1642-logs\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.958832 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcrcm\" (UniqueName: \"kubernetes.io/projected/65f973da-e0ee-44b0-8a30-4ae4540d1642-kube-api-access-lcrcm\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.974793 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-config-data" (OuterVolumeSpecName: "config-data") pod "db1ceb64-6b38-468e-947a-77dfd6d79194" (UID: "db1ceb64-6b38-468e-947a-77dfd6d79194"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.975880 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "db1ceb64-6b38-468e-947a-77dfd6d79194" (UID: "db1ceb64-6b38-468e-947a-77dfd6d79194"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.976698 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "db1ceb64-6b38-468e-947a-77dfd6d79194" (UID: "db1ceb64-6b38-468e-947a-77dfd6d79194"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:53 crc kubenswrapper[4958]: I1008 09:16:53.986620 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65f973da-e0ee-44b0-8a30-4ae4540d1642" (UID: "65f973da-e0ee-44b0-8a30-4ae4540d1642"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.003255 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db1ceb64-6b38-468e-947a-77dfd6d79194" (UID: "db1ceb64-6b38-468e-947a-77dfd6d79194"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.010738 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "65f973da-e0ee-44b0-8a30-4ae4540d1642" (UID: "65f973da-e0ee-44b0-8a30-4ae4540d1642"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.048690 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.060546 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.060588 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.060600 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.060608 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.060616 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/db1ceb64-6b38-468e-947a-77dfd6d79194-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.060626 4958 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f973da-e0ee-44b0-8a30-4ae4540d1642-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.066129 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr"] Oct 08 09:16:54 crc kubenswrapper[4958]: E1008 09:16:54.066571 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f973da-e0ee-44b0-8a30-4ae4540d1642" containerName="nova-metadata-metadata" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.066607 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f973da-e0ee-44b0-8a30-4ae4540d1642" containerName="nova-metadata-metadata" Oct 08 09:16:54 crc kubenswrapper[4958]: E1008 09:16:54.066621 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1ceb64-6b38-468e-947a-77dfd6d79194" containerName="nova-api-api" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.066628 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1ceb64-6b38-468e-947a-77dfd6d79194" containerName="nova-api-api" Oct 08 09:16:54 crc kubenswrapper[4958]: E1008 09:16:54.066647 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1ceb64-6b38-468e-947a-77dfd6d79194" containerName="nova-api-log" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.066653 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1ceb64-6b38-468e-947a-77dfd6d79194" containerName="nova-api-log" Oct 08 09:16:54 crc kubenswrapper[4958]: E1008 09:16:54.066674 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c47799-3381-43c2-85bd-d8e7c454560e" containerName="nova-cell0-conductor-conductor" 
Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.066679 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c47799-3381-43c2-85bd-d8e7c454560e" containerName="nova-cell0-conductor-conductor" Oct 08 09:16:54 crc kubenswrapper[4958]: E1008 09:16:54.066701 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f973da-e0ee-44b0-8a30-4ae4540d1642" containerName="nova-metadata-log" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.066707 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f973da-e0ee-44b0-8a30-4ae4540d1642" containerName="nova-metadata-log" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.066892 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c47799-3381-43c2-85bd-d8e7c454560e" containerName="nova-cell0-conductor-conductor" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.066906 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1ceb64-6b38-468e-947a-77dfd6d79194" containerName="nova-api-log" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.066920 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f973da-e0ee-44b0-8a30-4ae4540d1642" containerName="nova-metadata-metadata" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.066933 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f973da-e0ee-44b0-8a30-4ae4540d1642" containerName="nova-metadata-log" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.066961 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1ceb64-6b38-468e-947a-77dfd6d79194" containerName="nova-api-api" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.067743 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.092296 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.092516 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-7n4bn" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.096787 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.096860 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.097637 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.098181 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.100112 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.115117 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr"] Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.162605 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c47799-3381-43c2-85bd-d8e7c454560e-combined-ca-bundle\") pod \"e9c47799-3381-43c2-85bd-d8e7c454560e\" (UID: \"e9c47799-3381-43c2-85bd-d8e7c454560e\") " Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.162749 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-88mpg\" (UniqueName: \"kubernetes.io/projected/e9c47799-3381-43c2-85bd-d8e7c454560e-kube-api-access-88mpg\") pod \"e9c47799-3381-43c2-85bd-d8e7c454560e\" (UID: \"e9c47799-3381-43c2-85bd-d8e7c454560e\") " Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.163309 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c47799-3381-43c2-85bd-d8e7c454560e-config-data\") pod \"e9c47799-3381-43c2-85bd-d8e7c454560e\" (UID: \"e9c47799-3381-43c2-85bd-d8e7c454560e\") " Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.163705 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.163744 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.163765 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.163792 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5czwc\" (UniqueName: \"kubernetes.io/projected/093ebdbf-a8f2-422f-ad7f-a7893ea25990-kube-api-access-5czwc\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.164041 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.164116 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.164586 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.164795 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.165004 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.167457 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c47799-3381-43c2-85bd-d8e7c454560e-kube-api-access-88mpg" (OuterVolumeSpecName: "kube-api-access-88mpg") pod "e9c47799-3381-43c2-85bd-d8e7c454560e" (UID: "e9c47799-3381-43c2-85bd-d8e7c454560e"). InnerVolumeSpecName "kube-api-access-88mpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.200873 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c47799-3381-43c2-85bd-d8e7c454560e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9c47799-3381-43c2-85bd-d8e7c454560e" (UID: "e9c47799-3381-43c2-85bd-d8e7c454560e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.201411 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c47799-3381-43c2-85bd-d8e7c454560e-config-data" (OuterVolumeSpecName: "config-data") pod "e9c47799-3381-43c2-85bd-d8e7c454560e" (UID: "e9c47799-3381-43c2-85bd-d8e7c454560e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.268436 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.268516 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.268576 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.268638 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.268661 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.268676 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.268698 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5czwc\" (UniqueName: \"kubernetes.io/projected/093ebdbf-a8f2-422f-ad7f-a7893ea25990-kube-api-access-5czwc\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.268730 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.268751 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.268810 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c47799-3381-43c2-85bd-d8e7c454560e-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.268830 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c47799-3381-43c2-85bd-d8e7c454560e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.268845 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88mpg\" (UniqueName: \"kubernetes.io/projected/e9c47799-3381-43c2-85bd-d8e7c454560e-kube-api-access-88mpg\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.272886 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.273535 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.273719 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.276790 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.277200 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.277669 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.277684 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.280048 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.286808 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5czwc\" (UniqueName: \"kubernetes.io/projected/093ebdbf-a8f2-422f-ad7f-a7893ea25990-kube-api-access-5czwc\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.402117 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.409764 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db1ceb64-6b38-468e-947a-77dfd6d79194","Type":"ContainerDied","Data":"d22b53b5c6b3a07109ee42de6a739011baabcaaf29a74ac021231cbe1fbbdf65"} Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.409842 4958 scope.go:117] "RemoveContainer" containerID="a771664f578cd8a5189ef2b9541adf0b97613f1b9c7f52da4270ef3bb98aca34" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.410023 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.428451 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"92d09715-fb73-466c-af3c-ca0d1aca203f","Type":"ContainerStarted","Data":"7cdecd13566d8254c1221db62ea293cc3526eb802ec5ab3390c422b6c09f51ed"} Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.428516 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.431811 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65f973da-e0ee-44b0-8a30-4ae4540d1642","Type":"ContainerDied","Data":"9e3f9e4b4bfb82f18ea61b080bbe59ec12f1c1e49071a76ad8ff0563bd427922"} Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.431898 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.438505 4958 generic.go:334] "Generic (PLEG): container finished" podID="e9c47799-3381-43c2-85bd-d8e7c454560e" containerID="7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8" exitCode=0 Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.438546 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e9c47799-3381-43c2-85bd-d8e7c454560e","Type":"ContainerDied","Data":"7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8"} Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.438570 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e9c47799-3381-43c2-85bd-d8e7c454560e","Type":"ContainerDied","Data":"1469e3f5f6a2acb3b56d487fa27033e78780c6d086f11bab0d7a13960aaf5f8f"} Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.438619 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.448483 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.448450582 podStartE2EDuration="3.448450582s" podCreationTimestamp="2025-10-08 09:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 09:16:54.445160943 +0000 UTC m=+9757.574853554" watchObservedRunningTime="2025-10-08 09:16:54.448450582 +0000 UTC m=+9757.578143203" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.620537 4958 scope.go:117] "RemoveContainer" containerID="6c2bd6d62428e0f02518d8f61d0cc5461ff70e202877fef55a1c964bdfe0607b" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.624854 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.668166 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.715053 4958 scope.go:117] "RemoveContainer" containerID="c8fb2047104a4b75db431fb5cc9924b0d5720b75915ef0ac326799e5e137b4b6" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.720308 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.734044 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.735914 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.744400 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.744618 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.744817 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.754472 4958 scope.go:117] "RemoveContainer" containerID="fcb570eb52f626f2f6b6bf7efa28480231191670e8a574f4e187eb8d05973309" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.772599 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.785795 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.787965 4958 scope.go:117] "RemoveContainer" containerID="7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.801409 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.820961 4958 scope.go:117] "RemoveContainer" containerID="7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8" Oct 08 09:16:54 crc kubenswrapper[4958]: E1008 09:16:54.821507 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8\": container with ID starting with 7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8 not found: ID does not exist" 
containerID="7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.821551 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8"} err="failed to get container status \"7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8\": rpc error: code = NotFound desc = could not find container \"7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8\": container with ID starting with 7d3201d5279e8464556cbfeb9331c92b4d4d845fb06fff694051383e7e1a27c8 not found: ID does not exist" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.823361 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.824922 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.826336 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.833749 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.842719 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.852137 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.854635 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.857668 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.858022 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.864472 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.884572 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.884743 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.884875 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvqv2\" (UniqueName: \"kubernetes.io/projected/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-kube-api-access-jvqv2\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.884908 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.885054 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-config-data\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.885083 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-logs\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.915878 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.987044 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-combined-ca-bundle\") pod \"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473\" (UID: \"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473\") " Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.987147 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgtsm\" (UniqueName: \"kubernetes.io/projected/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-kube-api-access-bgtsm\") pod \"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473\" (UID: \"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473\") " Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.987329 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-config-data\") pod \"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473\" (UID: 
\"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473\") " Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.987685 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x82xr\" (UniqueName: \"kubernetes.io/projected/a42f13a8-39ee-400a-8bfa-1a3c9eeb739f-kube-api-access-x82xr\") pod \"nova-metadata-0\" (UID: \"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f\") " pod="openstack/nova-metadata-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.987801 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.987879 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.988601 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvqv2\" (UniqueName: \"kubernetes.io/projected/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-kube-api-access-jvqv2\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.988631 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-public-tls-certs\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.988702 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42f13a8-39ee-400a-8bfa-1a3c9eeb739f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f\") " pod="openstack/nova-metadata-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.988762 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-config-data\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.988801 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgqx8\" (UniqueName: \"kubernetes.io/projected/f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c-kube-api-access-zgqx8\") pod \"nova-cell0-conductor-0\" (UID: \"f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c\") " pod="openstack/nova-cell0-conductor-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.988826 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-logs\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.989008 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a42f13a8-39ee-400a-8bfa-1a3c9eeb739f-logs\") pod \"nova-metadata-0\" (UID: \"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f\") " pod="openstack/nova-metadata-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.989065 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a42f13a8-39ee-400a-8bfa-1a3c9eeb739f-config-data\") pod \"nova-metadata-0\" (UID: \"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f\") " pod="openstack/nova-metadata-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.989245 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a42f13a8-39ee-400a-8bfa-1a3c9eeb739f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f\") " pod="openstack/nova-metadata-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.989286 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c\") " pod="openstack/nova-cell0-conductor-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.989373 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c\") " pod="openstack/nova-cell0-conductor-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.989417 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-logs\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.991606 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-kube-api-access-bgtsm" (OuterVolumeSpecName: "kube-api-access-bgtsm") pod 
"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473" (UID: "b4d1d4c1-82c0-4fd0-b97c-4fff7269e473"). InnerVolumeSpecName "kube-api-access-bgtsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.992079 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.992817 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.993083 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-config-data\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:54 crc kubenswrapper[4958]: I1008 09:16:54.995472 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-public-tls-certs\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.011928 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvqv2\" (UniqueName: \"kubernetes.io/projected/6361d6b6-9daa-426e-9fad-bfb1e4cf53ec-kube-api-access-jvqv2\") pod \"nova-api-0\" (UID: \"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec\") " pod="openstack/nova-api-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.021763 4958 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-config-data" (OuterVolumeSpecName: "config-data") pod "b4d1d4c1-82c0-4fd0-b97c-4fff7269e473" (UID: "b4d1d4c1-82c0-4fd0-b97c-4fff7269e473"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.036819 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4d1d4c1-82c0-4fd0-b97c-4fff7269e473" (UID: "b4d1d4c1-82c0-4fd0-b97c-4fff7269e473"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.069142 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.091727 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42f13a8-39ee-400a-8bfa-1a3c9eeb739f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f\") " pod="openstack/nova-metadata-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.091783 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgqx8\" (UniqueName: \"kubernetes.io/projected/f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c-kube-api-access-zgqx8\") pod \"nova-cell0-conductor-0\" (UID: \"f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c\") " pod="openstack/nova-cell0-conductor-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.091825 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a42f13a8-39ee-400a-8bfa-1a3c9eeb739f-logs\") pod \"nova-metadata-0\" (UID: 
\"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f\") " pod="openstack/nova-metadata-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.091888 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a42f13a8-39ee-400a-8bfa-1a3c9eeb739f-config-data\") pod \"nova-metadata-0\" (UID: \"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f\") " pod="openstack/nova-metadata-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.091936 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a42f13a8-39ee-400a-8bfa-1a3c9eeb739f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f\") " pod="openstack/nova-metadata-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.091974 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c\") " pod="openstack/nova-cell0-conductor-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.091998 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c\") " pod="openstack/nova-cell0-conductor-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.092034 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x82xr\" (UniqueName: \"kubernetes.io/projected/a42f13a8-39ee-400a-8bfa-1a3c9eeb739f-kube-api-access-x82xr\") pod \"nova-metadata-0\" (UID: \"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f\") " pod="openstack/nova-metadata-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.092095 
4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-config-data\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.092110 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.092119 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgtsm\" (UniqueName: \"kubernetes.io/projected/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473-kube-api-access-bgtsm\") on node \"crc\" DevicePath \"\"" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.093604 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a42f13a8-39ee-400a-8bfa-1a3c9eeb739f-logs\") pod \"nova-metadata-0\" (UID: \"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f\") " pod="openstack/nova-metadata-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.098173 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42f13a8-39ee-400a-8bfa-1a3c9eeb739f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f\") " pod="openstack/nova-metadata-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.099170 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a42f13a8-39ee-400a-8bfa-1a3c9eeb739f-config-data\") pod \"nova-metadata-0\" (UID: \"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f\") " pod="openstack/nova-metadata-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.100778 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c\") " pod="openstack/nova-cell0-conductor-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.101005 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c\") " pod="openstack/nova-cell0-conductor-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.101668 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a42f13a8-39ee-400a-8bfa-1a3c9eeb739f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f\") " pod="openstack/nova-metadata-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.107909 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x82xr\" (UniqueName: \"kubernetes.io/projected/a42f13a8-39ee-400a-8bfa-1a3c9eeb739f-kube-api-access-x82xr\") pod \"nova-metadata-0\" (UID: \"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f\") " pod="openstack/nova-metadata-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.111360 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgqx8\" (UniqueName: \"kubernetes.io/projected/f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c-kube-api-access-zgqx8\") pod \"nova-cell0-conductor-0\" (UID: \"f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c\") " pod="openstack/nova-cell0-conductor-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.112768 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr"] Oct 08 09:16:55 crc kubenswrapper[4958]: W1008 09:16:55.112780 4958 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod093ebdbf_a8f2_422f_ad7f_a7893ea25990.slice/crio-78aabd4ef6c7ee55ab8c26a1ac5352aacce226fc6e2add903f1258b877a13ac0 WatchSource:0}: Error finding container 78aabd4ef6c7ee55ab8c26a1ac5352aacce226fc6e2add903f1258b877a13ac0: Status 404 returned error can't find the container with id 78aabd4ef6c7ee55ab8c26a1ac5352aacce226fc6e2add903f1258b877a13ac0 Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.147937 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.233486 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.461205 4958 generic.go:334] "Generic (PLEG): container finished" podID="b4d1d4c1-82c0-4fd0-b97c-4fff7269e473" containerID="995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b" exitCode=0 Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.461313 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.467038 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473","Type":"ContainerDied","Data":"995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b"} Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.467110 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4d1d4c1-82c0-4fd0-b97c-4fff7269e473","Type":"ContainerDied","Data":"6f0b84d50a5d62efe4586337ebc3c80d55dfadd9c4303a4d67208131d407ae7b"} Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.467139 4958 scope.go:117] "RemoveContainer" containerID="995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.471097 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" event={"ID":"093ebdbf-a8f2-422f-ad7f-a7893ea25990","Type":"ContainerStarted","Data":"78aabd4ef6c7ee55ab8c26a1ac5352aacce226fc6e2add903f1258b877a13ac0"} Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.503984 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.522727 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.530682 4958 scope.go:117] "RemoveContainer" containerID="995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b" Oct 08 09:16:55 crc kubenswrapper[4958]: E1008 09:16:55.533155 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b\": container with ID starting with 
995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b not found: ID does not exist" containerID="995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.533206 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b"} err="failed to get container status \"995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b\": rpc error: code = NotFound desc = could not find container \"995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b\": container with ID starting with 995b972dc46274349f698466b6ed6e3621787393eb30d469f95708b00d76b18b not found: ID does not exist" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.542534 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 09:16:55 crc kubenswrapper[4958]: E1008 09:16:55.543125 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d1d4c1-82c0-4fd0-b97c-4fff7269e473" containerName="nova-scheduler-scheduler" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.543148 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d1d4c1-82c0-4fd0-b97c-4fff7269e473" containerName="nova-scheduler-scheduler" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.543428 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d1d4c1-82c0-4fd0-b97c-4fff7269e473" containerName="nova-scheduler-scheduler" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.544691 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.549834 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.564908 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.581482 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:16:55 crc kubenswrapper[4958]: E1008 09:16:55.581735 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.602937 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzpll\" (UniqueName: \"kubernetes.io/projected/6a3c0020-8fcd-4c3e-a8c7-226f10f5373e-kube-api-access-jzpll\") pod \"nova-scheduler-0\" (UID: \"6a3c0020-8fcd-4c3e-a8c7-226f10f5373e\") " pod="openstack/nova-scheduler-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.603384 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a3c0020-8fcd-4c3e-a8c7-226f10f5373e-config-data\") pod \"nova-scheduler-0\" (UID: \"6a3c0020-8fcd-4c3e-a8c7-226f10f5373e\") " pod="openstack/nova-scheduler-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.603525 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a3c0020-8fcd-4c3e-a8c7-226f10f5373e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6a3c0020-8fcd-4c3e-a8c7-226f10f5373e\") " pod="openstack/nova-scheduler-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.604048 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65f973da-e0ee-44b0-8a30-4ae4540d1642" path="/var/lib/kubelet/pods/65f973da-e0ee-44b0-8a30-4ae4540d1642/volumes" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.605366 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d1d4c1-82c0-4fd0-b97c-4fff7269e473" path="/var/lib/kubelet/pods/b4d1d4c1-82c0-4fd0-b97c-4fff7269e473/volumes" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.606084 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db1ceb64-6b38-468e-947a-77dfd6d79194" path="/var/lib/kubelet/pods/db1ceb64-6b38-468e-947a-77dfd6d79194/volumes" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.607648 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c47799-3381-43c2-85bd-d8e7c454560e" path="/var/lib/kubelet/pods/e9c47799-3381-43c2-85bd-d8e7c454560e/volumes" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.608205 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.654709 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 08 09:16:55 crc kubenswrapper[4958]: W1008 09:16:55.660989 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7af0860_a4f4_4d5b_9c7c_a9df3ab6d44c.slice/crio-7f4ed21185adaeb120e261b63ab786d02ac62eeb0653d2522bd94a8f3964aeb5 WatchSource:0}: Error finding container 7f4ed21185adaeb120e261b63ab786d02ac62eeb0653d2522bd94a8f3964aeb5: Status 404 returned error can't find the container 
with id 7f4ed21185adaeb120e261b63ab786d02ac62eeb0653d2522bd94a8f3964aeb5 Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.706332 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a3c0020-8fcd-4c3e-a8c7-226f10f5373e-config-data\") pod \"nova-scheduler-0\" (UID: \"6a3c0020-8fcd-4c3e-a8c7-226f10f5373e\") " pod="openstack/nova-scheduler-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.706526 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a3c0020-8fcd-4c3e-a8c7-226f10f5373e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6a3c0020-8fcd-4c3e-a8c7-226f10f5373e\") " pod="openstack/nova-scheduler-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.706567 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzpll\" (UniqueName: \"kubernetes.io/projected/6a3c0020-8fcd-4c3e-a8c7-226f10f5373e-kube-api-access-jzpll\") pod \"nova-scheduler-0\" (UID: \"6a3c0020-8fcd-4c3e-a8c7-226f10f5373e\") " pod="openstack/nova-scheduler-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.724569 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzpll\" (UniqueName: \"kubernetes.io/projected/6a3c0020-8fcd-4c3e-a8c7-226f10f5373e-kube-api-access-jzpll\") pod \"nova-scheduler-0\" (UID: \"6a3c0020-8fcd-4c3e-a8c7-226f10f5373e\") " pod="openstack/nova-scheduler-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.724618 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a3c0020-8fcd-4c3e-a8c7-226f10f5373e-config-data\") pod \"nova-scheduler-0\" (UID: \"6a3c0020-8fcd-4c3e-a8c7-226f10f5373e\") " pod="openstack/nova-scheduler-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.725069 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a3c0020-8fcd-4c3e-a8c7-226f10f5373e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6a3c0020-8fcd-4c3e-a8c7-226f10f5373e\") " pod="openstack/nova-scheduler-0" Oct 08 09:16:55 crc kubenswrapper[4958]: I1008 09:16:55.775074 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 08 09:16:55 crc kubenswrapper[4958]: W1008 09:16:55.786396 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda42f13a8_39ee_400a_8bfa_1a3c9eeb739f.slice/crio-fea8ac767dd53cc6dad0b54d734dd06ecd6f7b26cb2349975fa563288b1009d1 WatchSource:0}: Error finding container fea8ac767dd53cc6dad0b54d734dd06ecd6f7b26cb2349975fa563288b1009d1: Status 404 returned error can't find the container with id fea8ac767dd53cc6dad0b54d734dd06ecd6f7b26cb2349975fa563288b1009d1 Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.016575 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.481550 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec","Type":"ContainerStarted","Data":"531edaa5ecb7ee76f0447b18391c04f780438211bcd428e9763491f6a07f4a4e"} Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.481907 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec","Type":"ContainerStarted","Data":"7468138977fbfd0c225a40dcaef6e5f5eb92788e182fba24b831f5e99a58191d"} Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.481921 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6361d6b6-9daa-426e-9fad-bfb1e4cf53ec","Type":"ContainerStarted","Data":"81556581cdb86066fadcfd4b564f90f96bfc236ee803734872a3408531eccb62"} Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.483905 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f","Type":"ContainerStarted","Data":"d4c0c5488b6abe6fccf8bc0dbe9eb3c2b1e4aeefff74fbe6f21999753e940bc5"} Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.483926 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f","Type":"ContainerStarted","Data":"38ada00bc7565bd3e7fc16d48958bb57f72748f8b3b74706440b41c46ebc495f"} Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.483973 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a42f13a8-39ee-400a-8bfa-1a3c9eeb739f","Type":"ContainerStarted","Data":"fea8ac767dd53cc6dad0b54d734dd06ecd6f7b26cb2349975fa563288b1009d1"} Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.485508 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" event={"ID":"093ebdbf-a8f2-422f-ad7f-a7893ea25990","Type":"ContainerStarted","Data":"c91fcfadd3f705c0299381097a8ced17ded75fa5c14b44eda0b89cc861e979e1"} Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.486934 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c","Type":"ContainerStarted","Data":"802c1796c2f98179264858131d8cdca10e6e9b25d71fae4999387289e4cbabcd"} Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.486969 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c","Type":"ContainerStarted","Data":"7f4ed21185adaeb120e261b63ab786d02ac62eeb0653d2522bd94a8f3964aeb5"} Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.487639 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.503354 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.503336861 podStartE2EDuration="2.503336861s" podCreationTimestamp="2025-10-08 09:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 09:16:56.497311298 +0000 UTC m=+9759.627003889" watchObservedRunningTime="2025-10-08 09:16:56.503336861 +0000 UTC m=+9759.633029462" Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.529216 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.529193044 podStartE2EDuration="2.529193044s" podCreationTimestamp="2025-10-08 09:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-08 09:16:56.521572657 +0000 UTC m=+9759.651265258" watchObservedRunningTime="2025-10-08 09:16:56.529193044 +0000 UTC m=+9759.658885665" Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.557756 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.573185 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" podStartSLOduration=2.080349323 podStartE2EDuration="2.573158338s" podCreationTimestamp="2025-10-08 09:16:54 +0000 UTC" firstStartedPulling="2025-10-08 09:16:55.115354165 +0000 UTC m=+9758.245046766" lastFinishedPulling="2025-10-08 09:16:55.60816318 +0000 UTC m=+9758.737855781" observedRunningTime="2025-10-08 09:16:56.536312247 +0000 UTC m=+9759.666004848" watchObservedRunningTime="2025-10-08 09:16:56.573158338 +0000 UTC m=+9759.702850939" Oct 08 09:16:56 crc kubenswrapper[4958]: I1008 09:16:56.581444 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.581419162 podStartE2EDuration="2.581419162s" podCreationTimestamp="2025-10-08 09:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 09:16:56.562349654 +0000 UTC m=+9759.692042255" watchObservedRunningTime="2025-10-08 09:16:56.581419162 +0000 UTC m=+9759.711111763" Oct 08 09:16:57 crc kubenswrapper[4958]: I1008 09:16:57.507615 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6a3c0020-8fcd-4c3e-a8c7-226f10f5373e","Type":"ContainerStarted","Data":"545fdc8d0999bb9827c12c33b148548d34906c7559e47360391892ee02fa261b"} Oct 08 09:16:57 crc kubenswrapper[4958]: I1008 09:16:57.508328 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"6a3c0020-8fcd-4c3e-a8c7-226f10f5373e","Type":"ContainerStarted","Data":"cfae895d9d15ea8edb90b626544b4cc236d62d679f165daf7a07e272a34d5bbe"} Oct 08 09:16:57 crc kubenswrapper[4958]: I1008 09:16:57.569883 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.569861188 podStartE2EDuration="2.569861188s" podCreationTimestamp="2025-10-08 09:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 09:16:57.554730977 +0000 UTC m=+9760.684423578" watchObservedRunningTime="2025-10-08 09:16:57.569861188 +0000 UTC m=+9760.699553799" Oct 08 09:17:00 crc kubenswrapper[4958]: I1008 09:17:00.183582 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 08 09:17:00 crc kubenswrapper[4958]: I1008 09:17:00.236294 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 09:17:00 crc kubenswrapper[4958]: I1008 09:17:00.236402 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 08 09:17:01 crc kubenswrapper[4958]: I1008 09:17:01.016714 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 08 09:17:02 crc kubenswrapper[4958]: I1008 09:17:02.366674 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 08 09:17:05 crc kubenswrapper[4958]: I1008 09:17:05.070400 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 09:17:05 crc kubenswrapper[4958]: I1008 09:17:05.071205 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 08 09:17:05 crc kubenswrapper[4958]: I1008 09:17:05.236577 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 08 09:17:05 crc kubenswrapper[4958]: I1008 09:17:05.236669 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 08 09:17:06 crc kubenswrapper[4958]: I1008 09:17:06.016873 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 08 09:17:06 crc kubenswrapper[4958]: I1008 09:17:06.069985 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 08 09:17:06 crc kubenswrapper[4958]: I1008 09:17:06.086461 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6361d6b6-9daa-426e-9fad-bfb1e4cf53ec" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.210:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 09:17:06 crc kubenswrapper[4958]: I1008 09:17:06.090224 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6361d6b6-9daa-426e-9fad-bfb1e4cf53ec" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.210:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 09:17:06 crc kubenswrapper[4958]: I1008 09:17:06.251130 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a42f13a8-39ee-400a-8bfa-1a3c9eeb739f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 09:17:06 crc kubenswrapper[4958]: I1008 09:17:06.251144 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a42f13a8-39ee-400a-8bfa-1a3c9eeb739f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.212:8775/\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Oct 08 09:17:06 crc kubenswrapper[4958]: I1008 09:17:06.656542 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 08 09:17:10 crc kubenswrapper[4958]: I1008 09:17:10.577551 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:17:10 crc kubenswrapper[4958]: E1008 09:17:10.579100 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:17:15 crc kubenswrapper[4958]: I1008 09:17:15.079207 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 09:17:15 crc kubenswrapper[4958]: I1008 09:17:15.081148 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 09:17:15 crc kubenswrapper[4958]: I1008 09:17:15.087102 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 08 09:17:15 crc kubenswrapper[4958]: I1008 09:17:15.095405 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 09:17:15 crc kubenswrapper[4958]: I1008 09:17:15.245113 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 08 09:17:15 crc kubenswrapper[4958]: I1008 09:17:15.250842 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 08 09:17:15 crc kubenswrapper[4958]: I1008 09:17:15.254398 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 08 09:17:15 crc kubenswrapper[4958]: I1008 09:17:15.748300 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 08 09:17:15 crc kubenswrapper[4958]: I1008 09:17:15.757667 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 08 09:17:15 crc kubenswrapper[4958]: I1008 09:17:15.763419 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 08 09:17:25 crc kubenswrapper[4958]: I1008 09:17:25.582079 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:17:25 crc kubenswrapper[4958]: E1008 09:17:25.582842 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:17:36 crc kubenswrapper[4958]: I1008 09:17:36.577708 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:17:36 crc kubenswrapper[4958]: E1008 09:17:36.578916 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:17:48 crc kubenswrapper[4958]: I1008 09:17:48.576883 4958 scope.go:117] 
"RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:17:48 crc kubenswrapper[4958]: E1008 09:17:48.578147 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:18:01 crc kubenswrapper[4958]: I1008 09:18:01.577216 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:18:01 crc kubenswrapper[4958]: E1008 09:18:01.578292 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:18:16 crc kubenswrapper[4958]: I1008 09:18:16.579842 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:18:16 crc kubenswrapper[4958]: E1008 09:18:16.581229 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:18:30 crc kubenswrapper[4958]: I1008 09:18:30.576728 
4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:18:30 crc kubenswrapper[4958]: E1008 09:18:30.577750 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:18:45 crc kubenswrapper[4958]: I1008 09:18:45.577097 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:18:45 crc kubenswrapper[4958]: E1008 09:18:45.578283 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:18:58 crc kubenswrapper[4958]: I1008 09:18:58.577586 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:18:58 crc kubenswrapper[4958]: E1008 09:18:58.579085 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:19:08 crc kubenswrapper[4958]: I1008 
09:19:08.384867 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hdrlk"] Oct 08 09:19:08 crc kubenswrapper[4958]: I1008 09:19:08.390679 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:08 crc kubenswrapper[4958]: I1008 09:19:08.396352 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hdrlk"] Oct 08 09:19:08 crc kubenswrapper[4958]: I1008 09:19:08.499330 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a200dbf0-8c84-4651-b356-8169d7148a6d-catalog-content\") pod \"redhat-operators-hdrlk\" (UID: \"a200dbf0-8c84-4651-b356-8169d7148a6d\") " pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:08 crc kubenswrapper[4958]: I1008 09:19:08.499389 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvnbp\" (UniqueName: \"kubernetes.io/projected/a200dbf0-8c84-4651-b356-8169d7148a6d-kube-api-access-fvnbp\") pod \"redhat-operators-hdrlk\" (UID: \"a200dbf0-8c84-4651-b356-8169d7148a6d\") " pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:08 crc kubenswrapper[4958]: I1008 09:19:08.499418 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a200dbf0-8c84-4651-b356-8169d7148a6d-utilities\") pod \"redhat-operators-hdrlk\" (UID: \"a200dbf0-8c84-4651-b356-8169d7148a6d\") " pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:08 crc kubenswrapper[4958]: I1008 09:19:08.601565 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a200dbf0-8c84-4651-b356-8169d7148a6d-catalog-content\") pod \"redhat-operators-hdrlk\" 
(UID: \"a200dbf0-8c84-4651-b356-8169d7148a6d\") " pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:08 crc kubenswrapper[4958]: I1008 09:19:08.601644 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvnbp\" (UniqueName: \"kubernetes.io/projected/a200dbf0-8c84-4651-b356-8169d7148a6d-kube-api-access-fvnbp\") pod \"redhat-operators-hdrlk\" (UID: \"a200dbf0-8c84-4651-b356-8169d7148a6d\") " pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:08 crc kubenswrapper[4958]: I1008 09:19:08.601680 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a200dbf0-8c84-4651-b356-8169d7148a6d-utilities\") pod \"redhat-operators-hdrlk\" (UID: \"a200dbf0-8c84-4651-b356-8169d7148a6d\") " pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:08 crc kubenswrapper[4958]: I1008 09:19:08.602304 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a200dbf0-8c84-4651-b356-8169d7148a6d-catalog-content\") pod \"redhat-operators-hdrlk\" (UID: \"a200dbf0-8c84-4651-b356-8169d7148a6d\") " pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:08 crc kubenswrapper[4958]: I1008 09:19:08.602360 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a200dbf0-8c84-4651-b356-8169d7148a6d-utilities\") pod \"redhat-operators-hdrlk\" (UID: \"a200dbf0-8c84-4651-b356-8169d7148a6d\") " pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:08 crc kubenswrapper[4958]: I1008 09:19:08.622249 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvnbp\" (UniqueName: \"kubernetes.io/projected/a200dbf0-8c84-4651-b356-8169d7148a6d-kube-api-access-fvnbp\") pod \"redhat-operators-hdrlk\" (UID: \"a200dbf0-8c84-4651-b356-8169d7148a6d\") " 
pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:08 crc kubenswrapper[4958]: I1008 09:19:08.738737 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:09 crc kubenswrapper[4958]: I1008 09:19:09.274703 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hdrlk"] Oct 08 09:19:09 crc kubenswrapper[4958]: I1008 09:19:09.316158 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdrlk" event={"ID":"a200dbf0-8c84-4651-b356-8169d7148a6d","Type":"ContainerStarted","Data":"0f712db09a147e9ba9692def454b90455af83e022848c63ba92ff0db72073467"} Oct 08 09:19:10 crc kubenswrapper[4958]: I1008 09:19:10.334572 4958 generic.go:334] "Generic (PLEG): container finished" podID="a200dbf0-8c84-4651-b356-8169d7148a6d" containerID="e7edcc6918d8a5e2c1b1de593c2d2c3bcff0bb29e7ac0e3b2bb882fc49641f06" exitCode=0 Oct 08 09:19:10 crc kubenswrapper[4958]: I1008 09:19:10.334640 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdrlk" event={"ID":"a200dbf0-8c84-4651-b356-8169d7148a6d","Type":"ContainerDied","Data":"e7edcc6918d8a5e2c1b1de593c2d2c3bcff0bb29e7ac0e3b2bb882fc49641f06"} Oct 08 09:19:11 crc kubenswrapper[4958]: I1008 09:19:11.576534 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:19:11 crc kubenswrapper[4958]: E1008 09:19:11.577031 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:19:12 crc 
kubenswrapper[4958]: I1008 09:19:12.363758 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdrlk" event={"ID":"a200dbf0-8c84-4651-b356-8169d7148a6d","Type":"ContainerStarted","Data":"e0488b7dd5231d0c840bfc628d0509ad9aaabaea9c46d947d9c661f127e86122"} Oct 08 09:19:13 crc kubenswrapper[4958]: I1008 09:19:13.380626 4958 generic.go:334] "Generic (PLEG): container finished" podID="a200dbf0-8c84-4651-b356-8169d7148a6d" containerID="e0488b7dd5231d0c840bfc628d0509ad9aaabaea9c46d947d9c661f127e86122" exitCode=0 Oct 08 09:19:13 crc kubenswrapper[4958]: I1008 09:19:13.380716 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdrlk" event={"ID":"a200dbf0-8c84-4651-b356-8169d7148a6d","Type":"ContainerDied","Data":"e0488b7dd5231d0c840bfc628d0509ad9aaabaea9c46d947d9c661f127e86122"} Oct 08 09:19:14 crc kubenswrapper[4958]: I1008 09:19:14.400271 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdrlk" event={"ID":"a200dbf0-8c84-4651-b356-8169d7148a6d","Type":"ContainerStarted","Data":"5fa3c7f97f3a10e0a6e8b77d804827f411e00d1fb2c06df451c7f33548b57bc6"} Oct 08 09:19:14 crc kubenswrapper[4958]: I1008 09:19:14.447203 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hdrlk" podStartSLOduration=3.006577561 podStartE2EDuration="6.447161975s" podCreationTimestamp="2025-10-08 09:19:08 +0000 UTC" firstStartedPulling="2025-10-08 09:19:10.338040154 +0000 UTC m=+9893.467732785" lastFinishedPulling="2025-10-08 09:19:13.778624588 +0000 UTC m=+9896.908317199" observedRunningTime="2025-10-08 09:19:14.432379584 +0000 UTC m=+9897.562072275" watchObservedRunningTime="2025-10-08 09:19:14.447161975 +0000 UTC m=+9897.576854626" Oct 08 09:19:18 crc kubenswrapper[4958]: I1008 09:19:18.739559 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:18 crc kubenswrapper[4958]: I1008 09:19:18.742148 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:19 crc kubenswrapper[4958]: I1008 09:19:19.794762 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hdrlk" podUID="a200dbf0-8c84-4651-b356-8169d7148a6d" containerName="registry-server" probeResult="failure" output=< Oct 08 09:19:19 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 09:19:19 crc kubenswrapper[4958]: > Oct 08 09:19:24 crc kubenswrapper[4958]: I1008 09:19:24.577248 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:19:24 crc kubenswrapper[4958]: E1008 09:19:24.578093 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:19:28 crc kubenswrapper[4958]: I1008 09:19:28.804090 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:28 crc kubenswrapper[4958]: I1008 09:19:28.879769 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:29 crc kubenswrapper[4958]: I1008 09:19:29.053834 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hdrlk"] Oct 08 09:19:30 crc kubenswrapper[4958]: I1008 09:19:30.598110 4958 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-hdrlk" podUID="a200dbf0-8c84-4651-b356-8169d7148a6d" containerName="registry-server" containerID="cri-o://5fa3c7f97f3a10e0a6e8b77d804827f411e00d1fb2c06df451c7f33548b57bc6" gracePeriod=2 Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.187119 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.261817 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a200dbf0-8c84-4651-b356-8169d7148a6d-catalog-content\") pod \"a200dbf0-8c84-4651-b356-8169d7148a6d\" (UID: \"a200dbf0-8c84-4651-b356-8169d7148a6d\") " Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.262180 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a200dbf0-8c84-4651-b356-8169d7148a6d-utilities\") pod \"a200dbf0-8c84-4651-b356-8169d7148a6d\" (UID: \"a200dbf0-8c84-4651-b356-8169d7148a6d\") " Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.262317 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvnbp\" (UniqueName: \"kubernetes.io/projected/a200dbf0-8c84-4651-b356-8169d7148a6d-kube-api-access-fvnbp\") pod \"a200dbf0-8c84-4651-b356-8169d7148a6d\" (UID: \"a200dbf0-8c84-4651-b356-8169d7148a6d\") " Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.267174 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a200dbf0-8c84-4651-b356-8169d7148a6d-utilities" (OuterVolumeSpecName: "utilities") pod "a200dbf0-8c84-4651-b356-8169d7148a6d" (UID: "a200dbf0-8c84-4651-b356-8169d7148a6d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.272450 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a200dbf0-8c84-4651-b356-8169d7148a6d-kube-api-access-fvnbp" (OuterVolumeSpecName: "kube-api-access-fvnbp") pod "a200dbf0-8c84-4651-b356-8169d7148a6d" (UID: "a200dbf0-8c84-4651-b356-8169d7148a6d"). InnerVolumeSpecName "kube-api-access-fvnbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.346726 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a200dbf0-8c84-4651-b356-8169d7148a6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a200dbf0-8c84-4651-b356-8169d7148a6d" (UID: "a200dbf0-8c84-4651-b356-8169d7148a6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.365323 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvnbp\" (UniqueName: \"kubernetes.io/projected/a200dbf0-8c84-4651-b356-8169d7148a6d-kube-api-access-fvnbp\") on node \"crc\" DevicePath \"\"" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.365362 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a200dbf0-8c84-4651-b356-8169d7148a6d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.365376 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a200dbf0-8c84-4651-b356-8169d7148a6d-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.674977 4958 generic.go:334] "Generic (PLEG): container finished" podID="a200dbf0-8c84-4651-b356-8169d7148a6d" 
containerID="5fa3c7f97f3a10e0a6e8b77d804827f411e00d1fb2c06df451c7f33548b57bc6" exitCode=0 Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.675020 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdrlk" event={"ID":"a200dbf0-8c84-4651-b356-8169d7148a6d","Type":"ContainerDied","Data":"5fa3c7f97f3a10e0a6e8b77d804827f411e00d1fb2c06df451c7f33548b57bc6"} Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.675047 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdrlk" event={"ID":"a200dbf0-8c84-4651-b356-8169d7148a6d","Type":"ContainerDied","Data":"0f712db09a147e9ba9692def454b90455af83e022848c63ba92ff0db72073467"} Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.675064 4958 scope.go:117] "RemoveContainer" containerID="5fa3c7f97f3a10e0a6e8b77d804827f411e00d1fb2c06df451c7f33548b57bc6" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.675129 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hdrlk" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.725855 4958 scope.go:117] "RemoveContainer" containerID="e0488b7dd5231d0c840bfc628d0509ad9aaabaea9c46d947d9c661f127e86122" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.750446 4958 scope.go:117] "RemoveContainer" containerID="e7edcc6918d8a5e2c1b1de593c2d2c3bcff0bb29e7ac0e3b2bb882fc49641f06" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.765085 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hdrlk"] Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.807656 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hdrlk"] Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.867417 4958 scope.go:117] "RemoveContainer" containerID="5fa3c7f97f3a10e0a6e8b77d804827f411e00d1fb2c06df451c7f33548b57bc6" Oct 08 09:19:31 crc kubenswrapper[4958]: E1008 09:19:31.879430 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fa3c7f97f3a10e0a6e8b77d804827f411e00d1fb2c06df451c7f33548b57bc6\": container with ID starting with 5fa3c7f97f3a10e0a6e8b77d804827f411e00d1fb2c06df451c7f33548b57bc6 not found: ID does not exist" containerID="5fa3c7f97f3a10e0a6e8b77d804827f411e00d1fb2c06df451c7f33548b57bc6" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.879474 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa3c7f97f3a10e0a6e8b77d804827f411e00d1fb2c06df451c7f33548b57bc6"} err="failed to get container status \"5fa3c7f97f3a10e0a6e8b77d804827f411e00d1fb2c06df451c7f33548b57bc6\": rpc error: code = NotFound desc = could not find container \"5fa3c7f97f3a10e0a6e8b77d804827f411e00d1fb2c06df451c7f33548b57bc6\": container with ID starting with 5fa3c7f97f3a10e0a6e8b77d804827f411e00d1fb2c06df451c7f33548b57bc6 not found: ID does 
not exist" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.879498 4958 scope.go:117] "RemoveContainer" containerID="e0488b7dd5231d0c840bfc628d0509ad9aaabaea9c46d947d9c661f127e86122" Oct 08 09:19:31 crc kubenswrapper[4958]: E1008 09:19:31.897082 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0488b7dd5231d0c840bfc628d0509ad9aaabaea9c46d947d9c661f127e86122\": container with ID starting with e0488b7dd5231d0c840bfc628d0509ad9aaabaea9c46d947d9c661f127e86122 not found: ID does not exist" containerID="e0488b7dd5231d0c840bfc628d0509ad9aaabaea9c46d947d9c661f127e86122" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.897131 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0488b7dd5231d0c840bfc628d0509ad9aaabaea9c46d947d9c661f127e86122"} err="failed to get container status \"e0488b7dd5231d0c840bfc628d0509ad9aaabaea9c46d947d9c661f127e86122\": rpc error: code = NotFound desc = could not find container \"e0488b7dd5231d0c840bfc628d0509ad9aaabaea9c46d947d9c661f127e86122\": container with ID starting with e0488b7dd5231d0c840bfc628d0509ad9aaabaea9c46d947d9c661f127e86122 not found: ID does not exist" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.897159 4958 scope.go:117] "RemoveContainer" containerID="e7edcc6918d8a5e2c1b1de593c2d2c3bcff0bb29e7ac0e3b2bb882fc49641f06" Oct 08 09:19:31 crc kubenswrapper[4958]: E1008 09:19:31.927462 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7edcc6918d8a5e2c1b1de593c2d2c3bcff0bb29e7ac0e3b2bb882fc49641f06\": container with ID starting with e7edcc6918d8a5e2c1b1de593c2d2c3bcff0bb29e7ac0e3b2bb882fc49641f06 not found: ID does not exist" containerID="e7edcc6918d8a5e2c1b1de593c2d2c3bcff0bb29e7ac0e3b2bb882fc49641f06" Oct 08 09:19:31 crc kubenswrapper[4958]: I1008 09:19:31.927529 4958 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7edcc6918d8a5e2c1b1de593c2d2c3bcff0bb29e7ac0e3b2bb882fc49641f06"} err="failed to get container status \"e7edcc6918d8a5e2c1b1de593c2d2c3bcff0bb29e7ac0e3b2bb882fc49641f06\": rpc error: code = NotFound desc = could not find container \"e7edcc6918d8a5e2c1b1de593c2d2c3bcff0bb29e7ac0e3b2bb882fc49641f06\": container with ID starting with e7edcc6918d8a5e2c1b1de593c2d2c3bcff0bb29e7ac0e3b2bb882fc49641f06 not found: ID does not exist" Oct 08 09:19:33 crc kubenswrapper[4958]: I1008 09:19:33.600764 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a200dbf0-8c84-4651-b356-8169d7148a6d" path="/var/lib/kubelet/pods/a200dbf0-8c84-4651-b356-8169d7148a6d/volumes" Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.577767 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.790708 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9rnk7"] Oct 08 09:19:39 crc kubenswrapper[4958]: E1008 09:19:39.791622 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a200dbf0-8c84-4651-b356-8169d7148a6d" containerName="registry-server" Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.791650 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a200dbf0-8c84-4651-b356-8169d7148a6d" containerName="registry-server" Oct 08 09:19:39 crc kubenswrapper[4958]: E1008 09:19:39.791708 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a200dbf0-8c84-4651-b356-8169d7148a6d" containerName="extract-utilities" Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.791721 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a200dbf0-8c84-4651-b356-8169d7148a6d" containerName="extract-utilities" Oct 08 09:19:39 crc kubenswrapper[4958]: E1008 09:19:39.791761 4958 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a200dbf0-8c84-4651-b356-8169d7148a6d" containerName="extract-content" Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.791774 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a200dbf0-8c84-4651-b356-8169d7148a6d" containerName="extract-content" Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.792171 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a200dbf0-8c84-4651-b356-8169d7148a6d" containerName="registry-server" Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.794908 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.811743 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rnk7"] Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.889069 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904d7426-e3fe-455a-a36a-a3510fc22633-utilities\") pod \"redhat-marketplace-9rnk7\" (UID: \"904d7426-e3fe-455a-a36a-a3510fc22633\") " pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.889158 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904d7426-e3fe-455a-a36a-a3510fc22633-catalog-content\") pod \"redhat-marketplace-9rnk7\" (UID: \"904d7426-e3fe-455a-a36a-a3510fc22633\") " pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.889388 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqpcm\" (UniqueName: \"kubernetes.io/projected/904d7426-e3fe-455a-a36a-a3510fc22633-kube-api-access-sqpcm\") pod \"redhat-marketplace-9rnk7\" 
(UID: \"904d7426-e3fe-455a-a36a-a3510fc22633\") " pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.992070 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqpcm\" (UniqueName: \"kubernetes.io/projected/904d7426-e3fe-455a-a36a-a3510fc22633-kube-api-access-sqpcm\") pod \"redhat-marketplace-9rnk7\" (UID: \"904d7426-e3fe-455a-a36a-a3510fc22633\") " pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.992261 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904d7426-e3fe-455a-a36a-a3510fc22633-utilities\") pod \"redhat-marketplace-9rnk7\" (UID: \"904d7426-e3fe-455a-a36a-a3510fc22633\") " pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.992300 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904d7426-e3fe-455a-a36a-a3510fc22633-catalog-content\") pod \"redhat-marketplace-9rnk7\" (UID: \"904d7426-e3fe-455a-a36a-a3510fc22633\") " pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.993003 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904d7426-e3fe-455a-a36a-a3510fc22633-catalog-content\") pod \"redhat-marketplace-9rnk7\" (UID: \"904d7426-e3fe-455a-a36a-a3510fc22633\") " pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:39 crc kubenswrapper[4958]: I1008 09:19:39.993340 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904d7426-e3fe-455a-a36a-a3510fc22633-utilities\") pod \"redhat-marketplace-9rnk7\" (UID: \"904d7426-e3fe-455a-a36a-a3510fc22633\") " 
pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:40 crc kubenswrapper[4958]: I1008 09:19:40.012921 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqpcm\" (UniqueName: \"kubernetes.io/projected/904d7426-e3fe-455a-a36a-a3510fc22633-kube-api-access-sqpcm\") pod \"redhat-marketplace-9rnk7\" (UID: \"904d7426-e3fe-455a-a36a-a3510fc22633\") " pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:40 crc kubenswrapper[4958]: I1008 09:19:40.148728 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:40 crc kubenswrapper[4958]: I1008 09:19:40.642333 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rnk7"] Oct 08 09:19:40 crc kubenswrapper[4958]: W1008 09:19:40.656715 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod904d7426_e3fe_455a_a36a_a3510fc22633.slice/crio-23278ec5064f6a8703bf3f1c9db5305e2509b66c95f48abf5c534c6063aee949 WatchSource:0}: Error finding container 23278ec5064f6a8703bf3f1c9db5305e2509b66c95f48abf5c534c6063aee949: Status 404 returned error can't find the container with id 23278ec5064f6a8703bf3f1c9db5305e2509b66c95f48abf5c534c6063aee949 Oct 08 09:19:40 crc kubenswrapper[4958]: I1008 09:19:40.791239 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rnk7" event={"ID":"904d7426-e3fe-455a-a36a-a3510fc22633","Type":"ContainerStarted","Data":"23278ec5064f6a8703bf3f1c9db5305e2509b66c95f48abf5c534c6063aee949"} Oct 08 09:19:40 crc kubenswrapper[4958]: I1008 09:19:40.794267 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" 
event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"0300d467462acf668ab82487cfb3cada399a67ec010201dfde2ad77bf86a7628"} Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.658081 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nvvmz"] Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.661196 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.678545 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nvvmz"] Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.749162 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f25792e-bffd-4abf-b5aa-e1a575ac6238-utilities\") pod \"certified-operators-nvvmz\" (UID: \"7f25792e-bffd-4abf-b5aa-e1a575ac6238\") " pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.749281 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f25792e-bffd-4abf-b5aa-e1a575ac6238-catalog-content\") pod \"certified-operators-nvvmz\" (UID: \"7f25792e-bffd-4abf-b5aa-e1a575ac6238\") " pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.749324 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78ngt\" (UniqueName: \"kubernetes.io/projected/7f25792e-bffd-4abf-b5aa-e1a575ac6238-kube-api-access-78ngt\") pod \"certified-operators-nvvmz\" (UID: \"7f25792e-bffd-4abf-b5aa-e1a575ac6238\") " pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.803469 
4958 generic.go:334] "Generic (PLEG): container finished" podID="904d7426-e3fe-455a-a36a-a3510fc22633" containerID="9a3ef2e222bf7c3e78b9b80e794f081e9ecbb875cf2f1d80081481ca26f00852" exitCode=0 Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.803547 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rnk7" event={"ID":"904d7426-e3fe-455a-a36a-a3510fc22633","Type":"ContainerDied","Data":"9a3ef2e222bf7c3e78b9b80e794f081e9ecbb875cf2f1d80081481ca26f00852"} Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.805793 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.852613 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f25792e-bffd-4abf-b5aa-e1a575ac6238-catalog-content\") pod \"certified-operators-nvvmz\" (UID: \"7f25792e-bffd-4abf-b5aa-e1a575ac6238\") " pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.852708 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78ngt\" (UniqueName: \"kubernetes.io/projected/7f25792e-bffd-4abf-b5aa-e1a575ac6238-kube-api-access-78ngt\") pod \"certified-operators-nvvmz\" (UID: \"7f25792e-bffd-4abf-b5aa-e1a575ac6238\") " pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.852853 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f25792e-bffd-4abf-b5aa-e1a575ac6238-utilities\") pod \"certified-operators-nvvmz\" (UID: \"7f25792e-bffd-4abf-b5aa-e1a575ac6238\") " pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.853424 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f25792e-bffd-4abf-b5aa-e1a575ac6238-catalog-content\") pod \"certified-operators-nvvmz\" (UID: \"7f25792e-bffd-4abf-b5aa-e1a575ac6238\") " pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.853801 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f25792e-bffd-4abf-b5aa-e1a575ac6238-utilities\") pod \"certified-operators-nvvmz\" (UID: \"7f25792e-bffd-4abf-b5aa-e1a575ac6238\") " pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.892168 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78ngt\" (UniqueName: \"kubernetes.io/projected/7f25792e-bffd-4abf-b5aa-e1a575ac6238-kube-api-access-78ngt\") pod \"certified-operators-nvvmz\" (UID: \"7f25792e-bffd-4abf-b5aa-e1a575ac6238\") " pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:41 crc kubenswrapper[4958]: I1008 09:19:41.987237 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:42 crc kubenswrapper[4958]: I1008 09:19:42.545122 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nvvmz"] Oct 08 09:19:42 crc kubenswrapper[4958]: W1008 09:19:42.549121 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f25792e_bffd_4abf_b5aa_e1a575ac6238.slice/crio-ac2ed13af15aa3ad4280348f81b783e3d49e1da8e27c4ad69aa64ea7758acd26 WatchSource:0}: Error finding container ac2ed13af15aa3ad4280348f81b783e3d49e1da8e27c4ad69aa64ea7758acd26: Status 404 returned error can't find the container with id ac2ed13af15aa3ad4280348f81b783e3d49e1da8e27c4ad69aa64ea7758acd26 Oct 08 09:19:42 crc kubenswrapper[4958]: I1008 09:19:42.815877 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvvmz" event={"ID":"7f25792e-bffd-4abf-b5aa-e1a575ac6238","Type":"ContainerStarted","Data":"ac2ed13af15aa3ad4280348f81b783e3d49e1da8e27c4ad69aa64ea7758acd26"} Oct 08 09:19:43 crc kubenswrapper[4958]: I1008 09:19:43.833275 4958 generic.go:334] "Generic (PLEG): container finished" podID="7f25792e-bffd-4abf-b5aa-e1a575ac6238" containerID="0cf389cba86a1439b24e4220e14fe6e2673d20441ee30a54f2ab495023b1bccb" exitCode=0 Oct 08 09:19:43 crc kubenswrapper[4958]: I1008 09:19:43.833552 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvvmz" event={"ID":"7f25792e-bffd-4abf-b5aa-e1a575ac6238","Type":"ContainerDied","Data":"0cf389cba86a1439b24e4220e14fe6e2673d20441ee30a54f2ab495023b1bccb"} Oct 08 09:19:43 crc kubenswrapper[4958]: I1008 09:19:43.839608 4958 generic.go:334] "Generic (PLEG): container finished" podID="904d7426-e3fe-455a-a36a-a3510fc22633" containerID="c860718e2f98a4f002278ceb93152f11e9b4b7f07e4a5a1feea3e111870e61f4" exitCode=0 Oct 08 09:19:43 crc kubenswrapper[4958]: I1008 
09:19:43.839665 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rnk7" event={"ID":"904d7426-e3fe-455a-a36a-a3510fc22633","Type":"ContainerDied","Data":"c860718e2f98a4f002278ceb93152f11e9b4b7f07e4a5a1feea3e111870e61f4"} Oct 08 09:19:45 crc kubenswrapper[4958]: I1008 09:19:45.872890 4958 generic.go:334] "Generic (PLEG): container finished" podID="7f25792e-bffd-4abf-b5aa-e1a575ac6238" containerID="cd9291193588409902554336665602a486d98890504f08592bfe4d4ae8e69594" exitCode=0 Oct 08 09:19:45 crc kubenswrapper[4958]: I1008 09:19:45.872933 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvvmz" event={"ID":"7f25792e-bffd-4abf-b5aa-e1a575ac6238","Type":"ContainerDied","Data":"cd9291193588409902554336665602a486d98890504f08592bfe4d4ae8e69594"} Oct 08 09:19:45 crc kubenswrapper[4958]: I1008 09:19:45.877534 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rnk7" event={"ID":"904d7426-e3fe-455a-a36a-a3510fc22633","Type":"ContainerStarted","Data":"710114196aae2141836fdaa54fbae88bf4d4004b2136652507291014f371ae25"} Oct 08 09:19:45 crc kubenswrapper[4958]: I1008 09:19:45.932498 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9rnk7" podStartSLOduration=4.430605239 podStartE2EDuration="6.932468569s" podCreationTimestamp="2025-10-08 09:19:39 +0000 UTC" firstStartedPulling="2025-10-08 09:19:41.805585975 +0000 UTC m=+9924.935278576" lastFinishedPulling="2025-10-08 09:19:44.307449285 +0000 UTC m=+9927.437141906" observedRunningTime="2025-10-08 09:19:45.918619653 +0000 UTC m=+9929.048312304" watchObservedRunningTime="2025-10-08 09:19:45.932468569 +0000 UTC m=+9929.062161190" Oct 08 09:19:46 crc kubenswrapper[4958]: I1008 09:19:46.890826 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvvmz" 
event={"ID":"7f25792e-bffd-4abf-b5aa-e1a575ac6238","Type":"ContainerStarted","Data":"30a83662e6588bb79d1a31f2b3a68ce75189f5da5934f1a9278251d925f74580"} Oct 08 09:19:46 crc kubenswrapper[4958]: I1008 09:19:46.919502 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nvvmz" podStartSLOduration=3.459987389 podStartE2EDuration="5.919483317s" podCreationTimestamp="2025-10-08 09:19:41 +0000 UTC" firstStartedPulling="2025-10-08 09:19:43.837694787 +0000 UTC m=+9926.967387418" lastFinishedPulling="2025-10-08 09:19:46.297190725 +0000 UTC m=+9929.426883346" observedRunningTime="2025-10-08 09:19:46.912820476 +0000 UTC m=+9930.042513067" watchObservedRunningTime="2025-10-08 09:19:46.919483317 +0000 UTC m=+9930.049175918" Oct 08 09:19:50 crc kubenswrapper[4958]: I1008 09:19:50.148904 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:50 crc kubenswrapper[4958]: I1008 09:19:50.149681 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:50 crc kubenswrapper[4958]: I1008 09:19:50.224169 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:51 crc kubenswrapper[4958]: I1008 09:19:51.007936 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:51 crc kubenswrapper[4958]: I1008 09:19:51.366137 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rnk7"] Oct 08 09:19:51 crc kubenswrapper[4958]: I1008 09:19:51.988889 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:51 crc kubenswrapper[4958]: I1008 09:19:51.989028 4958 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:52 crc kubenswrapper[4958]: I1008 09:19:52.082683 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:52 crc kubenswrapper[4958]: I1008 09:19:52.960060 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9rnk7" podUID="904d7426-e3fe-455a-a36a-a3510fc22633" containerName="registry-server" containerID="cri-o://710114196aae2141836fdaa54fbae88bf4d4004b2136652507291014f371ae25" gracePeriod=2 Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.044165 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.434292 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.540736 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904d7426-e3fe-455a-a36a-a3510fc22633-catalog-content\") pod \"904d7426-e3fe-455a-a36a-a3510fc22633\" (UID: \"904d7426-e3fe-455a-a36a-a3510fc22633\") " Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.541065 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqpcm\" (UniqueName: \"kubernetes.io/projected/904d7426-e3fe-455a-a36a-a3510fc22633-kube-api-access-sqpcm\") pod \"904d7426-e3fe-455a-a36a-a3510fc22633\" (UID: \"904d7426-e3fe-455a-a36a-a3510fc22633\") " Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.541090 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/904d7426-e3fe-455a-a36a-a3510fc22633-utilities\") pod \"904d7426-e3fe-455a-a36a-a3510fc22633\" (UID: \"904d7426-e3fe-455a-a36a-a3510fc22633\") " Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.543098 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/904d7426-e3fe-455a-a36a-a3510fc22633-utilities" (OuterVolumeSpecName: "utilities") pod "904d7426-e3fe-455a-a36a-a3510fc22633" (UID: "904d7426-e3fe-455a-a36a-a3510fc22633"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.551246 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/904d7426-e3fe-455a-a36a-a3510fc22633-kube-api-access-sqpcm" (OuterVolumeSpecName: "kube-api-access-sqpcm") pod "904d7426-e3fe-455a-a36a-a3510fc22633" (UID: "904d7426-e3fe-455a-a36a-a3510fc22633"). InnerVolumeSpecName "kube-api-access-sqpcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.559694 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/904d7426-e3fe-455a-a36a-a3510fc22633-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "904d7426-e3fe-455a-a36a-a3510fc22633" (UID: "904d7426-e3fe-455a-a36a-a3510fc22633"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.643659 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904d7426-e3fe-455a-a36a-a3510fc22633-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.643845 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqpcm\" (UniqueName: \"kubernetes.io/projected/904d7426-e3fe-455a-a36a-a3510fc22633-kube-api-access-sqpcm\") on node \"crc\" DevicePath \"\"" Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.643863 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904d7426-e3fe-455a-a36a-a3510fc22633-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.974820 4958 generic.go:334] "Generic (PLEG): container finished" podID="904d7426-e3fe-455a-a36a-a3510fc22633" containerID="710114196aae2141836fdaa54fbae88bf4d4004b2136652507291014f371ae25" exitCode=0 Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.974895 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rnk7" Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.974988 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rnk7" event={"ID":"904d7426-e3fe-455a-a36a-a3510fc22633","Type":"ContainerDied","Data":"710114196aae2141836fdaa54fbae88bf4d4004b2136652507291014f371ae25"} Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.975031 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rnk7" event={"ID":"904d7426-e3fe-455a-a36a-a3510fc22633","Type":"ContainerDied","Data":"23278ec5064f6a8703bf3f1c9db5305e2509b66c95f48abf5c534c6063aee949"} Oct 08 09:19:53 crc kubenswrapper[4958]: I1008 09:19:53.975060 4958 scope.go:117] "RemoveContainer" containerID="710114196aae2141836fdaa54fbae88bf4d4004b2136652507291014f371ae25" Oct 08 09:19:54 crc kubenswrapper[4958]: I1008 09:19:54.013378 4958 scope.go:117] "RemoveContainer" containerID="c860718e2f98a4f002278ceb93152f11e9b4b7f07e4a5a1feea3e111870e61f4" Oct 08 09:19:54 crc kubenswrapper[4958]: I1008 09:19:54.018086 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rnk7"] Oct 08 09:19:54 crc kubenswrapper[4958]: I1008 09:19:54.033747 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rnk7"] Oct 08 09:19:54 crc kubenswrapper[4958]: I1008 09:19:54.046650 4958 scope.go:117] "RemoveContainer" containerID="9a3ef2e222bf7c3e78b9b80e794f081e9ecbb875cf2f1d80081481ca26f00852" Oct 08 09:19:54 crc kubenswrapper[4958]: I1008 09:19:54.116174 4958 scope.go:117] "RemoveContainer" containerID="710114196aae2141836fdaa54fbae88bf4d4004b2136652507291014f371ae25" Oct 08 09:19:54 crc kubenswrapper[4958]: E1008 09:19:54.116763 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"710114196aae2141836fdaa54fbae88bf4d4004b2136652507291014f371ae25\": container with ID starting with 710114196aae2141836fdaa54fbae88bf4d4004b2136652507291014f371ae25 not found: ID does not exist" containerID="710114196aae2141836fdaa54fbae88bf4d4004b2136652507291014f371ae25" Oct 08 09:19:54 crc kubenswrapper[4958]: I1008 09:19:54.116800 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"710114196aae2141836fdaa54fbae88bf4d4004b2136652507291014f371ae25"} err="failed to get container status \"710114196aae2141836fdaa54fbae88bf4d4004b2136652507291014f371ae25\": rpc error: code = NotFound desc = could not find container \"710114196aae2141836fdaa54fbae88bf4d4004b2136652507291014f371ae25\": container with ID starting with 710114196aae2141836fdaa54fbae88bf4d4004b2136652507291014f371ae25 not found: ID does not exist" Oct 08 09:19:54 crc kubenswrapper[4958]: I1008 09:19:54.116823 4958 scope.go:117] "RemoveContainer" containerID="c860718e2f98a4f002278ceb93152f11e9b4b7f07e4a5a1feea3e111870e61f4" Oct 08 09:19:54 crc kubenswrapper[4958]: E1008 09:19:54.117186 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c860718e2f98a4f002278ceb93152f11e9b4b7f07e4a5a1feea3e111870e61f4\": container with ID starting with c860718e2f98a4f002278ceb93152f11e9b4b7f07e4a5a1feea3e111870e61f4 not found: ID does not exist" containerID="c860718e2f98a4f002278ceb93152f11e9b4b7f07e4a5a1feea3e111870e61f4" Oct 08 09:19:54 crc kubenswrapper[4958]: I1008 09:19:54.117219 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c860718e2f98a4f002278ceb93152f11e9b4b7f07e4a5a1feea3e111870e61f4"} err="failed to get container status \"c860718e2f98a4f002278ceb93152f11e9b4b7f07e4a5a1feea3e111870e61f4\": rpc error: code = NotFound desc = could not find container \"c860718e2f98a4f002278ceb93152f11e9b4b7f07e4a5a1feea3e111870e61f4\": container with ID 
starting with c860718e2f98a4f002278ceb93152f11e9b4b7f07e4a5a1feea3e111870e61f4 not found: ID does not exist" Oct 08 09:19:54 crc kubenswrapper[4958]: I1008 09:19:54.117238 4958 scope.go:117] "RemoveContainer" containerID="9a3ef2e222bf7c3e78b9b80e794f081e9ecbb875cf2f1d80081481ca26f00852" Oct 08 09:19:54 crc kubenswrapper[4958]: E1008 09:19:54.117577 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a3ef2e222bf7c3e78b9b80e794f081e9ecbb875cf2f1d80081481ca26f00852\": container with ID starting with 9a3ef2e222bf7c3e78b9b80e794f081e9ecbb875cf2f1d80081481ca26f00852 not found: ID does not exist" containerID="9a3ef2e222bf7c3e78b9b80e794f081e9ecbb875cf2f1d80081481ca26f00852" Oct 08 09:19:54 crc kubenswrapper[4958]: I1008 09:19:54.117616 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a3ef2e222bf7c3e78b9b80e794f081e9ecbb875cf2f1d80081481ca26f00852"} err="failed to get container status \"9a3ef2e222bf7c3e78b9b80e794f081e9ecbb875cf2f1d80081481ca26f00852\": rpc error: code = NotFound desc = could not find container \"9a3ef2e222bf7c3e78b9b80e794f081e9ecbb875cf2f1d80081481ca26f00852\": container with ID starting with 9a3ef2e222bf7c3e78b9b80e794f081e9ecbb875cf2f1d80081481ca26f00852 not found: ID does not exist" Oct 08 09:19:54 crc kubenswrapper[4958]: I1008 09:19:54.381433 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nvvmz"] Oct 08 09:19:54 crc kubenswrapper[4958]: I1008 09:19:54.994196 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nvvmz" podUID="7f25792e-bffd-4abf-b5aa-e1a575ac6238" containerName="registry-server" containerID="cri-o://30a83662e6588bb79d1a31f2b3a68ce75189f5da5934f1a9278251d925f74580" gracePeriod=2 Oct 08 09:19:55 crc kubenswrapper[4958]: I1008 09:19:55.590435 4958 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="904d7426-e3fe-455a-a36a-a3510fc22633" path="/var/lib/kubelet/pods/904d7426-e3fe-455a-a36a-a3510fc22633/volumes" Oct 08 09:19:55 crc kubenswrapper[4958]: I1008 09:19:55.622652 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:55 crc kubenswrapper[4958]: I1008 09:19:55.702180 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f25792e-bffd-4abf-b5aa-e1a575ac6238-utilities\") pod \"7f25792e-bffd-4abf-b5aa-e1a575ac6238\" (UID: \"7f25792e-bffd-4abf-b5aa-e1a575ac6238\") " Oct 08 09:19:55 crc kubenswrapper[4958]: I1008 09:19:55.702354 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78ngt\" (UniqueName: \"kubernetes.io/projected/7f25792e-bffd-4abf-b5aa-e1a575ac6238-kube-api-access-78ngt\") pod \"7f25792e-bffd-4abf-b5aa-e1a575ac6238\" (UID: \"7f25792e-bffd-4abf-b5aa-e1a575ac6238\") " Oct 08 09:19:55 crc kubenswrapper[4958]: I1008 09:19:55.702407 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f25792e-bffd-4abf-b5aa-e1a575ac6238-catalog-content\") pod \"7f25792e-bffd-4abf-b5aa-e1a575ac6238\" (UID: \"7f25792e-bffd-4abf-b5aa-e1a575ac6238\") " Oct 08 09:19:55 crc kubenswrapper[4958]: I1008 09:19:55.703555 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f25792e-bffd-4abf-b5aa-e1a575ac6238-utilities" (OuterVolumeSpecName: "utilities") pod "7f25792e-bffd-4abf-b5aa-e1a575ac6238" (UID: "7f25792e-bffd-4abf-b5aa-e1a575ac6238"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:19:55 crc kubenswrapper[4958]: I1008 09:19:55.712155 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f25792e-bffd-4abf-b5aa-e1a575ac6238-kube-api-access-78ngt" (OuterVolumeSpecName: "kube-api-access-78ngt") pod "7f25792e-bffd-4abf-b5aa-e1a575ac6238" (UID: "7f25792e-bffd-4abf-b5aa-e1a575ac6238"). InnerVolumeSpecName "kube-api-access-78ngt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:19:55 crc kubenswrapper[4958]: I1008 09:19:55.752756 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f25792e-bffd-4abf-b5aa-e1a575ac6238-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f25792e-bffd-4abf-b5aa-e1a575ac6238" (UID: "7f25792e-bffd-4abf-b5aa-e1a575ac6238"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:19:55 crc kubenswrapper[4958]: I1008 09:19:55.804761 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f25792e-bffd-4abf-b5aa-e1a575ac6238-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 09:19:55 crc kubenswrapper[4958]: I1008 09:19:55.804795 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78ngt\" (UniqueName: \"kubernetes.io/projected/7f25792e-bffd-4abf-b5aa-e1a575ac6238-kube-api-access-78ngt\") on node \"crc\" DevicePath \"\"" Oct 08 09:19:55 crc kubenswrapper[4958]: I1008 09:19:55.804805 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f25792e-bffd-4abf-b5aa-e1a575ac6238-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 09:19:56 crc kubenswrapper[4958]: I1008 09:19:56.023468 4958 generic.go:334] "Generic (PLEG): container finished" podID="7f25792e-bffd-4abf-b5aa-e1a575ac6238" 
containerID="30a83662e6588bb79d1a31f2b3a68ce75189f5da5934f1a9278251d925f74580" exitCode=0 Oct 08 09:19:56 crc kubenswrapper[4958]: I1008 09:19:56.023531 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvvmz" event={"ID":"7f25792e-bffd-4abf-b5aa-e1a575ac6238","Type":"ContainerDied","Data":"30a83662e6588bb79d1a31f2b3a68ce75189f5da5934f1a9278251d925f74580"} Oct 08 09:19:56 crc kubenswrapper[4958]: I1008 09:19:56.023564 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvvmz" event={"ID":"7f25792e-bffd-4abf-b5aa-e1a575ac6238","Type":"ContainerDied","Data":"ac2ed13af15aa3ad4280348f81b783e3d49e1da8e27c4ad69aa64ea7758acd26"} Oct 08 09:19:56 crc kubenswrapper[4958]: I1008 09:19:56.023585 4958 scope.go:117] "RemoveContainer" containerID="30a83662e6588bb79d1a31f2b3a68ce75189f5da5934f1a9278251d925f74580" Oct 08 09:19:56 crc kubenswrapper[4958]: I1008 09:19:56.023718 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nvvmz" Oct 08 09:19:56 crc kubenswrapper[4958]: I1008 09:19:56.068452 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nvvmz"] Oct 08 09:19:56 crc kubenswrapper[4958]: I1008 09:19:56.077198 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nvvmz"] Oct 08 09:19:56 crc kubenswrapper[4958]: I1008 09:19:56.090703 4958 scope.go:117] "RemoveContainer" containerID="cd9291193588409902554336665602a486d98890504f08592bfe4d4ae8e69594" Oct 08 09:19:56 crc kubenswrapper[4958]: I1008 09:19:56.118046 4958 scope.go:117] "RemoveContainer" containerID="0cf389cba86a1439b24e4220e14fe6e2673d20441ee30a54f2ab495023b1bccb" Oct 08 09:19:56 crc kubenswrapper[4958]: I1008 09:19:56.178540 4958 scope.go:117] "RemoveContainer" containerID="30a83662e6588bb79d1a31f2b3a68ce75189f5da5934f1a9278251d925f74580" Oct 08 09:19:56 crc kubenswrapper[4958]: E1008 09:19:56.179216 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30a83662e6588bb79d1a31f2b3a68ce75189f5da5934f1a9278251d925f74580\": container with ID starting with 30a83662e6588bb79d1a31f2b3a68ce75189f5da5934f1a9278251d925f74580 not found: ID does not exist" containerID="30a83662e6588bb79d1a31f2b3a68ce75189f5da5934f1a9278251d925f74580" Oct 08 09:19:56 crc kubenswrapper[4958]: I1008 09:19:56.179274 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30a83662e6588bb79d1a31f2b3a68ce75189f5da5934f1a9278251d925f74580"} err="failed to get container status \"30a83662e6588bb79d1a31f2b3a68ce75189f5da5934f1a9278251d925f74580\": rpc error: code = NotFound desc = could not find container \"30a83662e6588bb79d1a31f2b3a68ce75189f5da5934f1a9278251d925f74580\": container with ID starting with 30a83662e6588bb79d1a31f2b3a68ce75189f5da5934f1a9278251d925f74580 not 
found: ID does not exist" Oct 08 09:19:56 crc kubenswrapper[4958]: I1008 09:19:56.179309 4958 scope.go:117] "RemoveContainer" containerID="cd9291193588409902554336665602a486d98890504f08592bfe4d4ae8e69594" Oct 08 09:19:56 crc kubenswrapper[4958]: E1008 09:19:56.179713 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd9291193588409902554336665602a486d98890504f08592bfe4d4ae8e69594\": container with ID starting with cd9291193588409902554336665602a486d98890504f08592bfe4d4ae8e69594 not found: ID does not exist" containerID="cd9291193588409902554336665602a486d98890504f08592bfe4d4ae8e69594" Oct 08 09:19:56 crc kubenswrapper[4958]: I1008 09:19:56.179754 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd9291193588409902554336665602a486d98890504f08592bfe4d4ae8e69594"} err="failed to get container status \"cd9291193588409902554336665602a486d98890504f08592bfe4d4ae8e69594\": rpc error: code = NotFound desc = could not find container \"cd9291193588409902554336665602a486d98890504f08592bfe4d4ae8e69594\": container with ID starting with cd9291193588409902554336665602a486d98890504f08592bfe4d4ae8e69594 not found: ID does not exist" Oct 08 09:19:56 crc kubenswrapper[4958]: I1008 09:19:56.179783 4958 scope.go:117] "RemoveContainer" containerID="0cf389cba86a1439b24e4220e14fe6e2673d20441ee30a54f2ab495023b1bccb" Oct 08 09:19:56 crc kubenswrapper[4958]: E1008 09:19:56.180124 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cf389cba86a1439b24e4220e14fe6e2673d20441ee30a54f2ab495023b1bccb\": container with ID starting with 0cf389cba86a1439b24e4220e14fe6e2673d20441ee30a54f2ab495023b1bccb not found: ID does not exist" containerID="0cf389cba86a1439b24e4220e14fe6e2673d20441ee30a54f2ab495023b1bccb" Oct 08 09:19:56 crc kubenswrapper[4958]: I1008 09:19:56.180153 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cf389cba86a1439b24e4220e14fe6e2673d20441ee30a54f2ab495023b1bccb"} err="failed to get container status \"0cf389cba86a1439b24e4220e14fe6e2673d20441ee30a54f2ab495023b1bccb\": rpc error: code = NotFound desc = could not find container \"0cf389cba86a1439b24e4220e14fe6e2673d20441ee30a54f2ab495023b1bccb\": container with ID starting with 0cf389cba86a1439b24e4220e14fe6e2673d20441ee30a54f2ab495023b1bccb not found: ID does not exist" Oct 08 09:19:57 crc kubenswrapper[4958]: I1008 09:19:57.607738 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f25792e-bffd-4abf-b5aa-e1a575ac6238" path="/var/lib/kubelet/pods/7f25792e-bffd-4abf-b5aa-e1a575ac6238/volumes" Oct 08 09:22:06 crc kubenswrapper[4958]: I1008 09:22:06.844907 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:22:06 crc kubenswrapper[4958]: I1008 09:22:06.847122 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:22:36 crc kubenswrapper[4958]: I1008 09:22:36.844915 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:22:36 crc kubenswrapper[4958]: I1008 09:22:36.845499 4958 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:23:06 crc kubenswrapper[4958]: I1008 09:23:06.845601 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:23:06 crc kubenswrapper[4958]: I1008 09:23:06.846318 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:23:06 crc kubenswrapper[4958]: I1008 09:23:06.846394 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 09:23:06 crc kubenswrapper[4958]: I1008 09:23:06.847733 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0300d467462acf668ab82487cfb3cada399a67ec010201dfde2ad77bf86a7628"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 09:23:06 crc kubenswrapper[4958]: I1008 09:23:06.847846 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" 
containerID="cri-o://0300d467462acf668ab82487cfb3cada399a67ec010201dfde2ad77bf86a7628" gracePeriod=600 Oct 08 09:23:07 crc kubenswrapper[4958]: I1008 09:23:07.486235 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="0300d467462acf668ab82487cfb3cada399a67ec010201dfde2ad77bf86a7628" exitCode=0 Oct 08 09:23:07 crc kubenswrapper[4958]: I1008 09:23:07.486301 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"0300d467462acf668ab82487cfb3cada399a67ec010201dfde2ad77bf86a7628"} Oct 08 09:23:07 crc kubenswrapper[4958]: I1008 09:23:07.486634 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"} Oct 08 09:23:07 crc kubenswrapper[4958]: I1008 09:23:07.486688 4958 scope.go:117] "RemoveContainer" containerID="75c68644e65efae662d6c69a59b514e6aaeb87febd2224d5dec0e8b7e16fd985" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.012017 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kwdmh"] Oct 08 09:24:51 crc kubenswrapper[4958]: E1008 09:24:51.012972 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904d7426-e3fe-455a-a36a-a3510fc22633" containerName="extract-content" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.012986 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="904d7426-e3fe-455a-a36a-a3510fc22633" containerName="extract-content" Oct 08 09:24:51 crc kubenswrapper[4958]: E1008 09:24:51.013019 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f25792e-bffd-4abf-b5aa-e1a575ac6238" containerName="registry-server" Oct 08 09:24:51 
crc kubenswrapper[4958]: I1008 09:24:51.013026 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f25792e-bffd-4abf-b5aa-e1a575ac6238" containerName="registry-server" Oct 08 09:24:51 crc kubenswrapper[4958]: E1008 09:24:51.013037 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904d7426-e3fe-455a-a36a-a3510fc22633" containerName="registry-server" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.013043 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="904d7426-e3fe-455a-a36a-a3510fc22633" containerName="registry-server" Oct 08 09:24:51 crc kubenswrapper[4958]: E1008 09:24:51.013054 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f25792e-bffd-4abf-b5aa-e1a575ac6238" containerName="extract-content" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.013060 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f25792e-bffd-4abf-b5aa-e1a575ac6238" containerName="extract-content" Oct 08 09:24:51 crc kubenswrapper[4958]: E1008 09:24:51.013078 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904d7426-e3fe-455a-a36a-a3510fc22633" containerName="extract-utilities" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.013084 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="904d7426-e3fe-455a-a36a-a3510fc22633" containerName="extract-utilities" Oct 08 09:24:51 crc kubenswrapper[4958]: E1008 09:24:51.013095 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f25792e-bffd-4abf-b5aa-e1a575ac6238" containerName="extract-utilities" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.013101 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f25792e-bffd-4abf-b5aa-e1a575ac6238" containerName="extract-utilities" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.013317 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f25792e-bffd-4abf-b5aa-e1a575ac6238" containerName="registry-server" Oct 08 09:24:51 crc 
kubenswrapper[4958]: I1008 09:24:51.013336 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="904d7426-e3fe-455a-a36a-a3510fc22633" containerName="registry-server" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.014891 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.021199 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kwdmh"] Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.155059 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-catalog-content\") pod \"community-operators-kwdmh\" (UID: \"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad\") " pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.155544 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-utilities\") pod \"community-operators-kwdmh\" (UID: \"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad\") " pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.155731 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwrq6\" (UniqueName: \"kubernetes.io/projected/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-kube-api-access-rwrq6\") pod \"community-operators-kwdmh\" (UID: \"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad\") " pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.257173 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwrq6\" (UniqueName: 
\"kubernetes.io/projected/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-kube-api-access-rwrq6\") pod \"community-operators-kwdmh\" (UID: \"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad\") " pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.257287 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-catalog-content\") pod \"community-operators-kwdmh\" (UID: \"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad\") " pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.257343 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-utilities\") pod \"community-operators-kwdmh\" (UID: \"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad\") " pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.257829 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-catalog-content\") pod \"community-operators-kwdmh\" (UID: \"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad\") " pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.258025 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-utilities\") pod \"community-operators-kwdmh\" (UID: \"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad\") " pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.275978 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwrq6\" (UniqueName: 
\"kubernetes.io/projected/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-kube-api-access-rwrq6\") pod \"community-operators-kwdmh\" (UID: \"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad\") " pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.360980 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:24:51 crc kubenswrapper[4958]: I1008 09:24:51.929543 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kwdmh"] Oct 08 09:24:52 crc kubenswrapper[4958]: I1008 09:24:52.776074 4958 generic.go:334] "Generic (PLEG): container finished" podID="65256bd1-5b6f-4a7e-86cb-aeb0184b27ad" containerID="b1acce430183b50fe7d16d64790a82dee01c84b836f4f2dcc1c4634888e79744" exitCode=0 Oct 08 09:24:52 crc kubenswrapper[4958]: I1008 09:24:52.776322 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwdmh" event={"ID":"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad","Type":"ContainerDied","Data":"b1acce430183b50fe7d16d64790a82dee01c84b836f4f2dcc1c4634888e79744"} Oct 08 09:24:52 crc kubenswrapper[4958]: I1008 09:24:52.776732 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwdmh" event={"ID":"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad","Type":"ContainerStarted","Data":"00bd5fbcbeef47126363281d3ff9ec4a515d309dbce7a0ad92c73ddc20e75103"} Oct 08 09:24:52 crc kubenswrapper[4958]: I1008 09:24:52.780203 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 09:24:54 crc kubenswrapper[4958]: I1008 09:24:54.812121 4958 generic.go:334] "Generic (PLEG): container finished" podID="65256bd1-5b6f-4a7e-86cb-aeb0184b27ad" containerID="25ab662538fdaa687c959a00fb4af420fd8fc4e1727091520ea31c0de1bd0883" exitCode=0 Oct 08 09:24:54 crc kubenswrapper[4958]: I1008 09:24:54.812199 4958 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwdmh" event={"ID":"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad","Type":"ContainerDied","Data":"25ab662538fdaa687c959a00fb4af420fd8fc4e1727091520ea31c0de1bd0883"} Oct 08 09:24:55 crc kubenswrapper[4958]: I1008 09:24:55.828173 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwdmh" event={"ID":"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad","Type":"ContainerStarted","Data":"b72cd97d9064e54b38344c4d35ee86ffa1ce218ccd96b8fffc735fb329cdba55"} Oct 08 09:24:55 crc kubenswrapper[4958]: I1008 09:24:55.859285 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kwdmh" podStartSLOduration=3.3589690709999998 podStartE2EDuration="5.859256738s" podCreationTimestamp="2025-10-08 09:24:50 +0000 UTC" firstStartedPulling="2025-10-08 09:24:52.779441251 +0000 UTC m=+10235.909133902" lastFinishedPulling="2025-10-08 09:24:55.279728918 +0000 UTC m=+10238.409421569" observedRunningTime="2025-10-08 09:24:55.847783117 +0000 UTC m=+10238.977475758" watchObservedRunningTime="2025-10-08 09:24:55.859256738 +0000 UTC m=+10238.988949359" Oct 08 09:25:01 crc kubenswrapper[4958]: I1008 09:25:01.361299 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:25:01 crc kubenswrapper[4958]: I1008 09:25:01.362109 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:25:01 crc kubenswrapper[4958]: I1008 09:25:01.428921 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:25:01 crc kubenswrapper[4958]: I1008 09:25:01.995231 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:25:03 crc 
kubenswrapper[4958]: I1008 09:25:03.181355 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kwdmh"] Oct 08 09:25:03 crc kubenswrapper[4958]: I1008 09:25:03.922602 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kwdmh" podUID="65256bd1-5b6f-4a7e-86cb-aeb0184b27ad" containerName="registry-server" containerID="cri-o://b72cd97d9064e54b38344c4d35ee86ffa1ce218ccd96b8fffc735fb329cdba55" gracePeriod=2 Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.523175 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.602731 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-utilities\") pod \"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad\" (UID: \"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad\") " Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.602944 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwrq6\" (UniqueName: \"kubernetes.io/projected/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-kube-api-access-rwrq6\") pod \"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad\" (UID: \"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad\") " Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.603035 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-catalog-content\") pod \"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad\" (UID: \"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad\") " Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.604028 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-utilities" (OuterVolumeSpecName: "utilities") pod "65256bd1-5b6f-4a7e-86cb-aeb0184b27ad" (UID: "65256bd1-5b6f-4a7e-86cb-aeb0184b27ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.612048 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-kube-api-access-rwrq6" (OuterVolumeSpecName: "kube-api-access-rwrq6") pod "65256bd1-5b6f-4a7e-86cb-aeb0184b27ad" (UID: "65256bd1-5b6f-4a7e-86cb-aeb0184b27ad"). InnerVolumeSpecName "kube-api-access-rwrq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.663001 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65256bd1-5b6f-4a7e-86cb-aeb0184b27ad" (UID: "65256bd1-5b6f-4a7e-86cb-aeb0184b27ad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.707649 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.707696 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.707717 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwrq6\" (UniqueName: \"kubernetes.io/projected/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad-kube-api-access-rwrq6\") on node \"crc\" DevicePath \"\"" Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.936265 4958 generic.go:334] "Generic (PLEG): container finished" podID="65256bd1-5b6f-4a7e-86cb-aeb0184b27ad" containerID="b72cd97d9064e54b38344c4d35ee86ffa1ce218ccd96b8fffc735fb329cdba55" exitCode=0 Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.936325 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwdmh" event={"ID":"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad","Type":"ContainerDied","Data":"b72cd97d9064e54b38344c4d35ee86ffa1ce218ccd96b8fffc735fb329cdba55"} Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.936767 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwdmh" event={"ID":"65256bd1-5b6f-4a7e-86cb-aeb0184b27ad","Type":"ContainerDied","Data":"00bd5fbcbeef47126363281d3ff9ec4a515d309dbce7a0ad92c73ddc20e75103"} Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.936816 4958 scope.go:117] "RemoveContainer" containerID="b72cd97d9064e54b38344c4d35ee86ffa1ce218ccd96b8fffc735fb329cdba55" Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 
09:25:04.936341 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kwdmh" Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.981426 4958 scope.go:117] "RemoveContainer" containerID="25ab662538fdaa687c959a00fb4af420fd8fc4e1727091520ea31c0de1bd0883" Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.985381 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kwdmh"] Oct 08 09:25:04 crc kubenswrapper[4958]: I1008 09:25:04.995931 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kwdmh"] Oct 08 09:25:05 crc kubenswrapper[4958]: I1008 09:25:05.037895 4958 scope.go:117] "RemoveContainer" containerID="b1acce430183b50fe7d16d64790a82dee01c84b836f4f2dcc1c4634888e79744" Oct 08 09:25:05 crc kubenswrapper[4958]: I1008 09:25:05.076439 4958 scope.go:117] "RemoveContainer" containerID="b72cd97d9064e54b38344c4d35ee86ffa1ce218ccd96b8fffc735fb329cdba55" Oct 08 09:25:05 crc kubenswrapper[4958]: E1008 09:25:05.077494 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b72cd97d9064e54b38344c4d35ee86ffa1ce218ccd96b8fffc735fb329cdba55\": container with ID starting with b72cd97d9064e54b38344c4d35ee86ffa1ce218ccd96b8fffc735fb329cdba55 not found: ID does not exist" containerID="b72cd97d9064e54b38344c4d35ee86ffa1ce218ccd96b8fffc735fb329cdba55" Oct 08 09:25:05 crc kubenswrapper[4958]: I1008 09:25:05.077567 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b72cd97d9064e54b38344c4d35ee86ffa1ce218ccd96b8fffc735fb329cdba55"} err="failed to get container status \"b72cd97d9064e54b38344c4d35ee86ffa1ce218ccd96b8fffc735fb329cdba55\": rpc error: code = NotFound desc = could not find container \"b72cd97d9064e54b38344c4d35ee86ffa1ce218ccd96b8fffc735fb329cdba55\": container with ID starting with 
b72cd97d9064e54b38344c4d35ee86ffa1ce218ccd96b8fffc735fb329cdba55 not found: ID does not exist" Oct 08 09:25:05 crc kubenswrapper[4958]: I1008 09:25:05.077616 4958 scope.go:117] "RemoveContainer" containerID="25ab662538fdaa687c959a00fb4af420fd8fc4e1727091520ea31c0de1bd0883" Oct 08 09:25:05 crc kubenswrapper[4958]: E1008 09:25:05.078255 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25ab662538fdaa687c959a00fb4af420fd8fc4e1727091520ea31c0de1bd0883\": container with ID starting with 25ab662538fdaa687c959a00fb4af420fd8fc4e1727091520ea31c0de1bd0883 not found: ID does not exist" containerID="25ab662538fdaa687c959a00fb4af420fd8fc4e1727091520ea31c0de1bd0883" Oct 08 09:25:05 crc kubenswrapper[4958]: I1008 09:25:05.078325 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25ab662538fdaa687c959a00fb4af420fd8fc4e1727091520ea31c0de1bd0883"} err="failed to get container status \"25ab662538fdaa687c959a00fb4af420fd8fc4e1727091520ea31c0de1bd0883\": rpc error: code = NotFound desc = could not find container \"25ab662538fdaa687c959a00fb4af420fd8fc4e1727091520ea31c0de1bd0883\": container with ID starting with 25ab662538fdaa687c959a00fb4af420fd8fc4e1727091520ea31c0de1bd0883 not found: ID does not exist" Oct 08 09:25:05 crc kubenswrapper[4958]: I1008 09:25:05.078388 4958 scope.go:117] "RemoveContainer" containerID="b1acce430183b50fe7d16d64790a82dee01c84b836f4f2dcc1c4634888e79744" Oct 08 09:25:05 crc kubenswrapper[4958]: E1008 09:25:05.079176 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1acce430183b50fe7d16d64790a82dee01c84b836f4f2dcc1c4634888e79744\": container with ID starting with b1acce430183b50fe7d16d64790a82dee01c84b836f4f2dcc1c4634888e79744 not found: ID does not exist" containerID="b1acce430183b50fe7d16d64790a82dee01c84b836f4f2dcc1c4634888e79744" Oct 08 09:25:05 crc 
kubenswrapper[4958]: I1008 09:25:05.079249 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1acce430183b50fe7d16d64790a82dee01c84b836f4f2dcc1c4634888e79744"} err="failed to get container status \"b1acce430183b50fe7d16d64790a82dee01c84b836f4f2dcc1c4634888e79744\": rpc error: code = NotFound desc = could not find container \"b1acce430183b50fe7d16d64790a82dee01c84b836f4f2dcc1c4634888e79744\": container with ID starting with b1acce430183b50fe7d16d64790a82dee01c84b836f4f2dcc1c4634888e79744 not found: ID does not exist" Oct 08 09:25:05 crc kubenswrapper[4958]: I1008 09:25:05.597832 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65256bd1-5b6f-4a7e-86cb-aeb0184b27ad" path="/var/lib/kubelet/pods/65256bd1-5b6f-4a7e-86cb-aeb0184b27ad/volumes" Oct 08 09:25:29 crc kubenswrapper[4958]: I1008 09:25:29.272410 4958 generic.go:334] "Generic (PLEG): container finished" podID="093ebdbf-a8f2-422f-ad7f-a7893ea25990" containerID="c91fcfadd3f705c0299381097a8ced17ded75fa5c14b44eda0b89cc861e979e1" exitCode=0 Oct 08 09:25:29 crc kubenswrapper[4958]: I1008 09:25:29.272526 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" event={"ID":"093ebdbf-a8f2-422f-ad7f-a7893ea25990","Type":"ContainerDied","Data":"c91fcfadd3f705c0299381097a8ced17ded75fa5c14b44eda0b89cc861e979e1"} Oct 08 09:25:30 crc kubenswrapper[4958]: I1008 09:25:30.946808 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:25:30 crc kubenswrapper[4958]: I1008 09:25:30.982689 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-migration-ssh-key-1\") pod \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " Oct 08 09:25:30 crc kubenswrapper[4958]: I1008 09:25:30.982774 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-inventory\") pod \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " Oct 08 09:25:30 crc kubenswrapper[4958]: I1008 09:25:30.982824 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-ssh-key\") pod \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " Oct 08 09:25:30 crc kubenswrapper[4958]: I1008 09:25:30.982920 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-combined-ca-bundle\") pod \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " Oct 08 09:25:30 crc kubenswrapper[4958]: I1008 09:25:30.982995 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-compute-config-0\") pod \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " Oct 08 09:25:30 crc kubenswrapper[4958]: I1008 09:25:30.983023 4958 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-compute-config-1\") pod \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " Oct 08 09:25:30 crc kubenswrapper[4958]: I1008 09:25:30.983065 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5czwc\" (UniqueName: \"kubernetes.io/projected/093ebdbf-a8f2-422f-ad7f-a7893ea25990-kube-api-access-5czwc\") pod \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " Oct 08 09:25:30 crc kubenswrapper[4958]: I1008 09:25:30.983226 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-migration-ssh-key-0\") pod \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " Oct 08 09:25:30 crc kubenswrapper[4958]: I1008 09:25:30.983308 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cells-global-config-0\") pod \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\" (UID: \"093ebdbf-a8f2-422f-ad7f-a7893ea25990\") " Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.013770 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/093ebdbf-a8f2-422f-ad7f-a7893ea25990-kube-api-access-5czwc" (OuterVolumeSpecName: "kube-api-access-5czwc") pod "093ebdbf-a8f2-422f-ad7f-a7893ea25990" (UID: "093ebdbf-a8f2-422f-ad7f-a7893ea25990"). InnerVolumeSpecName "kube-api-access-5czwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.013821 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "093ebdbf-a8f2-422f-ad7f-a7893ea25990" (UID: "093ebdbf-a8f2-422f-ad7f-a7893ea25990"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.039369 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "093ebdbf-a8f2-422f-ad7f-a7893ea25990" (UID: "093ebdbf-a8f2-422f-ad7f-a7893ea25990"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.054434 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "093ebdbf-a8f2-422f-ad7f-a7893ea25990" (UID: "093ebdbf-a8f2-422f-ad7f-a7893ea25990"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.064577 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "093ebdbf-a8f2-422f-ad7f-a7893ea25990" (UID: "093ebdbf-a8f2-422f-ad7f-a7893ea25990"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.083820 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "093ebdbf-a8f2-422f-ad7f-a7893ea25990" (UID: "093ebdbf-a8f2-422f-ad7f-a7893ea25990"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.086774 4958 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.086797 4958 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.086810 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5czwc\" (UniqueName: \"kubernetes.io/projected/093ebdbf-a8f2-422f-ad7f-a7893ea25990-kube-api-access-5czwc\") on node \"crc\" DevicePath \"\"" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.086822 4958 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.086835 4958 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 08 09:25:31 crc 
kubenswrapper[4958]: I1008 09:25:31.086848 4958 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.095571 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "093ebdbf-a8f2-422f-ad7f-a7893ea25990" (UID: "093ebdbf-a8f2-422f-ad7f-a7893ea25990"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.095723 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "093ebdbf-a8f2-422f-ad7f-a7893ea25990" (UID: "093ebdbf-a8f2-422f-ad7f-a7893ea25990"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.100777 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-inventory" (OuterVolumeSpecName: "inventory") pod "093ebdbf-a8f2-422f-ad7f-a7893ea25990" (UID: "093ebdbf-a8f2-422f-ad7f-a7893ea25990"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.189539 4958 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.189596 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-inventory\") on node \"crc\" DevicePath \"\"" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.189609 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/093ebdbf-a8f2-422f-ad7f-a7893ea25990-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.303335 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" event={"ID":"093ebdbf-a8f2-422f-ad7f-a7893ea25990","Type":"ContainerDied","Data":"78aabd4ef6c7ee55ab8c26a1ac5352aacce226fc6e2add903f1258b877a13ac0"} Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.303382 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78aabd4ef6c7ee55ab8c26a1ac5352aacce226fc6e2add903f1258b877a13ac0" Oct 08 09:25:31 crc kubenswrapper[4958]: I1008 09:25:31.303529 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr" Oct 08 09:25:36 crc kubenswrapper[4958]: I1008 09:25:36.844849 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:25:36 crc kubenswrapper[4958]: I1008 09:25:36.845688 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:26:06 crc kubenswrapper[4958]: I1008 09:26:06.844846 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:26:06 crc kubenswrapper[4958]: I1008 09:26:06.845794 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:26:36 crc kubenswrapper[4958]: I1008 09:26:36.844711 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:26:36 crc kubenswrapper[4958]: I1008 
09:26:36.845450 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:26:36 crc kubenswrapper[4958]: I1008 09:26:36.845516 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 09:26:36 crc kubenswrapper[4958]: I1008 09:26:36.846419 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 09:26:36 crc kubenswrapper[4958]: I1008 09:26:36.846517 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607" gracePeriod=600 Oct 08 09:26:36 crc kubenswrapper[4958]: E1008 09:26:36.975912 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:26:37 crc kubenswrapper[4958]: I1008 09:26:37.177464 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607" exitCode=0
Oct 08 09:26:37 crc kubenswrapper[4958]: I1008 09:26:37.177515 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"}
Oct 08 09:26:37 crc kubenswrapper[4958]: I1008 09:26:37.177555 4958 scope.go:117] "RemoveContainer" containerID="0300d467462acf668ab82487cfb3cada399a67ec010201dfde2ad77bf86a7628"
Oct 08 09:26:37 crc kubenswrapper[4958]: I1008 09:26:37.178163 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:26:37 crc kubenswrapper[4958]: E1008 09:26:37.178469 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:26:48 crc kubenswrapper[4958]: I1008 09:26:48.576640 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:26:48 crc kubenswrapper[4958]: E1008 09:26:48.577760 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:26:59 crc kubenswrapper[4958]: I1008 09:26:59.577454 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:26:59 crc kubenswrapper[4958]: E1008 09:26:59.578839 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:27:05 crc kubenswrapper[4958]: I1008 09:27:05.962071 4958 trace.go:236] Trace[1342084336]: "Calculate volume metrics of ovn-data for pod openstack/ovn-copy-data" (08-Oct-2025 09:27:04.940) (total time: 1021ms):
Oct 08 09:27:05 crc kubenswrapper[4958]: Trace[1342084336]: [1.02171765s] [1.02171765s] END
Oct 08 09:27:12 crc kubenswrapper[4958]: I1008 09:27:12.328232 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"]
Oct 08 09:27:12 crc kubenswrapper[4958]: I1008 09:27:12.330252 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e" containerName="adoption" containerID="cri-o://30675e0cf41c9dfcdb78a3d338cc447d22cf76c87418a95e708f2cae82226ee2" gracePeriod=30
Oct 08 09:27:14 crc kubenswrapper[4958]: I1008 09:27:14.577141 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:27:14 crc kubenswrapper[4958]: E1008 09:27:14.578321 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:27:28 crc kubenswrapper[4958]: I1008 09:27:28.576633 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:27:28 crc kubenswrapper[4958]: E1008 09:27:28.580209 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:27:40 crc kubenswrapper[4958]: I1008 09:27:40.577699 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:27:40 crc kubenswrapper[4958]: E1008 09:27:40.578669 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:27:42 crc kubenswrapper[4958]: I1008 09:27:42.847439 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Oct 08 09:27:42 crc kubenswrapper[4958]: I1008 09:27:42.943751 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pclh\" (UniqueName: \"kubernetes.io/projected/9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e-kube-api-access-5pclh\") pod \"9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e\" (UID: \"9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e\") "
Oct 08 09:27:42 crc kubenswrapper[4958]: I1008 09:27:42.944576 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e\") pod \"9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e\" (UID: \"9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e\") "
Oct 08 09:27:42 crc kubenswrapper[4958]: I1008 09:27:42.951638 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e-kube-api-access-5pclh" (OuterVolumeSpecName: "kube-api-access-5pclh") pod "9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e" (UID: "9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e"). InnerVolumeSpecName "kube-api-access-5pclh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 09:27:42 crc kubenswrapper[4958]: I1008 09:27:42.976169 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e" (OuterVolumeSpecName: "mariadb-data") pod "9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e" (UID: "9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e"). InnerVolumeSpecName "pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e". PluginName "kubernetes.io/csi", VolumeGidValue ""
Oct 08 09:27:43 crc kubenswrapper[4958]: I1008 09:27:43.047126 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pclh\" (UniqueName: \"kubernetes.io/projected/9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e-kube-api-access-5pclh\") on node \"crc\" DevicePath \"\""
Oct 08 09:27:43 crc kubenswrapper[4958]: I1008 09:27:43.047206 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e\") on node \"crc\" "
Oct 08 09:27:43 crc kubenswrapper[4958]: I1008 09:27:43.056005 4958 generic.go:334] "Generic (PLEG): container finished" podID="9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e" containerID="30675e0cf41c9dfcdb78a3d338cc447d22cf76c87418a95e708f2cae82226ee2" exitCode=137
Oct 08 09:27:43 crc kubenswrapper[4958]: I1008 09:27:43.056063 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e","Type":"ContainerDied","Data":"30675e0cf41c9dfcdb78a3d338cc447d22cf76c87418a95e708f2cae82226ee2"}
Oct 08 09:27:43 crc kubenswrapper[4958]: I1008 09:27:43.056100 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e","Type":"ContainerDied","Data":"267188183c4b8978e4c374d8da7310837d5667892e3ef7f77ec8236bccca3eb8"}
Oct 08 09:27:43 crc kubenswrapper[4958]: I1008 09:27:43.056124 4958 scope.go:117] "RemoveContainer" containerID="30675e0cf41c9dfcdb78a3d338cc447d22cf76c87418a95e708f2cae82226ee2"
Oct 08 09:27:43 crc kubenswrapper[4958]: I1008 09:27:43.056306 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Oct 08 09:27:43 crc kubenswrapper[4958]: I1008 09:27:43.082423 4958 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Oct 08 09:27:43 crc kubenswrapper[4958]: I1008 09:27:43.082655 4958 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e") on node "crc"
Oct 08 09:27:43 crc kubenswrapper[4958]: I1008 09:27:43.095160 4958 scope.go:117] "RemoveContainer" containerID="30675e0cf41c9dfcdb78a3d338cc447d22cf76c87418a95e708f2cae82226ee2"
Oct 08 09:27:43 crc kubenswrapper[4958]: E1008 09:27:43.095697 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30675e0cf41c9dfcdb78a3d338cc447d22cf76c87418a95e708f2cae82226ee2\": container with ID starting with 30675e0cf41c9dfcdb78a3d338cc447d22cf76c87418a95e708f2cae82226ee2 not found: ID does not exist" containerID="30675e0cf41c9dfcdb78a3d338cc447d22cf76c87418a95e708f2cae82226ee2"
Oct 08 09:27:43 crc kubenswrapper[4958]: I1008 09:27:43.095730 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30675e0cf41c9dfcdb78a3d338cc447d22cf76c87418a95e708f2cae82226ee2"} err="failed to get container status \"30675e0cf41c9dfcdb78a3d338cc447d22cf76c87418a95e708f2cae82226ee2\": rpc error: code = NotFound desc = could not find container \"30675e0cf41c9dfcdb78a3d338cc447d22cf76c87418a95e708f2cae82226ee2\": container with ID starting with 30675e0cf41c9dfcdb78a3d338cc447d22cf76c87418a95e708f2cae82226ee2 not found: ID does not exist"
Oct 08 09:27:43 crc kubenswrapper[4958]: I1008 09:27:43.113996 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"]
Oct 08 09:27:43 crc kubenswrapper[4958]: I1008 09:27:43.124730 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"]
Oct 08 09:27:43 crc kubenswrapper[4958]: I1008 09:27:43.149757 4958 reconciler_common.go:293] "Volume detached for volume \"pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4345689e-cdf3-411a-b3ea-e33e066fde8e\") on node \"crc\" DevicePath \"\""
Oct 08 09:27:43 crc kubenswrapper[4958]: I1008 09:27:43.599411 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e" path="/var/lib/kubelet/pods/9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e/volumes"
Oct 08 09:27:44 crc kubenswrapper[4958]: I1008 09:27:44.048069 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"]
Oct 08 09:27:44 crc kubenswrapper[4958]: I1008 09:27:44.048727 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="3145cf95-11d2-48dd-aa60-f8f4dc91d130" containerName="adoption" containerID="cri-o://996dbf9acb795e8dfd22760f22da69c08a8c759c537d4f1cc1a09451a135fb7c" gracePeriod=30
Oct 08 09:27:54 crc kubenswrapper[4958]: I1008 09:27:54.576999 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:27:54 crc kubenswrapper[4958]: E1008 09:27:54.577766 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:28:07 crc kubenswrapper[4958]: I1008 09:28:07.595622 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:28:07 crc kubenswrapper[4958]: E1008 09:28:07.596755 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:28:14 crc kubenswrapper[4958]: I1008 09:28:14.455831 4958 generic.go:334] "Generic (PLEG): container finished" podID="3145cf95-11d2-48dd-aa60-f8f4dc91d130" containerID="996dbf9acb795e8dfd22760f22da69c08a8c759c537d4f1cc1a09451a135fb7c" exitCode=137
Oct 08 09:28:14 crc kubenswrapper[4958]: I1008 09:28:14.455877 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"3145cf95-11d2-48dd-aa60-f8f4dc91d130","Type":"ContainerDied","Data":"996dbf9acb795e8dfd22760f22da69c08a8c759c537d4f1cc1a09451a135fb7c"}
Oct 08 09:28:14 crc kubenswrapper[4958]: I1008 09:28:14.675753 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Oct 08 09:28:14 crc kubenswrapper[4958]: I1008 09:28:14.769715 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr27k\" (UniqueName: \"kubernetes.io/projected/3145cf95-11d2-48dd-aa60-f8f4dc91d130-kube-api-access-kr27k\") pod \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\" (UID: \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\") "
Oct 08 09:28:14 crc kubenswrapper[4958]: I1008 09:28:14.769802 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3145cf95-11d2-48dd-aa60-f8f4dc91d130-ovn-data-cert\") pod \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\" (UID: \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\") "
Oct 08 09:28:14 crc kubenswrapper[4958]: I1008 09:28:14.770618 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-269d466d-4666-4d00-818d-2e3ec2b0b202\") pod \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\" (UID: \"3145cf95-11d2-48dd-aa60-f8f4dc91d130\") "
Oct 08 09:28:14 crc kubenswrapper[4958]: I1008 09:28:14.776428 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3145cf95-11d2-48dd-aa60-f8f4dc91d130-kube-api-access-kr27k" (OuterVolumeSpecName: "kube-api-access-kr27k") pod "3145cf95-11d2-48dd-aa60-f8f4dc91d130" (UID: "3145cf95-11d2-48dd-aa60-f8f4dc91d130"). InnerVolumeSpecName "kube-api-access-kr27k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 09:28:14 crc kubenswrapper[4958]: I1008 09:28:14.777560 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3145cf95-11d2-48dd-aa60-f8f4dc91d130-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "3145cf95-11d2-48dd-aa60-f8f4dc91d130" (UID: "3145cf95-11d2-48dd-aa60-f8f4dc91d130"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 08 09:28:14 crc kubenswrapper[4958]: I1008 09:28:14.797672 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-269d466d-4666-4d00-818d-2e3ec2b0b202" (OuterVolumeSpecName: "ovn-data") pod "3145cf95-11d2-48dd-aa60-f8f4dc91d130" (UID: "3145cf95-11d2-48dd-aa60-f8f4dc91d130"). InnerVolumeSpecName "pvc-269d466d-4666-4d00-818d-2e3ec2b0b202". PluginName "kubernetes.io/csi", VolumeGidValue ""
Oct 08 09:28:14 crc kubenswrapper[4958]: I1008 09:28:14.873619 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-269d466d-4666-4d00-818d-2e3ec2b0b202\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-269d466d-4666-4d00-818d-2e3ec2b0b202\") on node \"crc\" "
Oct 08 09:28:14 crc kubenswrapper[4958]: I1008 09:28:14.873665 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr27k\" (UniqueName: \"kubernetes.io/projected/3145cf95-11d2-48dd-aa60-f8f4dc91d130-kube-api-access-kr27k\") on node \"crc\" DevicePath \"\""
Oct 08 09:28:14 crc kubenswrapper[4958]: I1008 09:28:14.873712 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3145cf95-11d2-48dd-aa60-f8f4dc91d130-ovn-data-cert\") on node \"crc\" DevicePath \"\""
Oct 08 09:28:14 crc kubenswrapper[4958]: I1008 09:28:14.924716 4958 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Oct 08 09:28:14 crc kubenswrapper[4958]: I1008 09:28:14.924885 4958 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-269d466d-4666-4d00-818d-2e3ec2b0b202" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-269d466d-4666-4d00-818d-2e3ec2b0b202") on node "crc"
Oct 08 09:28:14 crc kubenswrapper[4958]: I1008 09:28:14.976560 4958 reconciler_common.go:293] "Volume detached for volume \"pvc-269d466d-4666-4d00-818d-2e3ec2b0b202\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-269d466d-4666-4d00-818d-2e3ec2b0b202\") on node \"crc\" DevicePath \"\""
Oct 08 09:28:15 crc kubenswrapper[4958]: I1008 09:28:15.484574 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"3145cf95-11d2-48dd-aa60-f8f4dc91d130","Type":"ContainerDied","Data":"de6700f8df8cb0fa8995fee7b45dbeb2c06661c664ccac49fc4c908bf8cedd56"}
Oct 08 09:28:15 crc kubenswrapper[4958]: I1008 09:28:15.484999 4958 scope.go:117] "RemoveContainer" containerID="996dbf9acb795e8dfd22760f22da69c08a8c759c537d4f1cc1a09451a135fb7c"
Oct 08 09:28:15 crc kubenswrapper[4958]: I1008 09:28:15.484656 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Oct 08 09:28:15 crc kubenswrapper[4958]: I1008 09:28:15.550538 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"]
Oct 08 09:28:15 crc kubenswrapper[4958]: I1008 09:28:15.563123 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"]
Oct 08 09:28:15 crc kubenswrapper[4958]: I1008 09:28:15.592827 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3145cf95-11d2-48dd-aa60-f8f4dc91d130" path="/var/lib/kubelet/pods/3145cf95-11d2-48dd-aa60-f8f4dc91d130/volumes"
Oct 08 09:28:21 crc kubenswrapper[4958]: I1008 09:28:21.577294 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:28:21 crc kubenswrapper[4958]: E1008 09:28:21.578717 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:28:32 crc kubenswrapper[4958]: I1008 09:28:32.577261 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:28:32 crc kubenswrapper[4958]: E1008 09:28:32.578221 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:28:43 crc kubenswrapper[4958]: I1008 09:28:43.579455 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:28:43 crc kubenswrapper[4958]: E1008 09:28:43.580777 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:28:56 crc kubenswrapper[4958]: I1008 09:28:56.576696 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:28:56 crc kubenswrapper[4958]: E1008 09:28:56.577366 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.981318 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tzk4j/must-gather-st8j9"]
Oct 08 09:29:05 crc kubenswrapper[4958]: E1008 09:29:05.982547 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3145cf95-11d2-48dd-aa60-f8f4dc91d130" containerName="adoption"
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.982563 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3145cf95-11d2-48dd-aa60-f8f4dc91d130" containerName="adoption"
Oct 08 09:29:05 crc kubenswrapper[4958]: E1008 09:29:05.982603 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65256bd1-5b6f-4a7e-86cb-aeb0184b27ad" containerName="extract-content"
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.982611 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="65256bd1-5b6f-4a7e-86cb-aeb0184b27ad" containerName="extract-content"
Oct 08 09:29:05 crc kubenswrapper[4958]: E1008 09:29:05.982621 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65256bd1-5b6f-4a7e-86cb-aeb0184b27ad" containerName="extract-utilities"
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.982629 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="65256bd1-5b6f-4a7e-86cb-aeb0184b27ad" containerName="extract-utilities"
Oct 08 09:29:05 crc kubenswrapper[4958]: E1008 09:29:05.982641 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65256bd1-5b6f-4a7e-86cb-aeb0184b27ad" containerName="registry-server"
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.982649 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="65256bd1-5b6f-4a7e-86cb-aeb0184b27ad" containerName="registry-server"
Oct 08 09:29:05 crc kubenswrapper[4958]: E1008 09:29:05.982675 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="093ebdbf-a8f2-422f-ad7f-a7893ea25990" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1"
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.982685 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="093ebdbf-a8f2-422f-ad7f-a7893ea25990" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1"
Oct 08 09:29:05 crc kubenswrapper[4958]: E1008 09:29:05.982704 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e" containerName="adoption"
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.982713 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e" containerName="adoption"
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.992126 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="093ebdbf-a8f2-422f-ad7f-a7893ea25990" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1"
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.992177 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3145cf95-11d2-48dd-aa60-f8f4dc91d130" containerName="adoption"
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.992208 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c1fa7eb-9a27-4e66-9a81-1cfffbfb4b4e" containerName="adoption"
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.992216 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="65256bd1-5b6f-4a7e-86cb-aeb0184b27ad" containerName="registry-server"
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.993657 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzk4j/must-gather-st8j9"
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.996161 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tzk4j/must-gather-st8j9"]
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.997595 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tzk4j"/"openshift-service-ca.crt"
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.997857 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tzk4j"/"kube-root-ca.crt"
Oct 08 09:29:05 crc kubenswrapper[4958]: I1008 09:29:05.997930 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tzk4j"/"default-dockercfg-hpwdr"
Oct 08 09:29:06 crc kubenswrapper[4958]: I1008 09:29:06.194882 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr2mm\" (UniqueName: \"kubernetes.io/projected/c79bde84-3cd1-43fe-86bb-2de25276e513-kube-api-access-jr2mm\") pod \"must-gather-st8j9\" (UID: \"c79bde84-3cd1-43fe-86bb-2de25276e513\") " pod="openshift-must-gather-tzk4j/must-gather-st8j9"
Oct 08 09:29:06 crc kubenswrapper[4958]: I1008 09:29:06.195220 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c79bde84-3cd1-43fe-86bb-2de25276e513-must-gather-output\") pod \"must-gather-st8j9\" (UID: \"c79bde84-3cd1-43fe-86bb-2de25276e513\") " pod="openshift-must-gather-tzk4j/must-gather-st8j9"
Oct 08 09:29:06 crc kubenswrapper[4958]: I1008 09:29:06.299571 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c79bde84-3cd1-43fe-86bb-2de25276e513-must-gather-output\") pod \"must-gather-st8j9\" (UID: \"c79bde84-3cd1-43fe-86bb-2de25276e513\") " pod="openshift-must-gather-tzk4j/must-gather-st8j9"
Oct 08 09:29:06 crc kubenswrapper[4958]: I1008 09:29:06.299763 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr2mm\" (UniqueName: \"kubernetes.io/projected/c79bde84-3cd1-43fe-86bb-2de25276e513-kube-api-access-jr2mm\") pod \"must-gather-st8j9\" (UID: \"c79bde84-3cd1-43fe-86bb-2de25276e513\") " pod="openshift-must-gather-tzk4j/must-gather-st8j9"
Oct 08 09:29:06 crc kubenswrapper[4958]: I1008 09:29:06.299971 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c79bde84-3cd1-43fe-86bb-2de25276e513-must-gather-output\") pod \"must-gather-st8j9\" (UID: \"c79bde84-3cd1-43fe-86bb-2de25276e513\") " pod="openshift-must-gather-tzk4j/must-gather-st8j9"
Oct 08 09:29:06 crc kubenswrapper[4958]: I1008 09:29:06.318170 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr2mm\" (UniqueName: \"kubernetes.io/projected/c79bde84-3cd1-43fe-86bb-2de25276e513-kube-api-access-jr2mm\") pod \"must-gather-st8j9\" (UID: \"c79bde84-3cd1-43fe-86bb-2de25276e513\") " pod="openshift-must-gather-tzk4j/must-gather-st8j9"
Oct 08 09:29:06 crc kubenswrapper[4958]: I1008 09:29:06.331067 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzk4j/must-gather-st8j9"
Oct 08 09:29:06 crc kubenswrapper[4958]: I1008 09:29:06.862457 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tzk4j/must-gather-st8j9"]
Oct 08 09:29:07 crc kubenswrapper[4958]: I1008 09:29:07.160631 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzk4j/must-gather-st8j9" event={"ID":"c79bde84-3cd1-43fe-86bb-2de25276e513","Type":"ContainerStarted","Data":"2c8a0df466c272e58a7b2a3f83ffa50b46926c3cddb295c1051c5ce9c86f92c2"}
Oct 08 09:29:07 crc kubenswrapper[4958]: I1008 09:29:07.588609 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:29:07 crc kubenswrapper[4958]: E1008 09:29:07.589281 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:29:12 crc kubenswrapper[4958]: I1008 09:29:12.249385 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzk4j/must-gather-st8j9" event={"ID":"c79bde84-3cd1-43fe-86bb-2de25276e513","Type":"ContainerStarted","Data":"f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82"}
Oct 08 09:29:12 crc kubenswrapper[4958]: I1008 09:29:12.250789 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzk4j/must-gather-st8j9" event={"ID":"c79bde84-3cd1-43fe-86bb-2de25276e513","Type":"ContainerStarted","Data":"edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790"}
Oct 08 09:29:12 crc kubenswrapper[4958]: I1008 09:29:12.273490 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tzk4j/must-gather-st8j9" podStartSLOduration=2.832259312 podStartE2EDuration="7.273470304s" podCreationTimestamp="2025-10-08 09:29:05 +0000 UTC" firstStartedPulling="2025-10-08 09:29:06.870213023 +0000 UTC m=+10489.999905634" lastFinishedPulling="2025-10-08 09:29:11.311424025 +0000 UTC m=+10494.441116626" observedRunningTime="2025-10-08 09:29:12.270057701 +0000 UTC m=+10495.399750342" watchObservedRunningTime="2025-10-08 09:29:12.273470304 +0000 UTC m=+10495.403162905"
Oct 08 09:29:16 crc kubenswrapper[4958]: I1008 09:29:16.427487 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tzk4j/crc-debug-npwmq"]
Oct 08 09:29:16 crc kubenswrapper[4958]: I1008 09:29:16.430278 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzk4j/crc-debug-npwmq"
Oct 08 09:29:16 crc kubenswrapper[4958]: I1008 09:29:16.575271 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvltr\" (UniqueName: \"kubernetes.io/projected/69e25545-4799-4347-8de3-66f12d03c6c7-kube-api-access-wvltr\") pod \"crc-debug-npwmq\" (UID: \"69e25545-4799-4347-8de3-66f12d03c6c7\") " pod="openshift-must-gather-tzk4j/crc-debug-npwmq"
Oct 08 09:29:16 crc kubenswrapper[4958]: I1008 09:29:16.575910 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69e25545-4799-4347-8de3-66f12d03c6c7-host\") pod \"crc-debug-npwmq\" (UID: \"69e25545-4799-4347-8de3-66f12d03c6c7\") " pod="openshift-must-gather-tzk4j/crc-debug-npwmq"
Oct 08 09:29:16 crc kubenswrapper[4958]: I1008 09:29:16.678555 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvltr\" (UniqueName: \"kubernetes.io/projected/69e25545-4799-4347-8de3-66f12d03c6c7-kube-api-access-wvltr\") pod \"crc-debug-npwmq\" (UID: \"69e25545-4799-4347-8de3-66f12d03c6c7\") " pod="openshift-must-gather-tzk4j/crc-debug-npwmq"
Oct 08 09:29:16 crc kubenswrapper[4958]: I1008 09:29:16.678689 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69e25545-4799-4347-8de3-66f12d03c6c7-host\") pod \"crc-debug-npwmq\" (UID: \"69e25545-4799-4347-8de3-66f12d03c6c7\") " pod="openshift-must-gather-tzk4j/crc-debug-npwmq"
Oct 08 09:29:16 crc kubenswrapper[4958]: I1008 09:29:16.678833 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69e25545-4799-4347-8de3-66f12d03c6c7-host\") pod \"crc-debug-npwmq\" (UID: \"69e25545-4799-4347-8de3-66f12d03c6c7\") " pod="openshift-must-gather-tzk4j/crc-debug-npwmq"
Oct 08 09:29:16 crc kubenswrapper[4958]: I1008 09:29:16.698814 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvltr\" (UniqueName: \"kubernetes.io/projected/69e25545-4799-4347-8de3-66f12d03c6c7-kube-api-access-wvltr\") pod \"crc-debug-npwmq\" (UID: \"69e25545-4799-4347-8de3-66f12d03c6c7\") " pod="openshift-must-gather-tzk4j/crc-debug-npwmq"
Oct 08 09:29:16 crc kubenswrapper[4958]: I1008 09:29:16.750439 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzk4j/crc-debug-npwmq"
Oct 08 09:29:18 crc kubenswrapper[4958]: I1008 09:29:18.316656 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzk4j/crc-debug-npwmq" event={"ID":"69e25545-4799-4347-8de3-66f12d03c6c7","Type":"ContainerStarted","Data":"1de506096114787c6e3789b6662ed0203cd9c8c060d399a76d8e372a6a947b23"}
Oct 08 09:29:18 crc kubenswrapper[4958]: I1008 09:29:18.384968 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8lgr5"]
Oct 08 09:29:18 crc kubenswrapper[4958]: I1008 09:29:18.387167 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8lgr5"
Oct 08 09:29:18 crc kubenswrapper[4958]: I1008 09:29:18.401140 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8lgr5"]
Oct 08 09:29:18 crc kubenswrapper[4958]: I1008 09:29:18.516859 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/907a22d2-f6c1-47e3-a1b2-04768461c35d-utilities\") pod \"redhat-operators-8lgr5\" (UID: \"907a22d2-f6c1-47e3-a1b2-04768461c35d\") " pod="openshift-marketplace/redhat-operators-8lgr5"
Oct 08 09:29:18 crc kubenswrapper[4958]: I1008 09:29:18.517091 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrtqz\" (UniqueName: \"kubernetes.io/projected/907a22d2-f6c1-47e3-a1b2-04768461c35d-kube-api-access-hrtqz\") pod \"redhat-operators-8lgr5\" (UID: \"907a22d2-f6c1-47e3-a1b2-04768461c35d\") " pod="openshift-marketplace/redhat-operators-8lgr5"
Oct 08 09:29:18 crc kubenswrapper[4958]: I1008 09:29:18.517223 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/907a22d2-f6c1-47e3-a1b2-04768461c35d-catalog-content\") pod \"redhat-operators-8lgr5\" (UID: \"907a22d2-f6c1-47e3-a1b2-04768461c35d\") " pod="openshift-marketplace/redhat-operators-8lgr5"
Oct 08 09:29:18 crc kubenswrapper[4958]: I1008 09:29:18.619539 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrtqz\" (UniqueName: \"kubernetes.io/projected/907a22d2-f6c1-47e3-a1b2-04768461c35d-kube-api-access-hrtqz\") pod \"redhat-operators-8lgr5\" (UID: \"907a22d2-f6c1-47e3-a1b2-04768461c35d\") " pod="openshift-marketplace/redhat-operators-8lgr5"
Oct 08 09:29:18 crc kubenswrapper[4958]: I1008 09:29:18.619968 4958 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/907a22d2-f6c1-47e3-a1b2-04768461c35d-catalog-content\") pod \"redhat-operators-8lgr5\" (UID: \"907a22d2-f6c1-47e3-a1b2-04768461c35d\") " pod="openshift-marketplace/redhat-operators-8lgr5" Oct 08 09:29:18 crc kubenswrapper[4958]: I1008 09:29:18.620204 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/907a22d2-f6c1-47e3-a1b2-04768461c35d-utilities\") pod \"redhat-operators-8lgr5\" (UID: \"907a22d2-f6c1-47e3-a1b2-04768461c35d\") " pod="openshift-marketplace/redhat-operators-8lgr5" Oct 08 09:29:18 crc kubenswrapper[4958]: I1008 09:29:18.620562 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/907a22d2-f6c1-47e3-a1b2-04768461c35d-catalog-content\") pod \"redhat-operators-8lgr5\" (UID: \"907a22d2-f6c1-47e3-a1b2-04768461c35d\") " pod="openshift-marketplace/redhat-operators-8lgr5" Oct 08 09:29:18 crc kubenswrapper[4958]: I1008 09:29:18.620574 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/907a22d2-f6c1-47e3-a1b2-04768461c35d-utilities\") pod \"redhat-operators-8lgr5\" (UID: \"907a22d2-f6c1-47e3-a1b2-04768461c35d\") " pod="openshift-marketplace/redhat-operators-8lgr5" Oct 08 09:29:18 crc kubenswrapper[4958]: I1008 09:29:18.639855 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrtqz\" (UniqueName: \"kubernetes.io/projected/907a22d2-f6c1-47e3-a1b2-04768461c35d-kube-api-access-hrtqz\") pod \"redhat-operators-8lgr5\" (UID: \"907a22d2-f6c1-47e3-a1b2-04768461c35d\") " pod="openshift-marketplace/redhat-operators-8lgr5" Oct 08 09:29:18 crc kubenswrapper[4958]: I1008 09:29:18.726206 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8lgr5" Oct 08 09:29:19 crc kubenswrapper[4958]: I1008 09:29:19.237184 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8lgr5"] Oct 08 09:29:19 crc kubenswrapper[4958]: I1008 09:29:19.336589 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lgr5" event={"ID":"907a22d2-f6c1-47e3-a1b2-04768461c35d","Type":"ContainerStarted","Data":"febba77876db992cff48c5b281476d60c54a14fe247c55302502f1fc6b86221a"} Oct 08 09:29:19 crc kubenswrapper[4958]: I1008 09:29:19.579043 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607" Oct 08 09:29:19 crc kubenswrapper[4958]: E1008 09:29:19.579331 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:29:20 crc kubenswrapper[4958]: I1008 09:29:20.377766 4958 generic.go:334] "Generic (PLEG): container finished" podID="907a22d2-f6c1-47e3-a1b2-04768461c35d" containerID="39f28b7dd2bd2e57cfac6c925d11a0d1f8bb10112f173dbfb0b885fc0fe727f5" exitCode=0 Oct 08 09:29:20 crc kubenswrapper[4958]: I1008 09:29:20.378215 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lgr5" event={"ID":"907a22d2-f6c1-47e3-a1b2-04768461c35d","Type":"ContainerDied","Data":"39f28b7dd2bd2e57cfac6c925d11a0d1f8bb10112f173dbfb0b885fc0fe727f5"} Oct 08 09:29:22 crc kubenswrapper[4958]: I1008 09:29:22.400272 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lgr5" 
event={"ID":"907a22d2-f6c1-47e3-a1b2-04768461c35d","Type":"ContainerStarted","Data":"592a990c0c31d41643c2017eb4493699ce780f39d44f4be092b0b64c0c7ef77b"} Oct 08 09:29:26 crc kubenswrapper[4958]: I1008 09:29:26.443197 4958 generic.go:334] "Generic (PLEG): container finished" podID="907a22d2-f6c1-47e3-a1b2-04768461c35d" containerID="592a990c0c31d41643c2017eb4493699ce780f39d44f4be092b0b64c0c7ef77b" exitCode=0 Oct 08 09:29:26 crc kubenswrapper[4958]: I1008 09:29:26.443299 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lgr5" event={"ID":"907a22d2-f6c1-47e3-a1b2-04768461c35d","Type":"ContainerDied","Data":"592a990c0c31d41643c2017eb4493699ce780f39d44f4be092b0b64c0c7ef77b"} Oct 08 09:29:30 crc kubenswrapper[4958]: I1008 09:29:30.481618 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzk4j/crc-debug-npwmq" event={"ID":"69e25545-4799-4347-8de3-66f12d03c6c7","Type":"ContainerStarted","Data":"44f70f66e3737d42cc2d83513195e2272a54f57e68bf17763abfb03e2bcc4c94"} Oct 08 09:29:30 crc kubenswrapper[4958]: I1008 09:29:30.504569 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tzk4j/crc-debug-npwmq" podStartSLOduration=1.6492205659999999 podStartE2EDuration="14.50453745s" podCreationTimestamp="2025-10-08 09:29:16 +0000 UTC" firstStartedPulling="2025-10-08 09:29:17.348020175 +0000 UTC m=+10500.477712786" lastFinishedPulling="2025-10-08 09:29:30.203337069 +0000 UTC m=+10513.333029670" observedRunningTime="2025-10-08 09:29:30.493835489 +0000 UTC m=+10513.623528130" watchObservedRunningTime="2025-10-08 09:29:30.50453745 +0000 UTC m=+10513.634230051" Oct 08 09:29:31 crc kubenswrapper[4958]: I1008 09:29:31.495912 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lgr5" 
event={"ID":"907a22d2-f6c1-47e3-a1b2-04768461c35d","Type":"ContainerStarted","Data":"f0754847c26e9442c606c5d13ab403ab74418863b3f754202fd83427e52308a4"} Oct 08 09:29:31 crc kubenswrapper[4958]: I1008 09:29:31.529287 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8lgr5" podStartSLOduration=3.241000848 podStartE2EDuration="13.529271072s" podCreationTimestamp="2025-10-08 09:29:18 +0000 UTC" firstStartedPulling="2025-10-08 09:29:20.410103581 +0000 UTC m=+10503.539796172" lastFinishedPulling="2025-10-08 09:29:30.698373795 +0000 UTC m=+10513.828066396" observedRunningTime="2025-10-08 09:29:31.526238059 +0000 UTC m=+10514.655930660" watchObservedRunningTime="2025-10-08 09:29:31.529271072 +0000 UTC m=+10514.658963673" Oct 08 09:29:31 crc kubenswrapper[4958]: I1008 09:29:31.577589 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607" Oct 08 09:29:31 crc kubenswrapper[4958]: E1008 09:29:31.577855 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:29:38 crc kubenswrapper[4958]: I1008 09:29:38.783014 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8lgr5" Oct 08 09:29:38 crc kubenswrapper[4958]: I1008 09:29:38.783432 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8lgr5" Oct 08 09:29:38 crc kubenswrapper[4958]: I1008 09:29:38.838559 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-8lgr5" Oct 08 09:29:39 crc kubenswrapper[4958]: I1008 09:29:39.633324 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8lgr5" Oct 08 09:29:39 crc kubenswrapper[4958]: I1008 09:29:39.688656 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8lgr5"] Oct 08 09:29:41 crc kubenswrapper[4958]: I1008 09:29:41.592088 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8lgr5" podUID="907a22d2-f6c1-47e3-a1b2-04768461c35d" containerName="registry-server" containerID="cri-o://f0754847c26e9442c606c5d13ab403ab74418863b3f754202fd83427e52308a4" gracePeriod=2 Oct 08 09:29:42 crc kubenswrapper[4958]: I1008 09:29:42.158054 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8lgr5" Oct 08 09:29:42 crc kubenswrapper[4958]: I1008 09:29:42.200895 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/907a22d2-f6c1-47e3-a1b2-04768461c35d-utilities\") pod \"907a22d2-f6c1-47e3-a1b2-04768461c35d\" (UID: \"907a22d2-f6c1-47e3-a1b2-04768461c35d\") " Oct 08 09:29:42 crc kubenswrapper[4958]: I1008 09:29:42.201206 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/907a22d2-f6c1-47e3-a1b2-04768461c35d-catalog-content\") pod \"907a22d2-f6c1-47e3-a1b2-04768461c35d\" (UID: \"907a22d2-f6c1-47e3-a1b2-04768461c35d\") " Oct 08 09:29:42 crc kubenswrapper[4958]: I1008 09:29:42.201639 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrtqz\" (UniqueName: \"kubernetes.io/projected/907a22d2-f6c1-47e3-a1b2-04768461c35d-kube-api-access-hrtqz\") pod \"907a22d2-f6c1-47e3-a1b2-04768461c35d\" (UID: 
\"907a22d2-f6c1-47e3-a1b2-04768461c35d\") " Oct 08 09:29:42 crc kubenswrapper[4958]: I1008 09:29:42.224795 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907a22d2-f6c1-47e3-a1b2-04768461c35d-kube-api-access-hrtqz" (OuterVolumeSpecName: "kube-api-access-hrtqz") pod "907a22d2-f6c1-47e3-a1b2-04768461c35d" (UID: "907a22d2-f6c1-47e3-a1b2-04768461c35d"). InnerVolumeSpecName "kube-api-access-hrtqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:29:42 crc kubenswrapper[4958]: I1008 09:29:42.236503 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/907a22d2-f6c1-47e3-a1b2-04768461c35d-utilities" (OuterVolumeSpecName: "utilities") pod "907a22d2-f6c1-47e3-a1b2-04768461c35d" (UID: "907a22d2-f6c1-47e3-a1b2-04768461c35d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:29:42 crc kubenswrapper[4958]: I1008 09:29:42.305334 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/907a22d2-f6c1-47e3-a1b2-04768461c35d-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 09:29:42 crc kubenswrapper[4958]: I1008 09:29:42.305365 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrtqz\" (UniqueName: \"kubernetes.io/projected/907a22d2-f6c1-47e3-a1b2-04768461c35d-kube-api-access-hrtqz\") on node \"crc\" DevicePath \"\"" Oct 08 09:29:42 crc kubenswrapper[4958]: I1008 09:29:42.316731 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/907a22d2-f6c1-47e3-a1b2-04768461c35d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "907a22d2-f6c1-47e3-a1b2-04768461c35d" (UID: "907a22d2-f6c1-47e3-a1b2-04768461c35d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:29:42 crc kubenswrapper[4958]: I1008 09:29:42.407564 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/907a22d2-f6c1-47e3-a1b2-04768461c35d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 09:29:42 crc kubenswrapper[4958]: I1008 09:29:42.601373 4958 generic.go:334] "Generic (PLEG): container finished" podID="907a22d2-f6c1-47e3-a1b2-04768461c35d" containerID="f0754847c26e9442c606c5d13ab403ab74418863b3f754202fd83427e52308a4" exitCode=0 Oct 08 09:29:42 crc kubenswrapper[4958]: I1008 09:29:42.601418 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lgr5" event={"ID":"907a22d2-f6c1-47e3-a1b2-04768461c35d","Type":"ContainerDied","Data":"f0754847c26e9442c606c5d13ab403ab74418863b3f754202fd83427e52308a4"} Oct 08 09:29:42 crc kubenswrapper[4958]: I1008 09:29:42.601423 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8lgr5" Oct 08 09:29:42 crc kubenswrapper[4958]: I1008 09:29:42.601446 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lgr5" event={"ID":"907a22d2-f6c1-47e3-a1b2-04768461c35d","Type":"ContainerDied","Data":"febba77876db992cff48c5b281476d60c54a14fe247c55302502f1fc6b86221a"} Oct 08 09:29:42 crc kubenswrapper[4958]: I1008 09:29:42.601465 4958 scope.go:117] "RemoveContainer" containerID="f0754847c26e9442c606c5d13ab403ab74418863b3f754202fd83427e52308a4" Oct 08 09:29:44 crc kubenswrapper[4958]: I1008 09:29:44.418663 4958 scope.go:117] "RemoveContainer" containerID="592a990c0c31d41643c2017eb4493699ce780f39d44f4be092b0b64c0c7ef77b" Oct 08 09:29:44 crc kubenswrapper[4958]: I1008 09:29:44.444207 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8lgr5"] Oct 08 09:29:44 crc kubenswrapper[4958]: I1008 09:29:44.459124 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8lgr5"] Oct 08 09:29:44 crc kubenswrapper[4958]: I1008 09:29:44.576700 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607" Oct 08 09:29:44 crc kubenswrapper[4958]: E1008 09:29:44.577057 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:29:44 crc kubenswrapper[4958]: I1008 09:29:44.597560 4958 scope.go:117] "RemoveContainer" containerID="39f28b7dd2bd2e57cfac6c925d11a0d1f8bb10112f173dbfb0b885fc0fe727f5" Oct 08 09:29:44 crc kubenswrapper[4958]: I1008 
09:29:44.643507 4958 scope.go:117] "RemoveContainer" containerID="f0754847c26e9442c606c5d13ab403ab74418863b3f754202fd83427e52308a4" Oct 08 09:29:44 crc kubenswrapper[4958]: E1008 09:29:44.643922 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0754847c26e9442c606c5d13ab403ab74418863b3f754202fd83427e52308a4\": container with ID starting with f0754847c26e9442c606c5d13ab403ab74418863b3f754202fd83427e52308a4 not found: ID does not exist" containerID="f0754847c26e9442c606c5d13ab403ab74418863b3f754202fd83427e52308a4" Oct 08 09:29:44 crc kubenswrapper[4958]: I1008 09:29:44.644106 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0754847c26e9442c606c5d13ab403ab74418863b3f754202fd83427e52308a4"} err="failed to get container status \"f0754847c26e9442c606c5d13ab403ab74418863b3f754202fd83427e52308a4\": rpc error: code = NotFound desc = could not find container \"f0754847c26e9442c606c5d13ab403ab74418863b3f754202fd83427e52308a4\": container with ID starting with f0754847c26e9442c606c5d13ab403ab74418863b3f754202fd83427e52308a4 not found: ID does not exist" Oct 08 09:29:44 crc kubenswrapper[4958]: I1008 09:29:44.644135 4958 scope.go:117] "RemoveContainer" containerID="592a990c0c31d41643c2017eb4493699ce780f39d44f4be092b0b64c0c7ef77b" Oct 08 09:29:44 crc kubenswrapper[4958]: E1008 09:29:44.644551 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"592a990c0c31d41643c2017eb4493699ce780f39d44f4be092b0b64c0c7ef77b\": container with ID starting with 592a990c0c31d41643c2017eb4493699ce780f39d44f4be092b0b64c0c7ef77b not found: ID does not exist" containerID="592a990c0c31d41643c2017eb4493699ce780f39d44f4be092b0b64c0c7ef77b" Oct 08 09:29:44 crc kubenswrapper[4958]: I1008 09:29:44.644601 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"592a990c0c31d41643c2017eb4493699ce780f39d44f4be092b0b64c0c7ef77b"} err="failed to get container status \"592a990c0c31d41643c2017eb4493699ce780f39d44f4be092b0b64c0c7ef77b\": rpc error: code = NotFound desc = could not find container \"592a990c0c31d41643c2017eb4493699ce780f39d44f4be092b0b64c0c7ef77b\": container with ID starting with 592a990c0c31d41643c2017eb4493699ce780f39d44f4be092b0b64c0c7ef77b not found: ID does not exist" Oct 08 09:29:44 crc kubenswrapper[4958]: I1008 09:29:44.644632 4958 scope.go:117] "RemoveContainer" containerID="39f28b7dd2bd2e57cfac6c925d11a0d1f8bb10112f173dbfb0b885fc0fe727f5" Oct 08 09:29:44 crc kubenswrapper[4958]: E1008 09:29:44.645008 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39f28b7dd2bd2e57cfac6c925d11a0d1f8bb10112f173dbfb0b885fc0fe727f5\": container with ID starting with 39f28b7dd2bd2e57cfac6c925d11a0d1f8bb10112f173dbfb0b885fc0fe727f5 not found: ID does not exist" containerID="39f28b7dd2bd2e57cfac6c925d11a0d1f8bb10112f173dbfb0b885fc0fe727f5" Oct 08 09:29:44 crc kubenswrapper[4958]: I1008 09:29:44.645043 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f28b7dd2bd2e57cfac6c925d11a0d1f8bb10112f173dbfb0b885fc0fe727f5"} err="failed to get container status \"39f28b7dd2bd2e57cfac6c925d11a0d1f8bb10112f173dbfb0b885fc0fe727f5\": rpc error: code = NotFound desc = could not find container \"39f28b7dd2bd2e57cfac6c925d11a0d1f8bb10112f173dbfb0b885fc0fe727f5\": container with ID starting with 39f28b7dd2bd2e57cfac6c925d11a0d1f8bb10112f173dbfb0b885fc0fe727f5 not found: ID does not exist" Oct 08 09:29:45 crc kubenswrapper[4958]: I1008 09:29:45.591486 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="907a22d2-f6c1-47e3-a1b2-04768461c35d" path="/var/lib/kubelet/pods/907a22d2-f6c1-47e3-a1b2-04768461c35d/volumes" Oct 08 09:29:58 crc kubenswrapper[4958]: I1008 
09:29:58.576397 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607" Oct 08 09:29:58 crc kubenswrapper[4958]: E1008 09:29:58.577302 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.175401 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6"] Oct 08 09:30:00 crc kubenswrapper[4958]: E1008 09:30:00.176899 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907a22d2-f6c1-47e3-a1b2-04768461c35d" containerName="extract-utilities" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.176920 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="907a22d2-f6c1-47e3-a1b2-04768461c35d" containerName="extract-utilities" Oct 08 09:30:00 crc kubenswrapper[4958]: E1008 09:30:00.176974 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907a22d2-f6c1-47e3-a1b2-04768461c35d" containerName="registry-server" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.176983 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="907a22d2-f6c1-47e3-a1b2-04768461c35d" containerName="registry-server" Oct 08 09:30:00 crc kubenswrapper[4958]: E1008 09:30:00.176998 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907a22d2-f6c1-47e3-a1b2-04768461c35d" containerName="extract-content" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.177006 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="907a22d2-f6c1-47e3-a1b2-04768461c35d" 
containerName="extract-content" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.177490 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="907a22d2-f6c1-47e3-a1b2-04768461c35d" containerName="registry-server" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.178931 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.187694 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6"] Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.189103 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.189426 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.286222 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/919afe33-3558-4cf5-a0da-a2b26ac0d605-config-volume\") pod \"collect-profiles-29331930-qn7l6\" (UID: \"919afe33-3558-4cf5-a0da-a2b26ac0d605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.286349 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/919afe33-3558-4cf5-a0da-a2b26ac0d605-secret-volume\") pod \"collect-profiles-29331930-qn7l6\" (UID: \"919afe33-3558-4cf5-a0da-a2b26ac0d605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.286419 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w79h\" (UniqueName: \"kubernetes.io/projected/919afe33-3558-4cf5-a0da-a2b26ac0d605-kube-api-access-6w79h\") pod \"collect-profiles-29331930-qn7l6\" (UID: \"919afe33-3558-4cf5-a0da-a2b26ac0d605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.389666 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/919afe33-3558-4cf5-a0da-a2b26ac0d605-config-volume\") pod \"collect-profiles-29331930-qn7l6\" (UID: \"919afe33-3558-4cf5-a0da-a2b26ac0d605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.389897 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/919afe33-3558-4cf5-a0da-a2b26ac0d605-secret-volume\") pod \"collect-profiles-29331930-qn7l6\" (UID: \"919afe33-3558-4cf5-a0da-a2b26ac0d605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.389986 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w79h\" (UniqueName: \"kubernetes.io/projected/919afe33-3558-4cf5-a0da-a2b26ac0d605-kube-api-access-6w79h\") pod \"collect-profiles-29331930-qn7l6\" (UID: \"919afe33-3558-4cf5-a0da-a2b26ac0d605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.393532 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/919afe33-3558-4cf5-a0da-a2b26ac0d605-config-volume\") pod \"collect-profiles-29331930-qn7l6\" (UID: \"919afe33-3558-4cf5-a0da-a2b26ac0d605\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.411159 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/919afe33-3558-4cf5-a0da-a2b26ac0d605-secret-volume\") pod \"collect-profiles-29331930-qn7l6\" (UID: \"919afe33-3558-4cf5-a0da-a2b26ac0d605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.412003 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w79h\" (UniqueName: \"kubernetes.io/projected/919afe33-3558-4cf5-a0da-a2b26ac0d605-kube-api-access-6w79h\") pod \"collect-profiles-29331930-qn7l6\" (UID: \"919afe33-3558-4cf5-a0da-a2b26ac0d605\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" Oct 08 09:30:00 crc kubenswrapper[4958]: I1008 09:30:00.519240 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" Oct 08 09:30:01 crc kubenswrapper[4958]: I1008 09:30:01.054504 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6"] Oct 08 09:30:01 crc kubenswrapper[4958]: I1008 09:30:01.840337 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" event={"ID":"919afe33-3558-4cf5-a0da-a2b26ac0d605","Type":"ContainerStarted","Data":"8a2441e48bdfa224991cad81672092c5658c265360915936d0f9a28f7e042e65"} Oct 08 09:30:01 crc kubenswrapper[4958]: I1008 09:30:01.840809 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" event={"ID":"919afe33-3558-4cf5-a0da-a2b26ac0d605","Type":"ContainerStarted","Data":"840e1c7e0ba337906b73e7794dba0808e3e731b7fb020d3c06c82168468d53cc"} Oct 08 09:30:01 crc kubenswrapper[4958]: I1008 09:30:01.867878 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" podStartSLOduration=1.867857571 podStartE2EDuration="1.867857571s" podCreationTimestamp="2025-10-08 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 09:30:01.865123746 +0000 UTC m=+10544.994816347" watchObservedRunningTime="2025-10-08 09:30:01.867857571 +0000 UTC m=+10544.997550172" Oct 08 09:30:02 crc kubenswrapper[4958]: I1008 09:30:02.854786 4958 generic.go:334] "Generic (PLEG): container finished" podID="919afe33-3558-4cf5-a0da-a2b26ac0d605" containerID="8a2441e48bdfa224991cad81672092c5658c265360915936d0f9a28f7e042e65" exitCode=0 Oct 08 09:30:02 crc kubenswrapper[4958]: I1008 09:30:02.854895 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" event={"ID":"919afe33-3558-4cf5-a0da-a2b26ac0d605","Type":"ContainerDied","Data":"8a2441e48bdfa224991cad81672092c5658c265360915936d0f9a28f7e042e65"} Oct 08 09:30:04 crc kubenswrapper[4958]: I1008 09:30:04.289102 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" Oct 08 09:30:04 crc kubenswrapper[4958]: I1008 09:30:04.417521 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/919afe33-3558-4cf5-a0da-a2b26ac0d605-config-volume\") pod \"919afe33-3558-4cf5-a0da-a2b26ac0d605\" (UID: \"919afe33-3558-4cf5-a0da-a2b26ac0d605\") " Oct 08 09:30:04 crc kubenswrapper[4958]: I1008 09:30:04.417635 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w79h\" (UniqueName: \"kubernetes.io/projected/919afe33-3558-4cf5-a0da-a2b26ac0d605-kube-api-access-6w79h\") pod \"919afe33-3558-4cf5-a0da-a2b26ac0d605\" (UID: \"919afe33-3558-4cf5-a0da-a2b26ac0d605\") " Oct 08 09:30:04 crc kubenswrapper[4958]: I1008 09:30:04.417681 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/919afe33-3558-4cf5-a0da-a2b26ac0d605-secret-volume\") pod \"919afe33-3558-4cf5-a0da-a2b26ac0d605\" (UID: \"919afe33-3558-4cf5-a0da-a2b26ac0d605\") " Oct 08 09:30:04 crc kubenswrapper[4958]: I1008 09:30:04.418491 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/919afe33-3558-4cf5-a0da-a2b26ac0d605-config-volume" (OuterVolumeSpecName: "config-volume") pod "919afe33-3558-4cf5-a0da-a2b26ac0d605" (UID: "919afe33-3558-4cf5-a0da-a2b26ac0d605"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 08 09:30:04 crc kubenswrapper[4958]: I1008 09:30:04.426100 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919afe33-3558-4cf5-a0da-a2b26ac0d605-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "919afe33-3558-4cf5-a0da-a2b26ac0d605" (UID: "919afe33-3558-4cf5-a0da-a2b26ac0d605"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 08 09:30:04 crc kubenswrapper[4958]: I1008 09:30:04.430206 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/919afe33-3558-4cf5-a0da-a2b26ac0d605-kube-api-access-6w79h" (OuterVolumeSpecName: "kube-api-access-6w79h") pod "919afe33-3558-4cf5-a0da-a2b26ac0d605" (UID: "919afe33-3558-4cf5-a0da-a2b26ac0d605"). InnerVolumeSpecName "kube-api-access-6w79h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:30:04 crc kubenswrapper[4958]: I1008 09:30:04.519833 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/919afe33-3558-4cf5-a0da-a2b26ac0d605-config-volume\") on node \"crc\" DevicePath \"\"" Oct 08 09:30:04 crc kubenswrapper[4958]: I1008 09:30:04.519875 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w79h\" (UniqueName: \"kubernetes.io/projected/919afe33-3558-4cf5-a0da-a2b26ac0d605-kube-api-access-6w79h\") on node \"crc\" DevicePath \"\"" Oct 08 09:30:04 crc kubenswrapper[4958]: I1008 09:30:04.519893 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/919afe33-3558-4cf5-a0da-a2b26ac0d605-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 08 09:30:04 crc kubenswrapper[4958]: I1008 09:30:04.878202 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" 
event={"ID":"919afe33-3558-4cf5-a0da-a2b26ac0d605","Type":"ContainerDied","Data":"840e1c7e0ba337906b73e7794dba0808e3e731b7fb020d3c06c82168468d53cc"} Oct 08 09:30:04 crc kubenswrapper[4958]: I1008 09:30:04.878560 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="840e1c7e0ba337906b73e7794dba0808e3e731b7fb020d3c06c82168468d53cc" Oct 08 09:30:04 crc kubenswrapper[4958]: I1008 09:30:04.878627 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29331930-qn7l6" Oct 08 09:30:04 crc kubenswrapper[4958]: I1008 09:30:04.948372 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx"] Oct 08 09:30:04 crc kubenswrapper[4958]: I1008 09:30:04.962472 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29331885-d28dx"] Oct 08 09:30:05 crc kubenswrapper[4958]: I1008 09:30:05.591645 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13a0c8de-83e7-47e6-9234-92c1886af049" path="/var/lib/kubelet/pods/13a0c8de-83e7-47e6-9234-92c1886af049/volumes" Oct 08 09:30:10 crc kubenswrapper[4958]: I1008 09:30:10.576666 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607" Oct 08 09:30:10 crc kubenswrapper[4958]: E1008 09:30:10.577625 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:30:14 crc kubenswrapper[4958]: I1008 09:30:14.512323 4958 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/certified-operators-tbcjm"] Oct 08 09:30:14 crc kubenswrapper[4958]: E1008 09:30:14.513267 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919afe33-3558-4cf5-a0da-a2b26ac0d605" containerName="collect-profiles" Oct 08 09:30:14 crc kubenswrapper[4958]: I1008 09:30:14.513281 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="919afe33-3558-4cf5-a0da-a2b26ac0d605" containerName="collect-profiles" Oct 08 09:30:14 crc kubenswrapper[4958]: I1008 09:30:14.513499 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="919afe33-3558-4cf5-a0da-a2b26ac0d605" containerName="collect-profiles" Oct 08 09:30:14 crc kubenswrapper[4958]: I1008 09:30:14.515082 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:14 crc kubenswrapper[4958]: I1008 09:30:14.541278 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tbcjm"] Oct 08 09:30:14 crc kubenswrapper[4958]: I1008 09:30:14.635279 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da24d86-20dc-4037-8557-2cff4424736c-catalog-content\") pod \"certified-operators-tbcjm\" (UID: \"8da24d86-20dc-4037-8557-2cff4424736c\") " pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:14 crc kubenswrapper[4958]: I1008 09:30:14.635568 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqnxz\" (UniqueName: \"kubernetes.io/projected/8da24d86-20dc-4037-8557-2cff4424736c-kube-api-access-hqnxz\") pod \"certified-operators-tbcjm\" (UID: \"8da24d86-20dc-4037-8557-2cff4424736c\") " pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:14 crc kubenswrapper[4958]: I1008 09:30:14.635633 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da24d86-20dc-4037-8557-2cff4424736c-utilities\") pod \"certified-operators-tbcjm\" (UID: \"8da24d86-20dc-4037-8557-2cff4424736c\") " pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:14 crc kubenswrapper[4958]: I1008 09:30:14.737288 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da24d86-20dc-4037-8557-2cff4424736c-catalog-content\") pod \"certified-operators-tbcjm\" (UID: \"8da24d86-20dc-4037-8557-2cff4424736c\") " pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:14 crc kubenswrapper[4958]: I1008 09:30:14.737338 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqnxz\" (UniqueName: \"kubernetes.io/projected/8da24d86-20dc-4037-8557-2cff4424736c-kube-api-access-hqnxz\") pod \"certified-operators-tbcjm\" (UID: \"8da24d86-20dc-4037-8557-2cff4424736c\") " pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:14 crc kubenswrapper[4958]: I1008 09:30:14.737393 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da24d86-20dc-4037-8557-2cff4424736c-utilities\") pod \"certified-operators-tbcjm\" (UID: \"8da24d86-20dc-4037-8557-2cff4424736c\") " pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:14 crc kubenswrapper[4958]: I1008 09:30:14.738041 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da24d86-20dc-4037-8557-2cff4424736c-utilities\") pod \"certified-operators-tbcjm\" (UID: \"8da24d86-20dc-4037-8557-2cff4424736c\") " pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:14 crc kubenswrapper[4958]: I1008 09:30:14.738073 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da24d86-20dc-4037-8557-2cff4424736c-catalog-content\") pod \"certified-operators-tbcjm\" (UID: \"8da24d86-20dc-4037-8557-2cff4424736c\") " pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:14 crc kubenswrapper[4958]: I1008 09:30:14.769105 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqnxz\" (UniqueName: \"kubernetes.io/projected/8da24d86-20dc-4037-8557-2cff4424736c-kube-api-access-hqnxz\") pod \"certified-operators-tbcjm\" (UID: \"8da24d86-20dc-4037-8557-2cff4424736c\") " pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:14 crc kubenswrapper[4958]: I1008 09:30:14.837735 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:15 crc kubenswrapper[4958]: I1008 09:30:15.467543 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tbcjm"] Oct 08 09:30:15 crc kubenswrapper[4958]: I1008 09:30:15.998028 4958 generic.go:334] "Generic (PLEG): container finished" podID="8da24d86-20dc-4037-8557-2cff4424736c" containerID="1f2001bb8b7cafb299f0413cddfe7910a099c7d555682ae0a1ad719b4763d09c" exitCode=0 Oct 08 09:30:15 crc kubenswrapper[4958]: I1008 09:30:15.998340 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbcjm" event={"ID":"8da24d86-20dc-4037-8557-2cff4424736c","Type":"ContainerDied","Data":"1f2001bb8b7cafb299f0413cddfe7910a099c7d555682ae0a1ad719b4763d09c"} Oct 08 09:30:15 crc kubenswrapper[4958]: I1008 09:30:15.998372 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbcjm" event={"ID":"8da24d86-20dc-4037-8557-2cff4424736c","Type":"ContainerStarted","Data":"70473704e36e7c95ec57ab05ade5919054080f4d119be6b78229351242988d35"} Oct 08 09:30:16 crc kubenswrapper[4958]: I1008 
09:30:16.000876 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 09:30:17 crc kubenswrapper[4958]: I1008 09:30:17.014380 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbcjm" event={"ID":"8da24d86-20dc-4037-8557-2cff4424736c","Type":"ContainerStarted","Data":"301a872f01ff0916215be48124e5464543b2e82fa12d79815db6c223f5a07c21"} Oct 08 09:30:18 crc kubenswrapper[4958]: I1008 09:30:18.037538 4958 generic.go:334] "Generic (PLEG): container finished" podID="8da24d86-20dc-4037-8557-2cff4424736c" containerID="301a872f01ff0916215be48124e5464543b2e82fa12d79815db6c223f5a07c21" exitCode=0 Oct 08 09:30:18 crc kubenswrapper[4958]: I1008 09:30:18.037979 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbcjm" event={"ID":"8da24d86-20dc-4037-8557-2cff4424736c","Type":"ContainerDied","Data":"301a872f01ff0916215be48124e5464543b2e82fa12d79815db6c223f5a07c21"} Oct 08 09:30:19 crc kubenswrapper[4958]: I1008 09:30:19.053602 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbcjm" event={"ID":"8da24d86-20dc-4037-8557-2cff4424736c","Type":"ContainerStarted","Data":"4cdf827e7755957f8fec9dba7001705b8ab1dad5a140023ae9a95c8525707e7d"} Oct 08 09:30:19 crc kubenswrapper[4958]: I1008 09:30:19.071024 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tbcjm" podStartSLOduration=2.570003983 podStartE2EDuration="5.071006129s" podCreationTimestamp="2025-10-08 09:30:14 +0000 UTC" firstStartedPulling="2025-10-08 09:30:16.000566538 +0000 UTC m=+10559.130259159" lastFinishedPulling="2025-10-08 09:30:18.501568704 +0000 UTC m=+10561.631261305" observedRunningTime="2025-10-08 09:30:19.070123095 +0000 UTC m=+10562.199815706" watchObservedRunningTime="2025-10-08 09:30:19.071006129 +0000 UTC m=+10562.200698730" Oct 08 09:30:24 
crc kubenswrapper[4958]: I1008 09:30:24.578540 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607" Oct 08 09:30:24 crc kubenswrapper[4958]: E1008 09:30:24.587192 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:30:24 crc kubenswrapper[4958]: I1008 09:30:24.838732 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:24 crc kubenswrapper[4958]: I1008 09:30:24.838779 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:25 crc kubenswrapper[4958]: I1008 09:30:25.887004 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tbcjm" podUID="8da24d86-20dc-4037-8557-2cff4424736c" containerName="registry-server" probeResult="failure" output=< Oct 08 09:30:25 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Oct 08 09:30:25 crc kubenswrapper[4958]: > Oct 08 09:30:27 crc kubenswrapper[4958]: I1008 09:30:27.920484 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vwrq5"] Oct 08 09:30:27 crc kubenswrapper[4958]: I1008 09:30:27.923440 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwrq5" Oct 08 09:30:27 crc kubenswrapper[4958]: I1008 09:30:27.940703 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwrq5"] Oct 08 09:30:28 crc kubenswrapper[4958]: I1008 09:30:28.022360 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9c360b4-f0ca-4313-8a52-f64f663617f0-catalog-content\") pod \"redhat-marketplace-vwrq5\" (UID: \"c9c360b4-f0ca-4313-8a52-f64f663617f0\") " pod="openshift-marketplace/redhat-marketplace-vwrq5" Oct 08 09:30:28 crc kubenswrapper[4958]: I1008 09:30:28.022434 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsr5z\" (UniqueName: \"kubernetes.io/projected/c9c360b4-f0ca-4313-8a52-f64f663617f0-kube-api-access-wsr5z\") pod \"redhat-marketplace-vwrq5\" (UID: \"c9c360b4-f0ca-4313-8a52-f64f663617f0\") " pod="openshift-marketplace/redhat-marketplace-vwrq5" Oct 08 09:30:28 crc kubenswrapper[4958]: I1008 09:30:28.022536 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9c360b4-f0ca-4313-8a52-f64f663617f0-utilities\") pod \"redhat-marketplace-vwrq5\" (UID: \"c9c360b4-f0ca-4313-8a52-f64f663617f0\") " pod="openshift-marketplace/redhat-marketplace-vwrq5" Oct 08 09:30:28 crc kubenswrapper[4958]: I1008 09:30:28.124916 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9c360b4-f0ca-4313-8a52-f64f663617f0-catalog-content\") pod \"redhat-marketplace-vwrq5\" (UID: \"c9c360b4-f0ca-4313-8a52-f64f663617f0\") " pod="openshift-marketplace/redhat-marketplace-vwrq5" Oct 08 09:30:28 crc kubenswrapper[4958]: I1008 09:30:28.125014 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wsr5z\" (UniqueName: \"kubernetes.io/projected/c9c360b4-f0ca-4313-8a52-f64f663617f0-kube-api-access-wsr5z\") pod \"redhat-marketplace-vwrq5\" (UID: \"c9c360b4-f0ca-4313-8a52-f64f663617f0\") " pod="openshift-marketplace/redhat-marketplace-vwrq5" Oct 08 09:30:28 crc kubenswrapper[4958]: I1008 09:30:28.125105 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9c360b4-f0ca-4313-8a52-f64f663617f0-utilities\") pod \"redhat-marketplace-vwrq5\" (UID: \"c9c360b4-f0ca-4313-8a52-f64f663617f0\") " pod="openshift-marketplace/redhat-marketplace-vwrq5" Oct 08 09:30:28 crc kubenswrapper[4958]: I1008 09:30:28.125741 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9c360b4-f0ca-4313-8a52-f64f663617f0-utilities\") pod \"redhat-marketplace-vwrq5\" (UID: \"c9c360b4-f0ca-4313-8a52-f64f663617f0\") " pod="openshift-marketplace/redhat-marketplace-vwrq5" Oct 08 09:30:28 crc kubenswrapper[4958]: I1008 09:30:28.126020 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9c360b4-f0ca-4313-8a52-f64f663617f0-catalog-content\") pod \"redhat-marketplace-vwrq5\" (UID: \"c9c360b4-f0ca-4313-8a52-f64f663617f0\") " pod="openshift-marketplace/redhat-marketplace-vwrq5" Oct 08 09:30:28 crc kubenswrapper[4958]: I1008 09:30:28.158491 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsr5z\" (UniqueName: \"kubernetes.io/projected/c9c360b4-f0ca-4313-8a52-f64f663617f0-kube-api-access-wsr5z\") pod \"redhat-marketplace-vwrq5\" (UID: \"c9c360b4-f0ca-4313-8a52-f64f663617f0\") " pod="openshift-marketplace/redhat-marketplace-vwrq5" Oct 08 09:30:28 crc kubenswrapper[4958]: I1008 09:30:28.256358 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwrq5" Oct 08 09:30:28 crc kubenswrapper[4958]: I1008 09:30:28.744515 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwrq5"] Oct 08 09:30:29 crc kubenswrapper[4958]: I1008 09:30:29.163830 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9c360b4-f0ca-4313-8a52-f64f663617f0" containerID="b9f29b8c51937a1c96f4ff047f4a482c9d0448a4e38c32e6c516ee32fd45df55" exitCode=0 Oct 08 09:30:29 crc kubenswrapper[4958]: I1008 09:30:29.163923 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwrq5" event={"ID":"c9c360b4-f0ca-4313-8a52-f64f663617f0","Type":"ContainerDied","Data":"b9f29b8c51937a1c96f4ff047f4a482c9d0448a4e38c32e6c516ee32fd45df55"} Oct 08 09:30:29 crc kubenswrapper[4958]: I1008 09:30:29.164177 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwrq5" event={"ID":"c9c360b4-f0ca-4313-8a52-f64f663617f0","Type":"ContainerStarted","Data":"cef16cb2685e8b4a4c5a433a244dbde8f90d8e52103b9afd9e937fd0c2a96730"} Oct 08 09:30:31 crc kubenswrapper[4958]: I1008 09:30:31.190823 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9c360b4-f0ca-4313-8a52-f64f663617f0" containerID="ad119b49c907458e9cdb42b8a72364567644f65d05d47a0736743297e0f9ce3e" exitCode=0 Oct 08 09:30:31 crc kubenswrapper[4958]: I1008 09:30:31.191018 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwrq5" event={"ID":"c9c360b4-f0ca-4313-8a52-f64f663617f0","Type":"ContainerDied","Data":"ad119b49c907458e9cdb42b8a72364567644f65d05d47a0736743297e0f9ce3e"} Oct 08 09:30:33 crc kubenswrapper[4958]: I1008 09:30:33.214044 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwrq5" 
event={"ID":"c9c360b4-f0ca-4313-8a52-f64f663617f0","Type":"ContainerStarted","Data":"e184d43a8c09bde10bf38b0b061e332fddfc3f47a80570ffbd384095f8a277f4"} Oct 08 09:30:33 crc kubenswrapper[4958]: I1008 09:30:33.238866 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vwrq5" podStartSLOduration=3.813729785 podStartE2EDuration="6.23884637s" podCreationTimestamp="2025-10-08 09:30:27 +0000 UTC" firstStartedPulling="2025-10-08 09:30:29.179529251 +0000 UTC m=+10572.309221872" lastFinishedPulling="2025-10-08 09:30:31.604645846 +0000 UTC m=+10574.734338457" observedRunningTime="2025-10-08 09:30:33.232793665 +0000 UTC m=+10576.362486266" watchObservedRunningTime="2025-10-08 09:30:33.23884637 +0000 UTC m=+10576.368538971" Oct 08 09:30:34 crc kubenswrapper[4958]: I1008 09:30:34.891363 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:34 crc kubenswrapper[4958]: I1008 09:30:34.942646 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:36 crc kubenswrapper[4958]: I1008 09:30:36.315143 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tbcjm"] Oct 08 09:30:36 crc kubenswrapper[4958]: I1008 09:30:36.315765 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tbcjm" podUID="8da24d86-20dc-4037-8557-2cff4424736c" containerName="registry-server" containerID="cri-o://4cdf827e7755957f8fec9dba7001705b8ab1dad5a140023ae9a95c8525707e7d" gracePeriod=2 Oct 08 09:30:36 crc kubenswrapper[4958]: I1008 09:30:36.844231 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:36 crc kubenswrapper[4958]: I1008 09:30:36.912825 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da24d86-20dc-4037-8557-2cff4424736c-utilities\") pod \"8da24d86-20dc-4037-8557-2cff4424736c\" (UID: \"8da24d86-20dc-4037-8557-2cff4424736c\") " Oct 08 09:30:36 crc kubenswrapper[4958]: I1008 09:30:36.912874 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da24d86-20dc-4037-8557-2cff4424736c-catalog-content\") pod \"8da24d86-20dc-4037-8557-2cff4424736c\" (UID: \"8da24d86-20dc-4037-8557-2cff4424736c\") " Oct 08 09:30:36 crc kubenswrapper[4958]: I1008 09:30:36.913050 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqnxz\" (UniqueName: \"kubernetes.io/projected/8da24d86-20dc-4037-8557-2cff4424736c-kube-api-access-hqnxz\") pod \"8da24d86-20dc-4037-8557-2cff4424736c\" (UID: \"8da24d86-20dc-4037-8557-2cff4424736c\") " Oct 08 09:30:36 crc kubenswrapper[4958]: I1008 09:30:36.913466 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8da24d86-20dc-4037-8557-2cff4424736c-utilities" (OuterVolumeSpecName: "utilities") pod "8da24d86-20dc-4037-8557-2cff4424736c" (UID: "8da24d86-20dc-4037-8557-2cff4424736c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:30:36 crc kubenswrapper[4958]: I1008 09:30:36.914997 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da24d86-20dc-4037-8557-2cff4424736c-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 09:30:36 crc kubenswrapper[4958]: I1008 09:30:36.921621 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da24d86-20dc-4037-8557-2cff4424736c-kube-api-access-hqnxz" (OuterVolumeSpecName: "kube-api-access-hqnxz") pod "8da24d86-20dc-4037-8557-2cff4424736c" (UID: "8da24d86-20dc-4037-8557-2cff4424736c"). InnerVolumeSpecName "kube-api-access-hqnxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:30:36 crc kubenswrapper[4958]: I1008 09:30:36.957569 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8da24d86-20dc-4037-8557-2cff4424736c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8da24d86-20dc-4037-8557-2cff4424736c" (UID: "8da24d86-20dc-4037-8557-2cff4424736c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.015881 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da24d86-20dc-4037-8557-2cff4424736c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.015908 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqnxz\" (UniqueName: \"kubernetes.io/projected/8da24d86-20dc-4037-8557-2cff4424736c-kube-api-access-hqnxz\") on node \"crc\" DevicePath \"\"" Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.250983 4958 generic.go:334] "Generic (PLEG): container finished" podID="8da24d86-20dc-4037-8557-2cff4424736c" containerID="4cdf827e7755957f8fec9dba7001705b8ab1dad5a140023ae9a95c8525707e7d" exitCode=0 Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.251053 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tbcjm" Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.251056 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbcjm" event={"ID":"8da24d86-20dc-4037-8557-2cff4424736c","Type":"ContainerDied","Data":"4cdf827e7755957f8fec9dba7001705b8ab1dad5a140023ae9a95c8525707e7d"} Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.251388 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbcjm" event={"ID":"8da24d86-20dc-4037-8557-2cff4424736c","Type":"ContainerDied","Data":"70473704e36e7c95ec57ab05ade5919054080f4d119be6b78229351242988d35"} Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.251409 4958 scope.go:117] "RemoveContainer" containerID="4cdf827e7755957f8fec9dba7001705b8ab1dad5a140023ae9a95c8525707e7d" Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.291166 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-tbcjm"] Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.292538 4958 scope.go:117] "RemoveContainer" containerID="301a872f01ff0916215be48124e5464543b2e82fa12d79815db6c223f5a07c21" Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.301199 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tbcjm"] Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.353705 4958 scope.go:117] "RemoveContainer" containerID="1f2001bb8b7cafb299f0413cddfe7910a099c7d555682ae0a1ad719b4763d09c" Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.421225 4958 scope.go:117] "RemoveContainer" containerID="4cdf827e7755957f8fec9dba7001705b8ab1dad5a140023ae9a95c8525707e7d" Oct 08 09:30:37 crc kubenswrapper[4958]: E1008 09:30:37.421781 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cdf827e7755957f8fec9dba7001705b8ab1dad5a140023ae9a95c8525707e7d\": container with ID starting with 4cdf827e7755957f8fec9dba7001705b8ab1dad5a140023ae9a95c8525707e7d not found: ID does not exist" containerID="4cdf827e7755957f8fec9dba7001705b8ab1dad5a140023ae9a95c8525707e7d" Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.421822 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cdf827e7755957f8fec9dba7001705b8ab1dad5a140023ae9a95c8525707e7d"} err="failed to get container status \"4cdf827e7755957f8fec9dba7001705b8ab1dad5a140023ae9a95c8525707e7d\": rpc error: code = NotFound desc = could not find container \"4cdf827e7755957f8fec9dba7001705b8ab1dad5a140023ae9a95c8525707e7d\": container with ID starting with 4cdf827e7755957f8fec9dba7001705b8ab1dad5a140023ae9a95c8525707e7d not found: ID does not exist" Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.421851 4958 scope.go:117] "RemoveContainer" 
containerID="301a872f01ff0916215be48124e5464543b2e82fa12d79815db6c223f5a07c21" Oct 08 09:30:37 crc kubenswrapper[4958]: E1008 09:30:37.423554 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"301a872f01ff0916215be48124e5464543b2e82fa12d79815db6c223f5a07c21\": container with ID starting with 301a872f01ff0916215be48124e5464543b2e82fa12d79815db6c223f5a07c21 not found: ID does not exist" containerID="301a872f01ff0916215be48124e5464543b2e82fa12d79815db6c223f5a07c21" Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.423576 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"301a872f01ff0916215be48124e5464543b2e82fa12d79815db6c223f5a07c21"} err="failed to get container status \"301a872f01ff0916215be48124e5464543b2e82fa12d79815db6c223f5a07c21\": rpc error: code = NotFound desc = could not find container \"301a872f01ff0916215be48124e5464543b2e82fa12d79815db6c223f5a07c21\": container with ID starting with 301a872f01ff0916215be48124e5464543b2e82fa12d79815db6c223f5a07c21 not found: ID does not exist" Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.423589 4958 scope.go:117] "RemoveContainer" containerID="1f2001bb8b7cafb299f0413cddfe7910a099c7d555682ae0a1ad719b4763d09c" Oct 08 09:30:37 crc kubenswrapper[4958]: E1008 09:30:37.423948 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f2001bb8b7cafb299f0413cddfe7910a099c7d555682ae0a1ad719b4763d09c\": container with ID starting with 1f2001bb8b7cafb299f0413cddfe7910a099c7d555682ae0a1ad719b4763d09c not found: ID does not exist" containerID="1f2001bb8b7cafb299f0413cddfe7910a099c7d555682ae0a1ad719b4763d09c" Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.424008 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1f2001bb8b7cafb299f0413cddfe7910a099c7d555682ae0a1ad719b4763d09c"} err="failed to get container status \"1f2001bb8b7cafb299f0413cddfe7910a099c7d555682ae0a1ad719b4763d09c\": rpc error: code = NotFound desc = could not find container \"1f2001bb8b7cafb299f0413cddfe7910a099c7d555682ae0a1ad719b4763d09c\": container with ID starting with 1f2001bb8b7cafb299f0413cddfe7910a099c7d555682ae0a1ad719b4763d09c not found: ID does not exist"
Oct 08 09:30:37 crc kubenswrapper[4958]: I1008 09:30:37.592445 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8da24d86-20dc-4037-8557-2cff4424736c" path="/var/lib/kubelet/pods/8da24d86-20dc-4037-8557-2cff4424736c/volumes"
Oct 08 09:30:38 crc kubenswrapper[4958]: I1008 09:30:38.256966 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vwrq5"
Oct 08 09:30:38 crc kubenswrapper[4958]: I1008 09:30:38.257678 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vwrq5"
Oct 08 09:30:38 crc kubenswrapper[4958]: I1008 09:30:38.330473 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vwrq5"
Oct 08 09:30:38 crc kubenswrapper[4958]: I1008 09:30:38.577286 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:30:38 crc kubenswrapper[4958]: E1008 09:30:38.577634 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:30:39 crc kubenswrapper[4958]: I1008 09:30:39.714526 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vwrq5"
Oct 08 09:30:40 crc kubenswrapper[4958]: I1008 09:30:40.516625 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwrq5"]
Oct 08 09:30:42 crc kubenswrapper[4958]: I1008 09:30:42.314136 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vwrq5" podUID="c9c360b4-f0ca-4313-8a52-f64f663617f0" containerName="registry-server" containerID="cri-o://e184d43a8c09bde10bf38b0b061e332fddfc3f47a80570ffbd384095f8a277f4" gracePeriod=2
Oct 08 09:30:43 crc kubenswrapper[4958]: I1008 09:30:43.326274 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9c360b4-f0ca-4313-8a52-f64f663617f0" containerID="e184d43a8c09bde10bf38b0b061e332fddfc3f47a80570ffbd384095f8a277f4" exitCode=0
Oct 08 09:30:43 crc kubenswrapper[4958]: I1008 09:30:43.326421 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwrq5" event={"ID":"c9c360b4-f0ca-4313-8a52-f64f663617f0","Type":"ContainerDied","Data":"e184d43a8c09bde10bf38b0b061e332fddfc3f47a80570ffbd384095f8a277f4"}
Oct 08 09:30:43 crc kubenswrapper[4958]: I1008 09:30:43.326874 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwrq5" event={"ID":"c9c360b4-f0ca-4313-8a52-f64f663617f0","Type":"ContainerDied","Data":"cef16cb2685e8b4a4c5a433a244dbde8f90d8e52103b9afd9e937fd0c2a96730"}
Oct 08 09:30:43 crc kubenswrapper[4958]: I1008 09:30:43.326890 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cef16cb2685e8b4a4c5a433a244dbde8f90d8e52103b9afd9e937fd0c2a96730"
Oct 08 09:30:43 crc kubenswrapper[4958]: I1008 09:30:43.356153 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwrq5"
Oct 08 09:30:43 crc kubenswrapper[4958]: I1008 09:30:43.558804 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsr5z\" (UniqueName: \"kubernetes.io/projected/c9c360b4-f0ca-4313-8a52-f64f663617f0-kube-api-access-wsr5z\") pod \"c9c360b4-f0ca-4313-8a52-f64f663617f0\" (UID: \"c9c360b4-f0ca-4313-8a52-f64f663617f0\") "
Oct 08 09:30:43 crc kubenswrapper[4958]: I1008 09:30:43.559120 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9c360b4-f0ca-4313-8a52-f64f663617f0-catalog-content\") pod \"c9c360b4-f0ca-4313-8a52-f64f663617f0\" (UID: \"c9c360b4-f0ca-4313-8a52-f64f663617f0\") "
Oct 08 09:30:43 crc kubenswrapper[4958]: I1008 09:30:43.566686 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9c360b4-f0ca-4313-8a52-f64f663617f0-kube-api-access-wsr5z" (OuterVolumeSpecName: "kube-api-access-wsr5z") pod "c9c360b4-f0ca-4313-8a52-f64f663617f0" (UID: "c9c360b4-f0ca-4313-8a52-f64f663617f0"). InnerVolumeSpecName "kube-api-access-wsr5z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 08 09:30:43 crc kubenswrapper[4958]: I1008 09:30:43.573140 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9c360b4-f0ca-4313-8a52-f64f663617f0-utilities\") pod \"c9c360b4-f0ca-4313-8a52-f64f663617f0\" (UID: \"c9c360b4-f0ca-4313-8a52-f64f663617f0\") "
Oct 08 09:30:43 crc kubenswrapper[4958]: I1008 09:30:43.574641 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsr5z\" (UniqueName: \"kubernetes.io/projected/c9c360b4-f0ca-4313-8a52-f64f663617f0-kube-api-access-wsr5z\") on node \"crc\" DevicePath \"\""
Oct 08 09:30:43 crc kubenswrapper[4958]: I1008 09:30:43.578543 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9c360b4-f0ca-4313-8a52-f64f663617f0-utilities" (OuterVolumeSpecName: "utilities") pod "c9c360b4-f0ca-4313-8a52-f64f663617f0" (UID: "c9c360b4-f0ca-4313-8a52-f64f663617f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 09:30:43 crc kubenswrapper[4958]: I1008 09:30:43.597211 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9c360b4-f0ca-4313-8a52-f64f663617f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9c360b4-f0ca-4313-8a52-f64f663617f0" (UID: "c9c360b4-f0ca-4313-8a52-f64f663617f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 08 09:30:43 crc kubenswrapper[4958]: I1008 09:30:43.677182 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9c360b4-f0ca-4313-8a52-f64f663617f0-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 08 09:30:43 crc kubenswrapper[4958]: I1008 09:30:43.677223 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9c360b4-f0ca-4313-8a52-f64f663617f0-utilities\") on node \"crc\" DevicePath \"\""
Oct 08 09:30:44 crc kubenswrapper[4958]: I1008 09:30:44.337448 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwrq5"
Oct 08 09:30:44 crc kubenswrapper[4958]: I1008 09:30:44.374687 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwrq5"]
Oct 08 09:30:44 crc kubenswrapper[4958]: I1008 09:30:44.385353 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwrq5"]
Oct 08 09:30:45 crc kubenswrapper[4958]: I1008 09:30:45.589151 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9c360b4-f0ca-4313-8a52-f64f663617f0" path="/var/lib/kubelet/pods/c9c360b4-f0ca-4313-8a52-f64f663617f0/volumes"
Oct 08 09:30:50 crc kubenswrapper[4958]: I1008 09:30:50.561563 4958 scope.go:117] "RemoveContainer" containerID="9df52c463d4d5f7ec42af5148e4b00638d5f03c81fb2e6213f09a45fc5243482"
Oct 08 09:30:52 crc kubenswrapper[4958]: I1008 09:30:52.578065 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:30:52 crc kubenswrapper[4958]: E1008 09:30:52.578929 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:30:52 crc kubenswrapper[4958]: I1008 09:30:52.850910 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0debc9bd-75fc-4150-aece-ac82305e6847/init-config-reloader/0.log"
Oct 08 09:30:53 crc kubenswrapper[4958]: I1008 09:30:53.019993 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0debc9bd-75fc-4150-aece-ac82305e6847/init-config-reloader/0.log"
Oct 08 09:30:53 crc kubenswrapper[4958]: I1008 09:30:53.143805 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0debc9bd-75fc-4150-aece-ac82305e6847/alertmanager/0.log"
Oct 08 09:30:53 crc kubenswrapper[4958]: I1008 09:30:53.214756 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0debc9bd-75fc-4150-aece-ac82305e6847/config-reloader/0.log"
Oct 08 09:30:54 crc kubenswrapper[4958]: I1008 09:30:54.334511 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_94046b33-1212-42f7-a71e-c26cfbcf3815/aodh-api/0.log"
Oct 08 09:30:54 crc kubenswrapper[4958]: I1008 09:30:54.347451 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_94046b33-1212-42f7-a71e-c26cfbcf3815/aodh-evaluator/0.log"
Oct 08 09:30:54 crc kubenswrapper[4958]: I1008 09:30:54.521667 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_94046b33-1212-42f7-a71e-c26cfbcf3815/aodh-listener/0.log"
Oct 08 09:30:54 crc kubenswrapper[4958]: I1008 09:30:54.547252 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_94046b33-1212-42f7-a71e-c26cfbcf3815/aodh-notifier/0.log"
Oct 08 09:30:54 crc kubenswrapper[4958]: I1008 09:30:54.800460 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6987778d5b-jzqd7_678666be-bd10-482f-81e0-d809c4ce5862/barbican-api/0.log"
Oct 08 09:30:55 crc kubenswrapper[4958]: I1008 09:30:55.013985 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6987778d5b-jzqd7_678666be-bd10-482f-81e0-d809c4ce5862/barbican-api-log/0.log"
Oct 08 09:30:55 crc kubenswrapper[4958]: I1008 09:30:55.132212 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b48b74b7d-8p6dz_267d0d72-2044-4c4e-81bd-2f555587acad/barbican-keystone-listener/0.log"
Oct 08 09:30:55 crc kubenswrapper[4958]: I1008 09:30:55.223364 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b48b74b7d-8p6dz_267d0d72-2044-4c4e-81bd-2f555587acad/barbican-keystone-listener-log/0.log"
Oct 08 09:30:55 crc kubenswrapper[4958]: I1008 09:30:55.334629 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7bf5f9d487-99dwt_b37c31c2-c6b4-462f-ba30-7431cacff0dc/barbican-worker/0.log"
Oct 08 09:30:55 crc kubenswrapper[4958]: I1008 09:30:55.411688 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7bf5f9d487-99dwt_b37c31c2-c6b4-462f-ba30-7431cacff0dc/barbican-worker-log/0.log"
Oct 08 09:30:55 crc kubenswrapper[4958]: I1008 09:30:55.545593 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-8bxd8_3d9330f6-7990-462e-b85f-946c1255f76c/bootstrap-openstack-openstack-cell1/0.log"
Oct 08 09:30:55 crc kubenswrapper[4958]: I1008 09:30:55.728618 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4d3f4216-471c-41b2-ae15-4b1be93e0d9e/ceilometer-central-agent/0.log"
Oct 08 09:30:55 crc kubenswrapper[4958]: I1008 09:30:55.833301 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4d3f4216-471c-41b2-ae15-4b1be93e0d9e/ceilometer-notification-agent/0.log"
Oct 08 09:30:55 crc kubenswrapper[4958]: I1008 09:30:55.885843 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4d3f4216-471c-41b2-ae15-4b1be93e0d9e/proxy-httpd/0.log"
Oct 08 09:30:55 crc kubenswrapper[4958]: I1008 09:30:55.914544 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4d3f4216-471c-41b2-ae15-4b1be93e0d9e/sg-core/0.log"
Oct 08 09:30:56 crc kubenswrapper[4958]: I1008 09:30:56.120632 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6148a060-757f-4ab5-8cf2-e3f378a6ce59/cinder-api-log/0.log"
Oct 08 09:30:56 crc kubenswrapper[4958]: I1008 09:30:56.171217 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6148a060-757f-4ab5-8cf2-e3f378a6ce59/cinder-api/0.log"
Oct 08 09:30:56 crc kubenswrapper[4958]: I1008 09:30:56.279873 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e0aacf99-afe9-4f82-aabe-91eca23794e2/cinder-scheduler/0.log"
Oct 08 09:30:56 crc kubenswrapper[4958]: I1008 09:30:56.430329 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e0aacf99-afe9-4f82-aabe-91eca23794e2/probe/0.log"
Oct 08 09:30:56 crc kubenswrapper[4958]: I1008 09:30:56.541346 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-xl85k_b3ecaaf8-2e47-4f83-b958-d53fac1a6316/configure-network-openstack-openstack-cell1/0.log"
Oct 08 09:30:56 crc kubenswrapper[4958]: I1008 09:30:56.741973 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-lf2pg_5489b57b-58cd-4678-aa3e-6cda0bc13926/configure-os-openstack-openstack-cell1/0.log"
Oct 08 09:30:56 crc kubenswrapper[4958]: I1008 09:30:56.906622 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-599688d56f-pv87c_8e1183c9-43a6-4761-96ee-e8edd06a074b/init/0.log"
Oct 08 09:30:57 crc kubenswrapper[4958]: I1008 09:30:57.055126 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-599688d56f-pv87c_8e1183c9-43a6-4761-96ee-e8edd06a074b/init/0.log"
Oct 08 09:30:57 crc kubenswrapper[4958]: I1008 09:30:57.090793 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-599688d56f-pv87c_8e1183c9-43a6-4761-96ee-e8edd06a074b/dnsmasq-dns/0.log"
Oct 08 09:30:57 crc kubenswrapper[4958]: I1008 09:30:57.244103 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-fthbc_19cd175f-7673-40a6-96d9-c34c47158241/download-cache-openstack-openstack-cell1/0.log"
Oct 08 09:30:57 crc kubenswrapper[4958]: I1008 09:30:57.384252 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6a60cd39-56e8-4e32-845e-1a931ade509b/glance-httpd/0.log"
Oct 08 09:30:57 crc kubenswrapper[4958]: I1008 09:30:57.455393 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6a60cd39-56e8-4e32-845e-1a931ade509b/glance-log/0.log"
Oct 08 09:30:57 crc kubenswrapper[4958]: I1008 09:30:57.795253 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1/glance-httpd/0.log"
Oct 08 09:30:57 crc kubenswrapper[4958]: I1008 09:30:57.841115 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4b9f9f15-05df-4a08-ad5c-c7f1f4da6df1/glance-log/0.log"
Oct 08 09:30:58 crc kubenswrapper[4958]: I1008 09:30:58.620512 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-85bf9f6694-scp7q_09f35596-ca09-49f1-8b9d-7244e7cc1ebc/heat-api/0.log"
Oct 08 09:30:58 crc kubenswrapper[4958]: I1008 09:30:58.693238 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6d9898f7cb-8bmkv_305b9d6b-591e-41e2-82a1-4fa6053b4f45/heat-engine/0.log"
Oct 08 09:30:58 crc kubenswrapper[4958]: I1008 09:30:58.813174 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-67d857f9d-4g9rh_f5230e1f-f04d-4b47-9fde-33aa1f60ac05/heat-cfnapi/0.log"
Oct 08 09:30:58 crc kubenswrapper[4958]: I1008 09:30:58.923670 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5db9c89c9d-78bjw_9be8fb64-fc3c-4059-8e03-7a0b58cb30d4/horizon/0.log"
Oct 08 09:30:59 crc kubenswrapper[4958]: I1008 09:30:59.197983 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-l6x9j_836d37ec-3c7c-406c-85f5-0f825f3ea402/install-certs-openstack-openstack-cell1/0.log"
Oct 08 09:30:59 crc kubenswrapper[4958]: I1008 09:30:59.458983 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-l4sgd_566119a8-2995-4435-a0de-fba57da4718c/install-os-openstack-openstack-cell1/0.log"
Oct 08 09:30:59 crc kubenswrapper[4958]: I1008 09:30:59.622252 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5db9c89c9d-78bjw_9be8fb64-fc3c-4059-8e03-7a0b58cb30d4/horizon-log/0.log"
Oct 08 09:30:59 crc kubenswrapper[4958]: I1008 09:30:59.806735 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-64cb5f589-zm624_8e658785-9f48-4371-bbb5-2122dc1bebb3/keystone-api/0.log"
Oct 08 09:30:59 crc kubenswrapper[4958]: I1008 09:30:59.918198 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29331901-zk49x_e230b221-1e6c-4ce3-8bfa-ed276364b4f9/keystone-cron/0.log"
Oct 08 09:31:00 crc kubenswrapper[4958]: I1008 09:31:00.104237 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_dc1df803-110b-4f26-9e90-2c727d9bc1fa/kube-state-metrics/0.log"
Oct 08 09:31:00 crc kubenswrapper[4958]: I1008 09:31:00.248301 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-pndgl_fde2540a-73ef-4a3b-8efd-b7b6331fa25b/libvirt-openstack-openstack-cell1/0.log"
Oct 08 09:31:00 crc kubenswrapper[4958]: I1008 09:31:00.685272 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-695f697977-q27xm_8a5c8c02-dcde-4254-8285-2e88e7ba9e6b/neutron-api/0.log"
Oct 08 09:31:00 crc kubenswrapper[4958]: I1008 09:31:00.800629 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-695f697977-q27xm_8a5c8c02-dcde-4254-8285-2e88e7ba9e6b/neutron-httpd/0.log"
Oct 08 09:31:01 crc kubenswrapper[4958]: I1008 09:31:01.062292 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-xsjwr_bda6e9bf-819f-47bc-b19c-06c5aea5f9d4/neutron-dhcp-openstack-openstack-cell1/0.log"
Oct 08 09:31:01 crc kubenswrapper[4958]: I1008 09:31:01.326473 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-529p8_241e113b-bf3e-4beb-8ee1-ea9a67dd1ebb/neutron-metadata-openstack-openstack-cell1/0.log"
Oct 08 09:31:01 crc kubenswrapper[4958]: I1008 09:31:01.635964 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-nzcg9_e671c885-ccd0-4167-ab0e-434aea504b10/neutron-sriov-openstack-openstack-cell1/0.log"
Oct 08 09:31:02 crc kubenswrapper[4958]: I1008 09:31:02.003117 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6361d6b6-9daa-426e-9fad-bfb1e4cf53ec/nova-api-api/0.log"
Oct 08 09:31:02 crc kubenswrapper[4958]: I1008 09:31:02.151502 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6361d6b6-9daa-426e-9fad-bfb1e4cf53ec/nova-api-log/0.log"
Oct 08 09:31:02 crc kubenswrapper[4958]: I1008 09:31:02.479422 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f7af0860-a4f4-4d5b-9c7c-a9df3ab6d44c/nova-cell0-conductor-conductor/0.log"
Oct 08 09:31:02 crc kubenswrapper[4958]: I1008 09:31:02.818614 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_92d09715-fb73-466c-af3c-ca0d1aca203f/nova-cell1-conductor-conductor/0.log"
Oct 08 09:31:03 crc kubenswrapper[4958]: I1008 09:31:03.126298 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fc45e18e-9e6a-4fe4-bab8-0f6384fb29e0/nova-cell1-novncproxy-novncproxy/0.log"
Oct 08 09:31:03 crc kubenswrapper[4958]: I1008 09:31:03.487583 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvq4sr_093ebdbf-a8f2-422f-ad7f-a7893ea25990/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log"
Oct 08 09:31:03 crc kubenswrapper[4958]: I1008 09:31:03.811295 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-7qrs9_6698d0f1-ad89-4315-9970-fbe6b2b9a7e9/nova-cell1-openstack-openstack-cell1/0.log"
Oct 08 09:31:04 crc kubenswrapper[4958]: I1008 09:31:04.027122 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a42f13a8-39ee-400a-8bfa-1a3c9eeb739f/nova-metadata-log/0.log"
Oct 08 09:31:04 crc kubenswrapper[4958]: I1008 09:31:04.486065 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a42f13a8-39ee-400a-8bfa-1a3c9eeb739f/nova-metadata-metadata/0.log"
Oct 08 09:31:04 crc kubenswrapper[4958]: I1008 09:31:04.626675 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6a3c0020-8fcd-4c3e-a8c7-226f10f5373e/nova-scheduler-scheduler/0.log"
Oct 08 09:31:04 crc kubenswrapper[4958]: I1008 09:31:04.886752 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-6c7c5889cc-cf95j_b0ce43a6-91ba-4dd4-8d78-f8f70ff49236/init/0.log"
Oct 08 09:31:05 crc kubenswrapper[4958]: I1008 09:31:05.026977 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-6c7c5889cc-cf95j_b0ce43a6-91ba-4dd4-8d78-f8f70ff49236/init/0.log"
Oct 08 09:31:05 crc kubenswrapper[4958]: I1008 09:31:05.226570 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-6c7c5889cc-cf95j_b0ce43a6-91ba-4dd4-8d78-f8f70ff49236/octavia-api-provider-agent/0.log"
Oct 08 09:31:05 crc kubenswrapper[4958]: I1008 09:31:05.385275 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-6c7c5889cc-cf95j_b0ce43a6-91ba-4dd4-8d78-f8f70ff49236/octavia-api/0.log"
Oct 08 09:31:05 crc kubenswrapper[4958]: I1008 09:31:05.606417 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-jdcv4_e9500a2f-96a7-47bb-a498-d0b695ae541f/init/0.log"
Oct 08 09:31:05 crc kubenswrapper[4958]: I1008 09:31:05.766329 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-jdcv4_e9500a2f-96a7-47bb-a498-d0b695ae541f/init/0.log"
Oct 08 09:31:05 crc kubenswrapper[4958]: I1008 09:31:05.972021 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-jdcv4_e9500a2f-96a7-47bb-a498-d0b695ae541f/octavia-healthmanager/0.log"
Oct 08 09:31:06 crc kubenswrapper[4958]: I1008 09:31:06.214075 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-qcctv_2a4902a5-b947-44c9-a2f0-c97c277e6899/init/0.log"
Oct 08 09:31:06 crc kubenswrapper[4958]: I1008 09:31:06.414249 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-qcctv_2a4902a5-b947-44c9-a2f0-c97c277e6899/init/0.log"
Oct 08 09:31:06 crc kubenswrapper[4958]: I1008 09:31:06.515292 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-qcctv_2a4902a5-b947-44c9-a2f0-c97c277e6899/octavia-housekeeping/0.log"
Oct 08 09:31:06 crc kubenswrapper[4958]: I1008 09:31:06.576717 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:31:06 crc kubenswrapper[4958]: E1008 09:31:06.577139 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:31:06 crc kubenswrapper[4958]: I1008 09:31:06.723654 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-678599687f-7w6sz_efb7eef5-76f6-43d9-946f-d0e6091ed0da/init/0.log"
Oct 08 09:31:06 crc kubenswrapper[4958]: I1008 09:31:06.988321 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-678599687f-7w6sz_efb7eef5-76f6-43d9-946f-d0e6091ed0da/octavia-amphora-httpd/0.log"
Oct 08 09:31:07 crc kubenswrapper[4958]: I1008 09:31:07.015788 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-678599687f-7w6sz_efb7eef5-76f6-43d9-946f-d0e6091ed0da/init/0.log"
Oct 08 09:31:07 crc kubenswrapper[4958]: I1008 09:31:07.234002 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-9nlmv_0f1e2c22-078a-46e8-806f-db4f89afee77/init/0.log"
Oct 08 09:31:07 crc kubenswrapper[4958]: I1008 09:31:07.544291 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-9nlmv_0f1e2c22-078a-46e8-806f-db4f89afee77/init/0.log"
Oct 08 09:31:07 crc kubenswrapper[4958]: I1008 09:31:07.578847 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-9nlmv_0f1e2c22-078a-46e8-806f-db4f89afee77/octavia-rsyslog/0.log"
Oct 08 09:31:07 crc kubenswrapper[4958]: I1008 09:31:07.738968 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-dsclk_fe515cb3-71bb-4ce8-affe-db0501826bce/init/0.log"
Oct 08 09:31:07 crc kubenswrapper[4958]: I1008 09:31:07.965677 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-dsclk_fe515cb3-71bb-4ce8-affe-db0501826bce/init/0.log"
Oct 08 09:31:08 crc kubenswrapper[4958]: I1008 09:31:08.277761 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-dsclk_fe515cb3-71bb-4ce8-affe-db0501826bce/octavia-worker/0.log"
Oct 08 09:31:08 crc kubenswrapper[4958]: I1008 09:31:08.475881 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_eadd9987-ad00-49d2-be85-da766dc46f50/mysql-bootstrap/0.log"
Oct 08 09:31:08 crc kubenswrapper[4958]: I1008 09:31:08.603985 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_eadd9987-ad00-49d2-be85-da766dc46f50/mysql-bootstrap/0.log"
Oct 08 09:31:08 crc kubenswrapper[4958]: I1008 09:31:08.671064 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_eadd9987-ad00-49d2-be85-da766dc46f50/galera/0.log"
Oct 08 09:31:08 crc kubenswrapper[4958]: I1008 09:31:08.852935 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bf3d8da5-1056-41dd-b7d3-2c17a02dde4b/mysql-bootstrap/0.log"
Oct 08 09:31:09 crc kubenswrapper[4958]: I1008 09:31:09.108730 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bf3d8da5-1056-41dd-b7d3-2c17a02dde4b/mysql-bootstrap/0.log"
Oct 08 09:31:09 crc kubenswrapper[4958]: I1008 09:31:09.119664 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bf3d8da5-1056-41dd-b7d3-2c17a02dde4b/galera/0.log"
Oct 08 09:31:09 crc kubenswrapper[4958]: I1008 09:31:09.345384 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_2f04c432-37a2-4652-b37e-3f9c58b67b89/openstackclient/0.log"
Oct 08 09:31:09 crc kubenswrapper[4958]: I1008 09:31:09.536562 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-6tbx2_12392965-0a62-4b97-bd6f-fb401006313c/ovn-controller/0.log"
Oct 08 09:31:09 crc kubenswrapper[4958]: I1008 09:31:09.812098 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-shpqz_744988e9-f0b2-4f8e-a7d8-ba176dbba150/openstack-network-exporter/0.log"
Oct 08 09:31:10 crc kubenswrapper[4958]: I1008 09:31:10.031436 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hrj56_96db30c5-800d-4bf0-afb7-5c67001f8382/ovsdb-server-init/0.log"
Oct 08 09:31:10 crc kubenswrapper[4958]: I1008 09:31:10.260066 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hrj56_96db30c5-800d-4bf0-afb7-5c67001f8382/ovsdb-server-init/0.log"
Oct 08 09:31:10 crc kubenswrapper[4958]: I1008 09:31:10.278126 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hrj56_96db30c5-800d-4bf0-afb7-5c67001f8382/ovs-vswitchd/0.log"
Oct 08 09:31:10 crc kubenswrapper[4958]: I1008 09:31:10.444617 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hrj56_96db30c5-800d-4bf0-afb7-5c67001f8382/ovsdb-server/0.log"
Oct 08 09:31:10 crc kubenswrapper[4958]: I1008 09:31:10.675012 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9697b696-83b6-40de-a443-9705c0475f3c/openstack-network-exporter/0.log"
Oct 08 09:31:10 crc kubenswrapper[4958]: I1008 09:31:10.751266 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9697b696-83b6-40de-a443-9705c0475f3c/ovn-northd/0.log"
Oct 08 09:31:11 crc kubenswrapper[4958]: I1008 09:31:11.027182 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-fxgb2_a1d1afc2-f4df-4102-9be5-a39b40b9ee65/ovn-openstack-openstack-cell1/0.log"
Oct 08 09:31:11 crc kubenswrapper[4958]: I1008 09:31:11.223560 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2/openstack-network-exporter/0.log"
Oct 08 09:31:11 crc kubenswrapper[4958]: I1008 09:31:11.422103 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cd4d1e0b-93a4-41e6-8fdf-7760cd9f23b2/ovsdbserver-nb/0.log"
Oct 08 09:31:11 crc kubenswrapper[4958]: I1008 09:31:11.853393 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_5ab7032f-3737-4c89-a704-0c11294bcdcd/openstack-network-exporter/0.log"
Oct 08 09:31:11 crc kubenswrapper[4958]: I1008 09:31:11.944072 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_5ab7032f-3737-4c89-a704-0c11294bcdcd/ovsdbserver-nb/0.log"
Oct 08 09:31:12 crc kubenswrapper[4958]: I1008 09:31:12.179907 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_68d767b5-82f4-4695-943c-1553272efff6/openstack-network-exporter/0.log"
Oct 08 09:31:12 crc kubenswrapper[4958]: I1008 09:31:12.354314 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_68d767b5-82f4-4695-943c-1553272efff6/ovsdbserver-nb/0.log"
Oct 08 09:31:12 crc kubenswrapper[4958]: I1008 09:31:12.543871 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8ae295c4-128b-476d-801e-5b04f1a1eb58/openstack-network-exporter/0.log"
Oct 08 09:31:12 crc kubenswrapper[4958]: I1008 09:31:12.682591 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8ae295c4-128b-476d-801e-5b04f1a1eb58/ovsdbserver-sb/0.log"
Oct 08 09:31:12 crc kubenswrapper[4958]: I1008 09:31:12.908282 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_61178e97-c8df-469d-997f-753bce1d600e/openstack-network-exporter/0.log"
Oct 08 09:31:13 crc kubenswrapper[4958]: I1008 09:31:13.004022 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_61178e97-c8df-469d-997f-753bce1d600e/ovsdbserver-sb/0.log"
Oct 08 09:31:13 crc kubenswrapper[4958]: I1008 09:31:13.211254 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_3274cda4-e5f8-4bf5-af02-d269282b98d8/openstack-network-exporter/0.log"
Oct 08 09:31:13 crc kubenswrapper[4958]: I1008 09:31:13.402932 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_3274cda4-e5f8-4bf5-af02-d269282b98d8/ovsdbserver-sb/0.log"
Oct 08 09:31:13 crc kubenswrapper[4958]: I1008 09:31:13.675639 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-9c4cb9494-k7nws_d7d29e82-7666-4eec-bb3f-e3b15e01740d/placement-api/0.log"
Oct 08 09:31:13 crc kubenswrapper[4958]: I1008 09:31:13.743441 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-9c4cb9494-k7nws_d7d29e82-7666-4eec-bb3f-e3b15e01740d/placement-log/0.log"
Oct 08 09:31:13 crc kubenswrapper[4958]: I1008 09:31:13.977078 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-c9kcsp_c58f658e-42ed-4353-9aad-b3042bfaf65f/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log"
Oct 08 09:31:14 crc kubenswrapper[4958]: I1008 09:31:14.256944 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_820b139e-c192-4613-b18d-64a3ec276dae/init-config-reloader/0.log"
Oct 08 09:31:14 crc kubenswrapper[4958]: I1008 09:31:14.432097 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_820b139e-c192-4613-b18d-64a3ec276dae/config-reloader/0.log"
Oct 08 09:31:14 crc kubenswrapper[4958]: I1008 09:31:14.458353 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_820b139e-c192-4613-b18d-64a3ec276dae/init-config-reloader/0.log"
Oct 08 09:31:14 crc kubenswrapper[4958]: I1008 09:31:14.655091 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_820b139e-c192-4613-b18d-64a3ec276dae/prometheus/0.log"
Oct 08 09:31:14 crc kubenswrapper[4958]: I1008 09:31:14.678279 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_820b139e-c192-4613-b18d-64a3ec276dae/thanos-sidecar/0.log"
Oct 08 09:31:14 crc kubenswrapper[4958]: I1008 09:31:14.830789 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_62ba1db3-cb42-437b-bc67-08521afcd2d2/setup-container/0.log"
Oct 08 09:31:15 crc kubenswrapper[4958]: I1008 09:31:15.062142 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_62ba1db3-cb42-437b-bc67-08521afcd2d2/setup-container/0.log"
Oct 08 09:31:15 crc kubenswrapper[4958]: I1008 09:31:15.170736 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_62ba1db3-cb42-437b-bc67-08521afcd2d2/rabbitmq/0.log"
Oct 08 09:31:15 crc kubenswrapper[4958]: I1008 09:31:15.368264 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c3d0e354-e762-4a61-b317-93aad016cc26/setup-container/0.log"
Oct 08 09:31:15 crc kubenswrapper[4958]: I1008 09:31:15.549056 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c3d0e354-e762-4a61-b317-93aad016cc26/setup-container/0.log"
Oct 08 09:31:15 crc kubenswrapper[4958]: I1008 09:31:15.691288 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c3d0e354-e762-4a61-b317-93aad016cc26/rabbitmq/0.log"
Oct 08 09:31:16 crc kubenswrapper[4958]: I1008 09:31:16.603124 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-958gs_1a10c7d3-c479-4432-9d5c-89b4c5eae5ae/reboot-os-openstack-openstack-cell1/0.log"
Oct 08 09:31:16 crc kubenswrapper[4958]: I1008 09:31:16.848150 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-dnxt5_b134e035-bf32-4f2f-a14f-24105e9bcb87/run-os-openstack-openstack-cell1/0.log"
Oct 08 09:31:17 crc kubenswrapper[4958]: I1008 09:31:17.076388 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-r9wb9_adf74cc2-101a-479f-bcbb-9b298a3875bc/ssh-known-hosts-openstack/0.log"
Oct 08 09:31:17 crc kubenswrapper[4958]: I1008 09:31:17.466262 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6cc64ff97c-4bpv7_1522b4e2-e26c-43ce-ab2c-eb31043b4da7/proxy-server/0.log"
Oct 08 09:31:17 crc kubenswrapper[4958]: I1008 09:31:17.486710 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6cc64ff97c-4bpv7_1522b4e2-e26c-43ce-ab2c-eb31043b4da7/proxy-httpd/0.log"
Oct 08 09:31:17 crc kubenswrapper[4958]: I1008 09:31:17.671248 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-zz6vr_2f483059-1c2c-445a-ad16-0ba2c7c6abc4/swift-ring-rebalance/0.log"
Oct 08 09:31:17 crc kubenswrapper[4958]: I1008 09:31:17.988546 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-5ggmb_37c235bc-99b8-41ad-a6e5-e735be206363/telemetry-openstack-openstack-cell1/0.log"
Oct 08 09:31:18 crc kubenswrapper[4958]: I1008 09:31:18.472439 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-8hs4v_5b4bc8aa-7a0e-4767-a6d0-2b862d7cbf4b/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log"
Oct 08 09:31:18 crc kubenswrapper[4958]: I1008 09:31:18.773126 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-jds6t_f1189674-59af-4819-9d1e-024fc8c0d457/validate-network-openstack-openstack-cell1/0.log"
Oct 08 09:31:18 crc kubenswrapper[4958]: I1008 09:31:18.829745 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_acdf3e24-be4c-4a93-b803-a80604b2b4c2/memcached/0.log"
Oct 08 09:31:20 crc kubenswrapper[4958]: I1008 09:31:20.577382 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:31:20 crc kubenswrapper[4958]: E1008 09:31:20.578200 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 08 09:31:32 crc kubenswrapper[4958]: I1008 09:31:32.577891 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607"
Oct 08 09:31:32 crc kubenswrapper[4958]: E1008 09:31:32.578835 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"
Oct 
08 09:31:45 crc kubenswrapper[4958]: I1008 09:31:45.581042 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607" Oct 08 09:31:46 crc kubenswrapper[4958]: I1008 09:31:46.003263 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"94430232a247272208346ced4af30d22095fbb6942a04400ce4d86025bf1c385"} Oct 08 09:31:57 crc kubenswrapper[4958]: I1008 09:31:57.153486 4958 generic.go:334] "Generic (PLEG): container finished" podID="69e25545-4799-4347-8de3-66f12d03c6c7" containerID="44f70f66e3737d42cc2d83513195e2272a54f57e68bf17763abfb03e2bcc4c94" exitCode=0 Oct 08 09:31:57 crc kubenswrapper[4958]: I1008 09:31:57.153539 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzk4j/crc-debug-npwmq" event={"ID":"69e25545-4799-4347-8de3-66f12d03c6c7","Type":"ContainerDied","Data":"44f70f66e3737d42cc2d83513195e2272a54f57e68bf17763abfb03e2bcc4c94"} Oct 08 09:31:58 crc kubenswrapper[4958]: I1008 09:31:58.311326 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tzk4j/crc-debug-npwmq" Oct 08 09:31:58 crc kubenswrapper[4958]: I1008 09:31:58.330239 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvltr\" (UniqueName: \"kubernetes.io/projected/69e25545-4799-4347-8de3-66f12d03c6c7-kube-api-access-wvltr\") pod \"69e25545-4799-4347-8de3-66f12d03c6c7\" (UID: \"69e25545-4799-4347-8de3-66f12d03c6c7\") " Oct 08 09:31:58 crc kubenswrapper[4958]: I1008 09:31:58.330353 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69e25545-4799-4347-8de3-66f12d03c6c7-host\") pod \"69e25545-4799-4347-8de3-66f12d03c6c7\" (UID: \"69e25545-4799-4347-8de3-66f12d03c6c7\") " Oct 08 09:31:58 crc kubenswrapper[4958]: I1008 09:31:58.330467 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69e25545-4799-4347-8de3-66f12d03c6c7-host" (OuterVolumeSpecName: "host") pod "69e25545-4799-4347-8de3-66f12d03c6c7" (UID: "69e25545-4799-4347-8de3-66f12d03c6c7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 09:31:58 crc kubenswrapper[4958]: I1008 09:31:58.331674 4958 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69e25545-4799-4347-8de3-66f12d03c6c7-host\") on node \"crc\" DevicePath \"\"" Oct 08 09:31:58 crc kubenswrapper[4958]: I1008 09:31:58.341177 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e25545-4799-4347-8de3-66f12d03c6c7-kube-api-access-wvltr" (OuterVolumeSpecName: "kube-api-access-wvltr") pod "69e25545-4799-4347-8de3-66f12d03c6c7" (UID: "69e25545-4799-4347-8de3-66f12d03c6c7"). InnerVolumeSpecName "kube-api-access-wvltr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:31:58 crc kubenswrapper[4958]: I1008 09:31:58.360821 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tzk4j/crc-debug-npwmq"] Oct 08 09:31:58 crc kubenswrapper[4958]: I1008 09:31:58.373771 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tzk4j/crc-debug-npwmq"] Oct 08 09:31:58 crc kubenswrapper[4958]: I1008 09:31:58.434143 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvltr\" (UniqueName: \"kubernetes.io/projected/69e25545-4799-4347-8de3-66f12d03c6c7-kube-api-access-wvltr\") on node \"crc\" DevicePath \"\"" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.181924 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1de506096114787c6e3789b6662ed0203cd9c8c060d399a76d8e372a6a947b23" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.182020 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tzk4j/crc-debug-npwmq" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.595157 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e25545-4799-4347-8de3-66f12d03c6c7" path="/var/lib/kubelet/pods/69e25545-4799-4347-8de3-66f12d03c6c7/volumes" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.597738 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tzk4j/crc-debug-crwzg"] Oct 08 09:31:59 crc kubenswrapper[4958]: E1008 09:31:59.598306 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c360b4-f0ca-4313-8a52-f64f663617f0" containerName="extract-utilities" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.598410 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c360b4-f0ca-4313-8a52-f64f663617f0" containerName="extract-utilities" Oct 08 09:31:59 crc kubenswrapper[4958]: E1008 09:31:59.598538 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c360b4-f0ca-4313-8a52-f64f663617f0" containerName="extract-content" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.598620 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c360b4-f0ca-4313-8a52-f64f663617f0" containerName="extract-content" Oct 08 09:31:59 crc kubenswrapper[4958]: E1008 09:31:59.598709 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da24d86-20dc-4037-8557-2cff4424736c" containerName="extract-utilities" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.598791 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da24d86-20dc-4037-8557-2cff4424736c" containerName="extract-utilities" Oct 08 09:31:59 crc kubenswrapper[4958]: E1008 09:31:59.598933 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da24d86-20dc-4037-8557-2cff4424736c" containerName="registry-server" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.599288 4958 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8da24d86-20dc-4037-8557-2cff4424736c" containerName="registry-server" Oct 08 09:31:59 crc kubenswrapper[4958]: E1008 09:31:59.599376 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c360b4-f0ca-4313-8a52-f64f663617f0" containerName="registry-server" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.599452 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c360b4-f0ca-4313-8a52-f64f663617f0" containerName="registry-server" Oct 08 09:31:59 crc kubenswrapper[4958]: E1008 09:31:59.599551 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e25545-4799-4347-8de3-66f12d03c6c7" containerName="container-00" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.599630 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e25545-4799-4347-8de3-66f12d03c6c7" containerName="container-00" Oct 08 09:31:59 crc kubenswrapper[4958]: E1008 09:31:59.599715 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da24d86-20dc-4037-8557-2cff4424736c" containerName="extract-content" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.599785 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da24d86-20dc-4037-8557-2cff4424736c" containerName="extract-content" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.600181 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da24d86-20dc-4037-8557-2cff4424736c" containerName="registry-server" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.600297 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e25545-4799-4347-8de3-66f12d03c6c7" containerName="container-00" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.600392 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c360b4-f0ca-4313-8a52-f64f663617f0" containerName="registry-server" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.601393 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tzk4j/crc-debug-crwzg" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.662980 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/505b641d-efd9-4f75-8907-ed8d65132e90-host\") pod \"crc-debug-crwzg\" (UID: \"505b641d-efd9-4f75-8907-ed8d65132e90\") " pod="openshift-must-gather-tzk4j/crc-debug-crwzg" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.663069 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlpwm\" (UniqueName: \"kubernetes.io/projected/505b641d-efd9-4f75-8907-ed8d65132e90-kube-api-access-rlpwm\") pod \"crc-debug-crwzg\" (UID: \"505b641d-efd9-4f75-8907-ed8d65132e90\") " pod="openshift-must-gather-tzk4j/crc-debug-crwzg" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.764217 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/505b641d-efd9-4f75-8907-ed8d65132e90-host\") pod \"crc-debug-crwzg\" (UID: \"505b641d-efd9-4f75-8907-ed8d65132e90\") " pod="openshift-must-gather-tzk4j/crc-debug-crwzg" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.764302 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlpwm\" (UniqueName: \"kubernetes.io/projected/505b641d-efd9-4f75-8907-ed8d65132e90-kube-api-access-rlpwm\") pod \"crc-debug-crwzg\" (UID: \"505b641d-efd9-4f75-8907-ed8d65132e90\") " pod="openshift-must-gather-tzk4j/crc-debug-crwzg" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.764314 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/505b641d-efd9-4f75-8907-ed8d65132e90-host\") pod \"crc-debug-crwzg\" (UID: \"505b641d-efd9-4f75-8907-ed8d65132e90\") " pod="openshift-must-gather-tzk4j/crc-debug-crwzg" Oct 08 09:31:59 crc 
kubenswrapper[4958]: I1008 09:31:59.792240 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlpwm\" (UniqueName: \"kubernetes.io/projected/505b641d-efd9-4f75-8907-ed8d65132e90-kube-api-access-rlpwm\") pod \"crc-debug-crwzg\" (UID: \"505b641d-efd9-4f75-8907-ed8d65132e90\") " pod="openshift-must-gather-tzk4j/crc-debug-crwzg" Oct 08 09:31:59 crc kubenswrapper[4958]: I1008 09:31:59.920500 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzk4j/crc-debug-crwzg" Oct 08 09:31:59 crc kubenswrapper[4958]: W1008 09:31:59.949080 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod505b641d_efd9_4f75_8907_ed8d65132e90.slice/crio-655cb9200848eb1948a162f9c8e9b7b6a7b23c91a5c268e1b929f955ab1578f0 WatchSource:0}: Error finding container 655cb9200848eb1948a162f9c8e9b7b6a7b23c91a5c268e1b929f955ab1578f0: Status 404 returned error can't find the container with id 655cb9200848eb1948a162f9c8e9b7b6a7b23c91a5c268e1b929f955ab1578f0 Oct 08 09:32:00 crc kubenswrapper[4958]: I1008 09:32:00.192323 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzk4j/crc-debug-crwzg" event={"ID":"505b641d-efd9-4f75-8907-ed8d65132e90","Type":"ContainerStarted","Data":"655cb9200848eb1948a162f9c8e9b7b6a7b23c91a5c268e1b929f955ab1578f0"} Oct 08 09:32:01 crc kubenswrapper[4958]: I1008 09:32:01.218422 4958 generic.go:334] "Generic (PLEG): container finished" podID="505b641d-efd9-4f75-8907-ed8d65132e90" containerID="3d138dbf4b21be7785455b810c55ab3c47fd5a59d20e83018cd31f3db2aad32d" exitCode=0 Oct 08 09:32:01 crc kubenswrapper[4958]: I1008 09:32:01.218742 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzk4j/crc-debug-crwzg" event={"ID":"505b641d-efd9-4f75-8907-ed8d65132e90","Type":"ContainerDied","Data":"3d138dbf4b21be7785455b810c55ab3c47fd5a59d20e83018cd31f3db2aad32d"} Oct 08 
09:32:02 crc kubenswrapper[4958]: I1008 09:32:02.362631 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzk4j/crc-debug-crwzg" Oct 08 09:32:02 crc kubenswrapper[4958]: I1008 09:32:02.434613 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/505b641d-efd9-4f75-8907-ed8d65132e90-host\") pod \"505b641d-efd9-4f75-8907-ed8d65132e90\" (UID: \"505b641d-efd9-4f75-8907-ed8d65132e90\") " Oct 08 09:32:02 crc kubenswrapper[4958]: I1008 09:32:02.434834 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlpwm\" (UniqueName: \"kubernetes.io/projected/505b641d-efd9-4f75-8907-ed8d65132e90-kube-api-access-rlpwm\") pod \"505b641d-efd9-4f75-8907-ed8d65132e90\" (UID: \"505b641d-efd9-4f75-8907-ed8d65132e90\") " Oct 08 09:32:02 crc kubenswrapper[4958]: I1008 09:32:02.437410 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/505b641d-efd9-4f75-8907-ed8d65132e90-host" (OuterVolumeSpecName: "host") pod "505b641d-efd9-4f75-8907-ed8d65132e90" (UID: "505b641d-efd9-4f75-8907-ed8d65132e90"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 09:32:02 crc kubenswrapper[4958]: I1008 09:32:02.457189 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/505b641d-efd9-4f75-8907-ed8d65132e90-kube-api-access-rlpwm" (OuterVolumeSpecName: "kube-api-access-rlpwm") pod "505b641d-efd9-4f75-8907-ed8d65132e90" (UID: "505b641d-efd9-4f75-8907-ed8d65132e90"). InnerVolumeSpecName "kube-api-access-rlpwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:32:02 crc kubenswrapper[4958]: I1008 09:32:02.539325 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlpwm\" (UniqueName: \"kubernetes.io/projected/505b641d-efd9-4f75-8907-ed8d65132e90-kube-api-access-rlpwm\") on node \"crc\" DevicePath \"\"" Oct 08 09:32:02 crc kubenswrapper[4958]: I1008 09:32:02.539356 4958 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/505b641d-efd9-4f75-8907-ed8d65132e90-host\") on node \"crc\" DevicePath \"\"" Oct 08 09:32:03 crc kubenswrapper[4958]: I1008 09:32:03.250340 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzk4j/crc-debug-crwzg" event={"ID":"505b641d-efd9-4f75-8907-ed8d65132e90","Type":"ContainerDied","Data":"655cb9200848eb1948a162f9c8e9b7b6a7b23c91a5c268e1b929f955ab1578f0"} Oct 08 09:32:03 crc kubenswrapper[4958]: I1008 09:32:03.250381 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="655cb9200848eb1948a162f9c8e9b7b6a7b23c91a5c268e1b929f955ab1578f0" Oct 08 09:32:03 crc kubenswrapper[4958]: I1008 09:32:03.250868 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tzk4j/crc-debug-crwzg" Oct 08 09:32:12 crc kubenswrapper[4958]: I1008 09:32:12.261308 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tzk4j/crc-debug-crwzg"] Oct 08 09:32:12 crc kubenswrapper[4958]: I1008 09:32:12.268628 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tzk4j/crc-debug-crwzg"] Oct 08 09:32:13 crc kubenswrapper[4958]: I1008 09:32:13.478071 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tzk4j/crc-debug-k2v2g"] Oct 08 09:32:13 crc kubenswrapper[4958]: E1008 09:32:13.479075 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505b641d-efd9-4f75-8907-ed8d65132e90" containerName="container-00" Oct 08 09:32:13 crc kubenswrapper[4958]: I1008 09:32:13.479098 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="505b641d-efd9-4f75-8907-ed8d65132e90" containerName="container-00" Oct 08 09:32:13 crc kubenswrapper[4958]: I1008 09:32:13.479466 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="505b641d-efd9-4f75-8907-ed8d65132e90" containerName="container-00" Oct 08 09:32:13 crc kubenswrapper[4958]: I1008 09:32:13.480827 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tzk4j/crc-debug-k2v2g" Oct 08 09:32:13 crc kubenswrapper[4958]: I1008 09:32:13.602673 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="505b641d-efd9-4f75-8907-ed8d65132e90" path="/var/lib/kubelet/pods/505b641d-efd9-4f75-8907-ed8d65132e90/volumes" Oct 08 09:32:13 crc kubenswrapper[4958]: I1008 09:32:13.607396 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs582\" (UniqueName: \"kubernetes.io/projected/d00e885d-c3db-4ed9-96ea-ff47c7f5f496-kube-api-access-hs582\") pod \"crc-debug-k2v2g\" (UID: \"d00e885d-c3db-4ed9-96ea-ff47c7f5f496\") " pod="openshift-must-gather-tzk4j/crc-debug-k2v2g" Oct 08 09:32:13 crc kubenswrapper[4958]: I1008 09:32:13.607596 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d00e885d-c3db-4ed9-96ea-ff47c7f5f496-host\") pod \"crc-debug-k2v2g\" (UID: \"d00e885d-c3db-4ed9-96ea-ff47c7f5f496\") " pod="openshift-must-gather-tzk4j/crc-debug-k2v2g" Oct 08 09:32:13 crc kubenswrapper[4958]: I1008 09:32:13.710203 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs582\" (UniqueName: \"kubernetes.io/projected/d00e885d-c3db-4ed9-96ea-ff47c7f5f496-kube-api-access-hs582\") pod \"crc-debug-k2v2g\" (UID: \"d00e885d-c3db-4ed9-96ea-ff47c7f5f496\") " pod="openshift-must-gather-tzk4j/crc-debug-k2v2g" Oct 08 09:32:13 crc kubenswrapper[4958]: I1008 09:32:13.710423 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d00e885d-c3db-4ed9-96ea-ff47c7f5f496-host\") pod \"crc-debug-k2v2g\" (UID: \"d00e885d-c3db-4ed9-96ea-ff47c7f5f496\") " pod="openshift-must-gather-tzk4j/crc-debug-k2v2g" Oct 08 09:32:13 crc kubenswrapper[4958]: I1008 09:32:13.710569 4958 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d00e885d-c3db-4ed9-96ea-ff47c7f5f496-host\") pod \"crc-debug-k2v2g\" (UID: \"d00e885d-c3db-4ed9-96ea-ff47c7f5f496\") " pod="openshift-must-gather-tzk4j/crc-debug-k2v2g" Oct 08 09:32:13 crc kubenswrapper[4958]: I1008 09:32:13.744012 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs582\" (UniqueName: \"kubernetes.io/projected/d00e885d-c3db-4ed9-96ea-ff47c7f5f496-kube-api-access-hs582\") pod \"crc-debug-k2v2g\" (UID: \"d00e885d-c3db-4ed9-96ea-ff47c7f5f496\") " pod="openshift-must-gather-tzk4j/crc-debug-k2v2g" Oct 08 09:32:13 crc kubenswrapper[4958]: I1008 09:32:13.804907 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzk4j/crc-debug-k2v2g" Oct 08 09:32:13 crc kubenswrapper[4958]: W1008 09:32:13.856569 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd00e885d_c3db_4ed9_96ea_ff47c7f5f496.slice/crio-a8f47d72cb0d4d02f2843eb657ad457741c9a068da108a8c7e99a9ef2ae0e7a3 WatchSource:0}: Error finding container a8f47d72cb0d4d02f2843eb657ad457741c9a068da108a8c7e99a9ef2ae0e7a3: Status 404 returned error can't find the container with id a8f47d72cb0d4d02f2843eb657ad457741c9a068da108a8c7e99a9ef2ae0e7a3 Oct 08 09:32:14 crc kubenswrapper[4958]: I1008 09:32:14.378229 4958 generic.go:334] "Generic (PLEG): container finished" podID="d00e885d-c3db-4ed9-96ea-ff47c7f5f496" containerID="de1bcca69925ffadbae5337b65806823d1e44ec2f5b3b7c4eb3548474ce41f0f" exitCode=0 Oct 08 09:32:14 crc kubenswrapper[4958]: I1008 09:32:14.378381 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzk4j/crc-debug-k2v2g" event={"ID":"d00e885d-c3db-4ed9-96ea-ff47c7f5f496","Type":"ContainerDied","Data":"de1bcca69925ffadbae5337b65806823d1e44ec2f5b3b7c4eb3548474ce41f0f"} Oct 08 09:32:14 crc kubenswrapper[4958]: I1008 09:32:14.378512 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzk4j/crc-debug-k2v2g" event={"ID":"d00e885d-c3db-4ed9-96ea-ff47c7f5f496","Type":"ContainerStarted","Data":"a8f47d72cb0d4d02f2843eb657ad457741c9a068da108a8c7e99a9ef2ae0e7a3"} Oct 08 09:32:14 crc kubenswrapper[4958]: I1008 09:32:14.431003 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tzk4j/crc-debug-k2v2g"] Oct 08 09:32:14 crc kubenswrapper[4958]: I1008 09:32:14.448041 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tzk4j/crc-debug-k2v2g"] Oct 08 09:32:15 crc kubenswrapper[4958]: I1008 09:32:15.505476 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzk4j/crc-debug-k2v2g" Oct 08 09:32:15 crc kubenswrapper[4958]: I1008 09:32:15.658921 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs582\" (UniqueName: \"kubernetes.io/projected/d00e885d-c3db-4ed9-96ea-ff47c7f5f496-kube-api-access-hs582\") pod \"d00e885d-c3db-4ed9-96ea-ff47c7f5f496\" (UID: \"d00e885d-c3db-4ed9-96ea-ff47c7f5f496\") " Oct 08 09:32:15 crc kubenswrapper[4958]: I1008 09:32:15.659136 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d00e885d-c3db-4ed9-96ea-ff47c7f5f496-host\") pod \"d00e885d-c3db-4ed9-96ea-ff47c7f5f496\" (UID: \"d00e885d-c3db-4ed9-96ea-ff47c7f5f496\") " Oct 08 09:32:15 crc kubenswrapper[4958]: I1008 09:32:15.659933 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d00e885d-c3db-4ed9-96ea-ff47c7f5f496-host" (OuterVolumeSpecName: "host") pod "d00e885d-c3db-4ed9-96ea-ff47c7f5f496" (UID: "d00e885d-c3db-4ed9-96ea-ff47c7f5f496"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 08 09:32:15 crc kubenswrapper[4958]: I1008 09:32:15.674642 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d00e885d-c3db-4ed9-96ea-ff47c7f5f496-kube-api-access-hs582" (OuterVolumeSpecName: "kube-api-access-hs582") pod "d00e885d-c3db-4ed9-96ea-ff47c7f5f496" (UID: "d00e885d-c3db-4ed9-96ea-ff47c7f5f496"). InnerVolumeSpecName "kube-api-access-hs582". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:32:15 crc kubenswrapper[4958]: I1008 09:32:15.761716 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs582\" (UniqueName: \"kubernetes.io/projected/d00e885d-c3db-4ed9-96ea-ff47c7f5f496-kube-api-access-hs582\") on node \"crc\" DevicePath \"\"" Oct 08 09:32:15 crc kubenswrapper[4958]: I1008 09:32:15.761754 4958 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d00e885d-c3db-4ed9-96ea-ff47c7f5f496-host\") on node \"crc\" DevicePath \"\"" Oct 08 09:32:16 crc kubenswrapper[4958]: I1008 09:32:16.401790 4958 scope.go:117] "RemoveContainer" containerID="de1bcca69925ffadbae5337b65806823d1e44ec2f5b3b7c4eb3548474ce41f0f" Oct 08 09:32:16 crc kubenswrapper[4958]: I1008 09:32:16.401805 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tzk4j/crc-debug-k2v2g" Oct 08 09:32:16 crc kubenswrapper[4958]: I1008 09:32:16.935816 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn_c8e5b8dc-c42b-4e05-a9e3-ce4564b36035/util/0.log" Oct 08 09:32:17 crc kubenswrapper[4958]: I1008 09:32:17.099930 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn_c8e5b8dc-c42b-4e05-a9e3-ce4564b36035/pull/0.log" Oct 08 09:32:17 crc kubenswrapper[4958]: I1008 09:32:17.111074 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn_c8e5b8dc-c42b-4e05-a9e3-ce4564b36035/util/0.log" Oct 08 09:32:17 crc kubenswrapper[4958]: I1008 09:32:17.141268 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn_c8e5b8dc-c42b-4e05-a9e3-ce4564b36035/pull/0.log" Oct 08 09:32:17 crc kubenswrapper[4958]: I1008 09:32:17.261714 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn_c8e5b8dc-c42b-4e05-a9e3-ce4564b36035/util/0.log" Oct 08 09:32:17 crc kubenswrapper[4958]: I1008 09:32:17.310407 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn_c8e5b8dc-c42b-4e05-a9e3-ce4564b36035/extract/0.log" Oct 08 09:32:17 crc kubenswrapper[4958]: I1008 09:32:17.310642 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36122a92e6b12339df3c30325af8b6d046b31b95ba79e8e607770040959j6fn_c8e5b8dc-c42b-4e05-a9e3-ce4564b36035/pull/0.log" Oct 08 09:32:17 crc kubenswrapper[4958]: I1008 09:32:17.486584 4958 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f56ff694-6fjfx_abbaa69c-2318-4087-a167-0bbe69928971/kube-rbac-proxy/0.log" Oct 08 09:32:17 crc kubenswrapper[4958]: I1008 09:32:17.591454 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d00e885d-c3db-4ed9-96ea-ff47c7f5f496" path="/var/lib/kubelet/pods/d00e885d-c3db-4ed9-96ea-ff47c7f5f496/volumes" Oct 08 09:32:17 crc kubenswrapper[4958]: I1008 09:32:17.629346 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-84bd8f6848-zh97x_37de4c94-71ef-4563-915b-468370179903/kube-rbac-proxy/0.log" Oct 08 09:32:17 crc kubenswrapper[4958]: I1008 09:32:17.639841 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f56ff694-6fjfx_abbaa69c-2318-4087-a167-0bbe69928971/manager/0.log" Oct 08 09:32:17 crc kubenswrapper[4958]: I1008 09:32:17.766139 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-84bd8f6848-zh97x_37de4c94-71ef-4563-915b-468370179903/manager/0.log" Oct 08 09:32:17 crc kubenswrapper[4958]: I1008 09:32:17.825528 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-58d86cd59d-f8h2l_ac064bd6-8d20-4224-b54f-e074bff95072/manager/0.log" Oct 08 09:32:17 crc kubenswrapper[4958]: I1008 09:32:17.861566 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-58d86cd59d-f8h2l_ac064bd6-8d20-4224-b54f-e074bff95072/kube-rbac-proxy/0.log" Oct 08 09:32:17 crc kubenswrapper[4958]: I1008 09:32:17.988517 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-fd648f65-7l7c8_84708733-8897-4752-8533-5463ce01d265/kube-rbac-proxy/0.log" Oct 08 09:32:18 crc kubenswrapper[4958]: I1008 
09:32:18.125160 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-fd648f65-7l7c8_84708733-8897-4752-8533-5463ce01d265/manager/0.log" Oct 08 09:32:18 crc kubenswrapper[4958]: I1008 09:32:18.218129 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-7ccfc8cf49-wmvbd_8890b7c7-8aba-485e-85db-ee154714c358/kube-rbac-proxy/0.log" Oct 08 09:32:18 crc kubenswrapper[4958]: I1008 09:32:18.287477 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b477879bc-l8tj8_d6b4df20-5cc4-49a7-b124-ac88e068f9a0/kube-rbac-proxy/0.log" Oct 08 09:32:18 crc kubenswrapper[4958]: I1008 09:32:18.312939 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-7ccfc8cf49-wmvbd_8890b7c7-8aba-485e-85db-ee154714c358/manager/0.log" Oct 08 09:32:18 crc kubenswrapper[4958]: I1008 09:32:18.410711 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b477879bc-l8tj8_d6b4df20-5cc4-49a7-b124-ac88e068f9a0/manager/0.log" Oct 08 09:32:18 crc kubenswrapper[4958]: I1008 09:32:18.539486 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-84788b6bc5-6pggm_28d81d52-ba79-4c62-95a7-f5a1e48b8dda/kube-rbac-proxy/0.log" Oct 08 09:32:18 crc kubenswrapper[4958]: I1008 09:32:18.742016 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5467f8988c-wklfs_351db139-bb78-4975-a6c1-ceb4904347f0/kube-rbac-proxy/0.log" Oct 08 09:32:18 crc kubenswrapper[4958]: I1008 09:32:18.742745 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5467f8988c-wklfs_351db139-bb78-4975-a6c1-ceb4904347f0/manager/0.log" Oct 08 
09:32:18 crc kubenswrapper[4958]: I1008 09:32:18.747302 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-84788b6bc5-6pggm_28d81d52-ba79-4c62-95a7-f5a1e48b8dda/manager/0.log" Oct 08 09:32:18 crc kubenswrapper[4958]: I1008 09:32:18.962837 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5b84cc7657-v9mbt_60681278-f71a-4ec0-a572-e6c05783791c/kube-rbac-proxy/0.log" Oct 08 09:32:19 crc kubenswrapper[4958]: I1008 09:32:19.040843 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5b84cc7657-v9mbt_60681278-f71a-4ec0-a572-e6c05783791c/manager/0.log" Oct 08 09:32:19 crc kubenswrapper[4958]: I1008 09:32:19.138926 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7cb48dbc-qftvm_e503c39d-8eed-4db8-ad49-9a78f7c2bfa2/kube-rbac-proxy/0.log" Oct 08 09:32:19 crc kubenswrapper[4958]: I1008 09:32:19.156404 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7cb48dbc-qftvm_e503c39d-8eed-4db8-ad49-9a78f7c2bfa2/manager/0.log" Oct 08 09:32:19 crc kubenswrapper[4958]: I1008 09:32:19.208987 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-d6c9dc5bc-fhpnh_0443a624-6fd7-4b74-8e9c-7a1851459790/kube-rbac-proxy/0.log" Oct 08 09:32:19 crc kubenswrapper[4958]: I1008 09:32:19.380753 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-d6c9dc5bc-fhpnh_0443a624-6fd7-4b74-8e9c-7a1851459790/manager/0.log" Oct 08 09:32:19 crc kubenswrapper[4958]: I1008 09:32:19.397508 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-69b956fbf6-zc5vg_1f44d497-1eb5-40cd-9026-30d623318705/kube-rbac-proxy/0.log" Oct 08 09:32:19 crc kubenswrapper[4958]: I1008 09:32:19.475607 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-69b956fbf6-zc5vg_1f44d497-1eb5-40cd-9026-30d623318705/manager/0.log" Oct 08 09:32:19 crc kubenswrapper[4958]: I1008 09:32:19.576447 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6c9b57c67-f58xh_b60c40ec-ea4e-445c-8561-859cd7cd94de/kube-rbac-proxy/0.log" Oct 08 09:32:19 crc kubenswrapper[4958]: I1008 09:32:19.777195 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6c9b57c67-f58xh_b60c40ec-ea4e-445c-8561-859cd7cd94de/manager/0.log" Oct 08 09:32:19 crc kubenswrapper[4958]: I1008 09:32:19.876227 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f59f9d8-xbgdv_5fa5a7fd-cee3-4bc6-8908-f7c403ed7f92/kube-rbac-proxy/0.log" Oct 08 09:32:19 crc kubenswrapper[4958]: I1008 09:32:19.943117 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f59f9d8-xbgdv_5fa5a7fd-cee3-4bc6-8908-f7c403ed7f92/manager/0.log" Oct 08 09:32:20 crc kubenswrapper[4958]: I1008 09:32:20.010437 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6875c66686z426p_be960fcf-8121-4145-8548-a1a46dc9f8bb/kube-rbac-proxy/0.log" Oct 08 09:32:20 crc kubenswrapper[4958]: I1008 09:32:20.053695 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6875c66686z426p_be960fcf-8121-4145-8548-a1a46dc9f8bb/manager/0.log" Oct 08 09:32:20 crc kubenswrapper[4958]: I1008 
09:32:20.139526 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6bfd56c677-mtx9b_06a6179d-6e3a-4817-8403-7b6db9d933c8/kube-rbac-proxy/0.log" Oct 08 09:32:20 crc kubenswrapper[4958]: I1008 09:32:20.290608 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-b6d857f89-4vq4b_d19e23b3-612a-41d0-9bf1-d3dc773d692d/kube-rbac-proxy/0.log" Oct 08 09:32:20 crc kubenswrapper[4958]: I1008 09:32:20.428665 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-b6d857f89-4vq4b_d19e23b3-612a-41d0-9bf1-d3dc773d692d/operator/0.log" Oct 08 09:32:20 crc kubenswrapper[4958]: I1008 09:32:20.764521 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-n5wdt_3db691c9-ed86-4d49-9508-deb29b0e26ba/registry-server/0.log" Oct 08 09:32:21 crc kubenswrapper[4958]: I1008 09:32:21.014697 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54d485fd9-98gw2_b71b7f35-8fba-4da1-83f9-4b7c08b15990/kube-rbac-proxy/0.log" Oct 08 09:32:21 crc kubenswrapper[4958]: I1008 09:32:21.168878 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-66f6d6849b-j85xs_48aee074-1139-426c-a04c-9f65ccb3ccde/kube-rbac-proxy/0.log" Oct 08 09:32:21 crc kubenswrapper[4958]: I1008 09:32:21.246857 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54d485fd9-98gw2_b71b7f35-8fba-4da1-83f9-4b7c08b15990/manager/0.log" Oct 08 09:32:21 crc kubenswrapper[4958]: I1008 09:32:21.279176 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-66f6d6849b-j85xs_48aee074-1139-426c-a04c-9f65ccb3ccde/manager/0.log" Oct 08 
09:32:21 crc kubenswrapper[4958]: I1008 09:32:21.474803 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-qq7r8_8407a2bd-3444-4331-840d-c9729358f57a/operator/0.log" Oct 08 09:32:21 crc kubenswrapper[4958]: I1008 09:32:21.565105 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-76d5577b-pgq5t_8140f9b7-f1fe-4151-a453-ff7990ee085b/kube-rbac-proxy/0.log" Oct 08 09:32:21 crc kubenswrapper[4958]: I1008 09:32:21.608584 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-76d5577b-pgq5t_8140f9b7-f1fe-4151-a453-ff7990ee085b/manager/0.log" Oct 08 09:32:21 crc kubenswrapper[4958]: I1008 09:32:21.738796 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-f589c7597-p9q6v_88ec1ec0-d81b-4b01-a299-7980c8fbb961/kube-rbac-proxy/0.log" Oct 08 09:32:21 crc kubenswrapper[4958]: I1008 09:32:21.945094 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6bb6dcddc-lcqsc_fbdb6830-b324-47f5-8022-241051600f27/manager/0.log" Oct 08 09:32:21 crc kubenswrapper[4958]: I1008 09:32:21.948297 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6bb6dcddc-lcqsc_fbdb6830-b324-47f5-8022-241051600f27/kube-rbac-proxy/0.log" Oct 08 09:32:22 crc kubenswrapper[4958]: I1008 09:32:22.152503 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5d98cc5575-phmxn_0ce4d96e-f3ed-48b7-b871-e8113fb727a2/kube-rbac-proxy/0.log" Oct 08 09:32:22 crc kubenswrapper[4958]: I1008 09:32:22.221784 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-f589c7597-p9q6v_88ec1ec0-d81b-4b01-a299-7980c8fbb961/manager/0.log" Oct 08 09:32:22 crc kubenswrapper[4958]: I1008 09:32:22.243712 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5d98cc5575-phmxn_0ce4d96e-f3ed-48b7-b871-e8113fb727a2/manager/0.log" Oct 08 09:32:22 crc kubenswrapper[4958]: I1008 09:32:22.844460 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6bfd56c677-mtx9b_06a6179d-6e3a-4817-8403-7b6db9d933c8/manager/0.log" Oct 08 09:32:38 crc kubenswrapper[4958]: I1008 09:32:38.435661 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-9dwhm_582542ee-c2ad-4058-830e-dda091d4507a/control-plane-machine-set-operator/0.log" Oct 08 09:32:38 crc kubenswrapper[4958]: I1008 09:32:38.559382 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-clxh5_6c04fc7e-cc6d-4d08-99cc-752b31cb5110/machine-api-operator/0.log" Oct 08 09:32:38 crc kubenswrapper[4958]: I1008 09:32:38.590751 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-clxh5_6c04fc7e-cc6d-4d08-99cc-752b31cb5110/kube-rbac-proxy/0.log" Oct 08 09:32:52 crc kubenswrapper[4958]: I1008 09:32:52.825034 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-7d4cc89fcb-9z5df_0ccba3bb-cf15-4c0d-95db-7bc15614bdc8/cert-manager-controller/0.log" Oct 08 09:32:52 crc kubenswrapper[4958]: I1008 09:32:52.973912 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7d9f95dbf-68v2w_b4f54a9b-c106-4a5c-b785-2097ad0d263a/cert-manager-cainjector/0.log" Oct 08 09:32:53 crc kubenswrapper[4958]: I1008 09:32:53.000076 4958 log.go:25] 
"Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-d969966f-6qg2f_b8278bd4-b875-4f1c-bfd1-48547c6358ac/cert-manager-webhook/0.log" Oct 08 09:33:06 crc kubenswrapper[4958]: I1008 09:33:06.925678 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-f4z6n_b2433751-371a-4030-94ff-aff641121a0a/nmstate-console-plugin/0.log" Oct 08 09:33:07 crc kubenswrapper[4958]: I1008 09:33:07.100330 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-t28hq_87cfba69-24de-4e63-8132-80091c6cdd43/nmstate-handler/0.log" Oct 08 09:33:07 crc kubenswrapper[4958]: I1008 09:33:07.183437 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-ft9jc_0e3927d1-6c4c-4758-ba2d-455aea8dd388/nmstate-metrics/0.log" Oct 08 09:33:07 crc kubenswrapper[4958]: I1008 09:33:07.184039 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-ft9jc_0e3927d1-6c4c-4758-ba2d-455aea8dd388/kube-rbac-proxy/0.log" Oct 08 09:33:07 crc kubenswrapper[4958]: I1008 09:33:07.358403 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-sdkfc_60067db5-d91c-42cb-b25e-13cc170a1a14/nmstate-operator/0.log" Oct 08 09:33:07 crc kubenswrapper[4958]: I1008 09:33:07.400512 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-kxhxj_d38d6806-1278-4fa5-9a83-2a8adae79a2c/nmstate-webhook/0.log" Oct 08 09:33:24 crc kubenswrapper[4958]: I1008 09:33:24.528818 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-gtdwt_a010ae87-52ed-4aaf-b47d-75bd4f76803e/kube-rbac-proxy/0.log" Oct 08 09:33:24 crc kubenswrapper[4958]: I1008 09:33:24.787254 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/cp-frr-files/0.log" Oct 08 09:33:24 crc kubenswrapper[4958]: I1008 09:33:24.918808 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/cp-frr-files/0.log" Oct 08 09:33:24 crc kubenswrapper[4958]: I1008 09:33:24.998736 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/cp-metrics/0.log" Oct 08 09:33:25 crc kubenswrapper[4958]: I1008 09:33:25.034076 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/cp-reloader/0.log" Oct 08 09:33:25 crc kubenswrapper[4958]: I1008 09:33:25.051331 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-gtdwt_a010ae87-52ed-4aaf-b47d-75bd4f76803e/controller/0.log" Oct 08 09:33:25 crc kubenswrapper[4958]: I1008 09:33:25.101700 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/cp-reloader/0.log" Oct 08 09:33:25 crc kubenswrapper[4958]: I1008 09:33:25.303898 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/cp-reloader/0.log" Oct 08 09:33:25 crc kubenswrapper[4958]: I1008 09:33:25.349262 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/cp-frr-files/0.log" Oct 08 09:33:25 crc kubenswrapper[4958]: I1008 09:33:25.352521 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/cp-metrics/0.log" Oct 08 09:33:25 crc kubenswrapper[4958]: I1008 09:33:25.387254 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/cp-metrics/0.log" Oct 08 09:33:25 crc kubenswrapper[4958]: I1008 09:33:25.573495 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/cp-frr-files/0.log" Oct 08 09:33:25 crc kubenswrapper[4958]: I1008 09:33:25.597308 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/cp-reloader/0.log" Oct 08 09:33:25 crc kubenswrapper[4958]: I1008 09:33:25.605601 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/controller/0.log" Oct 08 09:33:25 crc kubenswrapper[4958]: I1008 09:33:25.643103 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/cp-metrics/0.log" Oct 08 09:33:25 crc kubenswrapper[4958]: I1008 09:33:25.768738 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/frr-metrics/0.log" Oct 08 09:33:25 crc kubenswrapper[4958]: I1008 09:33:25.827702 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/kube-rbac-proxy/0.log" Oct 08 09:33:25 crc kubenswrapper[4958]: I1008 09:33:25.879428 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/kube-rbac-proxy-frr/0.log" Oct 08 09:33:25 crc kubenswrapper[4958]: I1008 09:33:25.999322 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/reloader/0.log" Oct 08 09:33:26 crc kubenswrapper[4958]: I1008 09:33:26.071365 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-5jj6g_8c4e8505-657a-4d15-b0d9-a2b7a2cd6b7a/frr-k8s-webhook-server/0.log" Oct 08 09:33:26 crc kubenswrapper[4958]: I1008 09:33:26.298539 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c464c978f-r4jsv_de7476e7-bd6f-44c3-a831-2143114d89be/manager/0.log" Oct 08 09:33:26 crc kubenswrapper[4958]: I1008 09:33:26.521872 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-b4xgg_d7953bb6-201d-4470-96d6-f1ffc75ad4a9/kube-rbac-proxy/0.log" Oct 08 09:33:26 crc kubenswrapper[4958]: I1008 09:33:26.569757 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-67bdd4c657-nfh8t_28bc683e-4fe9-4276-96c4-4b63e96f368d/webhook-server/0.log" Oct 08 09:33:27 crc kubenswrapper[4958]: I1008 09:33:27.696858 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-b4xgg_d7953bb6-201d-4470-96d6-f1ffc75ad4a9/speaker/0.log" Oct 08 09:33:29 crc kubenswrapper[4958]: I1008 09:33:29.084055 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-67rh4_d47b4d36-5b8e-4da8-8896-a5bb3f88a473/frr/0.log" Oct 08 09:33:42 crc kubenswrapper[4958]: I1008 09:33:42.744140 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh_3c102b8e-9719-43ed-a225-bd3425249d4e/util/0.log" Oct 08 09:33:42 crc kubenswrapper[4958]: I1008 09:33:42.989047 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh_3c102b8e-9719-43ed-a225-bd3425249d4e/util/0.log" Oct 08 09:33:43 crc kubenswrapper[4958]: I1008 09:33:43.057180 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh_3c102b8e-9719-43ed-a225-bd3425249d4e/pull/0.log" Oct 08 09:33:43 crc kubenswrapper[4958]: I1008 09:33:43.058854 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh_3c102b8e-9719-43ed-a225-bd3425249d4e/pull/0.log" Oct 08 09:33:43 crc kubenswrapper[4958]: I1008 09:33:43.246208 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh_3c102b8e-9719-43ed-a225-bd3425249d4e/util/0.log" Oct 08 09:33:43 crc kubenswrapper[4958]: I1008 09:33:43.279117 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh_3c102b8e-9719-43ed-a225-bd3425249d4e/pull/0.log" Oct 08 09:33:43 crc kubenswrapper[4958]: I1008 09:33:43.280715 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69j9xjh_3c102b8e-9719-43ed-a225-bd3425249d4e/extract/0.log" Oct 08 09:33:43 crc kubenswrapper[4958]: I1008 09:33:43.872140 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj_d06f1d78-b394-4063-9af1-a53ec1eaae2b/util/0.log" Oct 08 09:33:44 crc kubenswrapper[4958]: I1008 09:33:44.053052 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj_d06f1d78-b394-4063-9af1-a53ec1eaae2b/pull/0.log" Oct 08 09:33:44 crc kubenswrapper[4958]: I1008 09:33:44.062860 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj_d06f1d78-b394-4063-9af1-a53ec1eaae2b/pull/0.log" Oct 08 
09:33:44 crc kubenswrapper[4958]: I1008 09:33:44.063015 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj_d06f1d78-b394-4063-9af1-a53ec1eaae2b/util/0.log" Oct 08 09:33:44 crc kubenswrapper[4958]: I1008 09:33:44.226685 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj_d06f1d78-b394-4063-9af1-a53ec1eaae2b/util/0.log" Oct 08 09:33:44 crc kubenswrapper[4958]: I1008 09:33:44.260563 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj_d06f1d78-b394-4063-9af1-a53ec1eaae2b/pull/0.log" Oct 08 09:33:44 crc kubenswrapper[4958]: I1008 09:33:44.293572 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d22tlrj_d06f1d78-b394-4063-9af1-a53ec1eaae2b/extract/0.log" Oct 08 09:33:44 crc kubenswrapper[4958]: I1008 09:33:44.465121 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x_a06b7639-7a8f-4e12-823d-d79e87e8c8ca/util/0.log" Oct 08 09:33:44 crc kubenswrapper[4958]: I1008 09:33:44.575716 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x_a06b7639-7a8f-4e12-823d-d79e87e8c8ca/util/0.log" Oct 08 09:33:44 crc kubenswrapper[4958]: I1008 09:33:44.599248 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x_a06b7639-7a8f-4e12-823d-d79e87e8c8ca/pull/0.log" Oct 08 09:33:44 crc kubenswrapper[4958]: I1008 09:33:44.666896 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x_a06b7639-7a8f-4e12-823d-d79e87e8c8ca/pull/0.log" Oct 08 09:33:44 crc kubenswrapper[4958]: I1008 09:33:44.769785 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x_a06b7639-7a8f-4e12-823d-d79e87e8c8ca/util/0.log" Oct 08 09:33:44 crc kubenswrapper[4958]: I1008 09:33:44.772231 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x_a06b7639-7a8f-4e12-823d-d79e87e8c8ca/extract/0.log" Oct 08 09:33:44 crc kubenswrapper[4958]: I1008 09:33:44.774639 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d9pp9x_a06b7639-7a8f-4e12-823d-d79e87e8c8ca/pull/0.log" Oct 08 09:33:44 crc kubenswrapper[4958]: I1008 09:33:44.924893 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc56k_1264ef36-ebe0-4e83-a46a-f43616fdc1c4/extract-utilities/0.log" Oct 08 09:33:45 crc kubenswrapper[4958]: I1008 09:33:45.106021 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc56k_1264ef36-ebe0-4e83-a46a-f43616fdc1c4/extract-utilities/0.log" Oct 08 09:33:45 crc kubenswrapper[4958]: I1008 09:33:45.145111 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc56k_1264ef36-ebe0-4e83-a46a-f43616fdc1c4/extract-content/0.log" Oct 08 09:33:45 crc kubenswrapper[4958]: I1008 09:33:45.148673 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc56k_1264ef36-ebe0-4e83-a46a-f43616fdc1c4/extract-content/0.log" Oct 08 09:33:45 crc kubenswrapper[4958]: I1008 09:33:45.347359 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-wc56k_1264ef36-ebe0-4e83-a46a-f43616fdc1c4/extract-utilities/0.log" Oct 08 09:33:45 crc kubenswrapper[4958]: I1008 09:33:45.354508 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc56k_1264ef36-ebe0-4e83-a46a-f43616fdc1c4/extract-content/0.log" Oct 08 09:33:45 crc kubenswrapper[4958]: I1008 09:33:45.362783 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kk8np_5faf3f82-75c1-415c-bf3e-9c9e7340f3aa/extract-utilities/0.log" Oct 08 09:33:45 crc kubenswrapper[4958]: I1008 09:33:45.552719 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kk8np_5faf3f82-75c1-415c-bf3e-9c9e7340f3aa/extract-content/0.log" Oct 08 09:33:45 crc kubenswrapper[4958]: I1008 09:33:45.577475 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kk8np_5faf3f82-75c1-415c-bf3e-9c9e7340f3aa/extract-utilities/0.log" Oct 08 09:33:45 crc kubenswrapper[4958]: I1008 09:33:45.608030 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kk8np_5faf3f82-75c1-415c-bf3e-9c9e7340f3aa/extract-content/0.log" Oct 08 09:33:45 crc kubenswrapper[4958]: I1008 09:33:45.695678 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc56k_1264ef36-ebe0-4e83-a46a-f43616fdc1c4/registry-server/0.log" Oct 08 09:33:45 crc kubenswrapper[4958]: I1008 09:33:45.767584 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kk8np_5faf3f82-75c1-415c-bf3e-9c9e7340f3aa/extract-content/0.log" Oct 08 09:33:45 crc kubenswrapper[4958]: I1008 09:33:45.772544 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-kk8np_5faf3f82-75c1-415c-bf3e-9c9e7340f3aa/extract-utilities/0.log" Oct 08 09:33:45 crc kubenswrapper[4958]: I1008 09:33:45.870491 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8_71d1d831-4efc-496d-8876-419f604cd0c8/util/0.log" Oct 08 09:33:46 crc kubenswrapper[4958]: I1008 09:33:46.118201 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8_71d1d831-4efc-496d-8876-419f604cd0c8/util/0.log" Oct 08 09:33:46 crc kubenswrapper[4958]: I1008 09:33:46.145883 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8_71d1d831-4efc-496d-8876-419f604cd0c8/pull/0.log" Oct 08 09:33:46 crc kubenswrapper[4958]: I1008 09:33:46.179910 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8_71d1d831-4efc-496d-8876-419f604cd0c8/pull/0.log" Oct 08 09:33:46 crc kubenswrapper[4958]: I1008 09:33:46.357050 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8_71d1d831-4efc-496d-8876-419f604cd0c8/util/0.log" Oct 08 09:33:46 crc kubenswrapper[4958]: I1008 09:33:46.385556 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8_71d1d831-4efc-496d-8876-419f604cd0c8/pull/0.log" Oct 08 09:33:46 crc kubenswrapper[4958]: I1008 09:33:46.397774 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cnbpr8_71d1d831-4efc-496d-8876-419f604cd0c8/extract/0.log" Oct 08 09:33:46 crc 
kubenswrapper[4958]: I1008 09:33:46.557763 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-zbtmm_8eb7ce86-513c-4373-b20a-1eb9eb0dd65d/marketplace-operator/0.log" Oct 08 09:33:46 crc kubenswrapper[4958]: I1008 09:33:46.638185 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xz5ng_7d5615b0-6b66-4d00-89b5-93bd6aa32858/extract-utilities/0.log" Oct 08 09:33:46 crc kubenswrapper[4958]: I1008 09:33:46.822508 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kk8np_5faf3f82-75c1-415c-bf3e-9c9e7340f3aa/registry-server/0.log" Oct 08 09:33:46 crc kubenswrapper[4958]: I1008 09:33:46.839107 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xz5ng_7d5615b0-6b66-4d00-89b5-93bd6aa32858/extract-utilities/0.log" Oct 08 09:33:46 crc kubenswrapper[4958]: I1008 09:33:46.845795 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xz5ng_7d5615b0-6b66-4d00-89b5-93bd6aa32858/extract-content/0.log" Oct 08 09:33:46 crc kubenswrapper[4958]: I1008 09:33:46.882207 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xz5ng_7d5615b0-6b66-4d00-89b5-93bd6aa32858/extract-content/0.log" Oct 08 09:33:47 crc kubenswrapper[4958]: I1008 09:33:47.048541 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xz5ng_7d5615b0-6b66-4d00-89b5-93bd6aa32858/extract-content/0.log" Oct 08 09:33:47 crc kubenswrapper[4958]: I1008 09:33:47.076806 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xz5ng_7d5615b0-6b66-4d00-89b5-93bd6aa32858/extract-utilities/0.log" Oct 08 09:33:47 crc kubenswrapper[4958]: I1008 09:33:47.164080 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-j6jqg_59c0f741-b149-4fc0-b2de-405c5d2bc0db/extract-utilities/0.log" Oct 08 09:33:47 crc kubenswrapper[4958]: I1008 09:33:47.353572 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j6jqg_59c0f741-b149-4fc0-b2de-405c5d2bc0db/extract-utilities/0.log" Oct 08 09:33:47 crc kubenswrapper[4958]: I1008 09:33:47.359191 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j6jqg_59c0f741-b149-4fc0-b2de-405c5d2bc0db/extract-content/0.log" Oct 08 09:33:47 crc kubenswrapper[4958]: I1008 09:33:47.366981 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j6jqg_59c0f741-b149-4fc0-b2de-405c5d2bc0db/extract-content/0.log" Oct 08 09:33:47 crc kubenswrapper[4958]: I1008 09:33:47.498300 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xz5ng_7d5615b0-6b66-4d00-89b5-93bd6aa32858/registry-server/0.log" Oct 08 09:33:47 crc kubenswrapper[4958]: I1008 09:33:47.589283 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j6jqg_59c0f741-b149-4fc0-b2de-405c5d2bc0db/extract-utilities/0.log" Oct 08 09:33:47 crc kubenswrapper[4958]: I1008 09:33:47.632389 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j6jqg_59c0f741-b149-4fc0-b2de-405c5d2bc0db/extract-content/0.log" Oct 08 09:33:48 crc kubenswrapper[4958]: I1008 09:33:48.883702 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j6jqg_59c0f741-b149-4fc0-b2de-405c5d2bc0db/registry-server/0.log" Oct 08 09:34:01 crc kubenswrapper[4958]: I1008 09:34:01.836211 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-7c8cf85677-f29ft_bccd6a58-4d99-48b2-8be1-a06433166b30/prometheus-operator/0.log" Oct 08 09:34:01 crc kubenswrapper[4958]: I1008 09:34:01.994274 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56b988d98c-6229j_30fb4dfe-018c-4772-b527-7955cab889da/prometheus-operator-admission-webhook/0.log" Oct 08 09:34:02 crc kubenswrapper[4958]: I1008 09:34:02.064536 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56b988d98c-s98xq_7b191752-8192-4f54-9b6e-f3027b9e1104/prometheus-operator-admission-webhook/0.log" Oct 08 09:34:02 crc kubenswrapper[4958]: I1008 09:34:02.192189 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-cc5f78dfc-29nr5_1fe7fd1d-f173-4506-81a2-031a921210f7/operator/0.log" Oct 08 09:34:02 crc kubenswrapper[4958]: I1008 09:34:02.238820 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-54bc95c9fb-2vdst_0151dc13-9ece-481a-9c20-158d3d50af3a/perses-operator/0.log" Oct 08 09:34:06 crc kubenswrapper[4958]: I1008 09:34:06.845293 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:34:06 crc kubenswrapper[4958]: I1008 09:34:06.846976 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:34:30 crc kubenswrapper[4958]: E1008 
09:34:30.318933 4958 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.115:60784->38.102.83.115:46157: write tcp 38.102.83.115:60784->38.102.83.115:46157: write: broken pipe Oct 08 09:34:36 crc kubenswrapper[4958]: I1008 09:34:36.845071 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:34:36 crc kubenswrapper[4958]: I1008 09:34:36.846067 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:35:06 crc kubenswrapper[4958]: I1008 09:35:06.845478 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:35:06 crc kubenswrapper[4958]: I1008 09:35:06.846053 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:35:06 crc kubenswrapper[4958]: I1008 09:35:06.846117 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 09:35:06 crc kubenswrapper[4958]: I1008 09:35:06.847440 4958 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94430232a247272208346ced4af30d22095fbb6942a04400ce4d86025bf1c385"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 09:35:06 crc kubenswrapper[4958]: I1008 09:35:06.847645 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://94430232a247272208346ced4af30d22095fbb6942a04400ce4d86025bf1c385" gracePeriod=600 Oct 08 09:35:07 crc kubenswrapper[4958]: I1008 09:35:07.486192 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="94430232a247272208346ced4af30d22095fbb6942a04400ce4d86025bf1c385" exitCode=0 Oct 08 09:35:07 crc kubenswrapper[4958]: I1008 09:35:07.486319 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"94430232a247272208346ced4af30d22095fbb6942a04400ce4d86025bf1c385"} Oct 08 09:35:07 crc kubenswrapper[4958]: I1008 09:35:07.487004 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerStarted","Data":"9551517b3eaf1be688c4671d2dad51c612ced5eb4dc76147f600ff3925793bfe"} Oct 08 09:35:07 crc kubenswrapper[4958]: I1008 09:35:07.487039 4958 scope.go:117] "RemoveContainer" containerID="67966bc53080dd131f53098e8bf4e26215c1fa6bbaa42863a5b7746c28165607" Oct 08 09:35:50 crc kubenswrapper[4958]: I1008 09:35:50.780342 4958 scope.go:117] "RemoveContainer" 
containerID="44f70f66e3737d42cc2d83513195e2272a54f57e68bf17763abfb03e2bcc4c94" Oct 08 09:35:59 crc kubenswrapper[4958]: I1008 09:35:59.036216 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2f8j5"] Oct 08 09:35:59 crc kubenswrapper[4958]: E1008 09:35:59.037683 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00e885d-c3db-4ed9-96ea-ff47c7f5f496" containerName="container-00" Oct 08 09:35:59 crc kubenswrapper[4958]: I1008 09:35:59.037705 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00e885d-c3db-4ed9-96ea-ff47c7f5f496" containerName="container-00" Oct 08 09:35:59 crc kubenswrapper[4958]: I1008 09:35:59.038120 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d00e885d-c3db-4ed9-96ea-ff47c7f5f496" containerName="container-00" Oct 08 09:35:59 crc kubenswrapper[4958]: I1008 09:35:59.041351 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:35:59 crc kubenswrapper[4958]: I1008 09:35:59.049331 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2f8j5"] Oct 08 09:35:59 crc kubenswrapper[4958]: I1008 09:35:59.162319 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e2035c-5dab-4343-8364-c615ba8cec39-catalog-content\") pod \"community-operators-2f8j5\" (UID: \"88e2035c-5dab-4343-8364-c615ba8cec39\") " pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:35:59 crc kubenswrapper[4958]: I1008 09:35:59.162382 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s48h9\" (UniqueName: \"kubernetes.io/projected/88e2035c-5dab-4343-8364-c615ba8cec39-kube-api-access-s48h9\") pod \"community-operators-2f8j5\" (UID: \"88e2035c-5dab-4343-8364-c615ba8cec39\") " 
pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:35:59 crc kubenswrapper[4958]: I1008 09:35:59.162517 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e2035c-5dab-4343-8364-c615ba8cec39-utilities\") pod \"community-operators-2f8j5\" (UID: \"88e2035c-5dab-4343-8364-c615ba8cec39\") " pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:35:59 crc kubenswrapper[4958]: I1008 09:35:59.264576 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e2035c-5dab-4343-8364-c615ba8cec39-utilities\") pod \"community-operators-2f8j5\" (UID: \"88e2035c-5dab-4343-8364-c615ba8cec39\") " pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:35:59 crc kubenswrapper[4958]: I1008 09:35:59.265074 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e2035c-5dab-4343-8364-c615ba8cec39-catalog-content\") pod \"community-operators-2f8j5\" (UID: \"88e2035c-5dab-4343-8364-c615ba8cec39\") " pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:35:59 crc kubenswrapper[4958]: I1008 09:35:59.265216 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s48h9\" (UniqueName: \"kubernetes.io/projected/88e2035c-5dab-4343-8364-c615ba8cec39-kube-api-access-s48h9\") pod \"community-operators-2f8j5\" (UID: \"88e2035c-5dab-4343-8364-c615ba8cec39\") " pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:35:59 crc kubenswrapper[4958]: I1008 09:35:59.265630 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e2035c-5dab-4343-8364-c615ba8cec39-utilities\") pod \"community-operators-2f8j5\" (UID: \"88e2035c-5dab-4343-8364-c615ba8cec39\") " 
pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:35:59 crc kubenswrapper[4958]: I1008 09:35:59.265715 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e2035c-5dab-4343-8364-c615ba8cec39-catalog-content\") pod \"community-operators-2f8j5\" (UID: \"88e2035c-5dab-4343-8364-c615ba8cec39\") " pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:35:59 crc kubenswrapper[4958]: I1008 09:35:59.289724 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s48h9\" (UniqueName: \"kubernetes.io/projected/88e2035c-5dab-4343-8364-c615ba8cec39-kube-api-access-s48h9\") pod \"community-operators-2f8j5\" (UID: \"88e2035c-5dab-4343-8364-c615ba8cec39\") " pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:35:59 crc kubenswrapper[4958]: I1008 09:35:59.377685 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:36:00 crc kubenswrapper[4958]: I1008 09:36:00.068838 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2f8j5"] Oct 08 09:36:00 crc kubenswrapper[4958]: I1008 09:36:00.203095 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f8j5" event={"ID":"88e2035c-5dab-4343-8364-c615ba8cec39","Type":"ContainerStarted","Data":"9e4074b1a78b6ee4d145ccde8bb7b1f8f5c7b6204edde87b872fe0cbbc3de19a"} Oct 08 09:36:01 crc kubenswrapper[4958]: I1008 09:36:01.221713 4958 generic.go:334] "Generic (PLEG): container finished" podID="88e2035c-5dab-4343-8364-c615ba8cec39" containerID="f27d0bd7c45182f95189c6b34479a45661d6de975b901396373238f63a8b0996" exitCode=0 Oct 08 09:36:01 crc kubenswrapper[4958]: I1008 09:36:01.222080 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f8j5" 
event={"ID":"88e2035c-5dab-4343-8364-c615ba8cec39","Type":"ContainerDied","Data":"f27d0bd7c45182f95189c6b34479a45661d6de975b901396373238f63a8b0996"} Oct 08 09:36:01 crc kubenswrapper[4958]: I1008 09:36:01.225686 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 08 09:36:03 crc kubenswrapper[4958]: I1008 09:36:03.257152 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f8j5" event={"ID":"88e2035c-5dab-4343-8364-c615ba8cec39","Type":"ContainerStarted","Data":"9d682ab2f19bce35cc67a4a9c207e00b82e47e53c05bd7e21c9a82b410379f35"} Oct 08 09:36:04 crc kubenswrapper[4958]: I1008 09:36:04.275992 4958 generic.go:334] "Generic (PLEG): container finished" podID="88e2035c-5dab-4343-8364-c615ba8cec39" containerID="9d682ab2f19bce35cc67a4a9c207e00b82e47e53c05bd7e21c9a82b410379f35" exitCode=0 Oct 08 09:36:04 crc kubenswrapper[4958]: I1008 09:36:04.276054 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f8j5" event={"ID":"88e2035c-5dab-4343-8364-c615ba8cec39","Type":"ContainerDied","Data":"9d682ab2f19bce35cc67a4a9c207e00b82e47e53c05bd7e21c9a82b410379f35"} Oct 08 09:36:05 crc kubenswrapper[4958]: I1008 09:36:05.294377 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f8j5" event={"ID":"88e2035c-5dab-4343-8364-c615ba8cec39","Type":"ContainerStarted","Data":"e1d06df207cac63d558ba63b367d9813c947309788db36e76ebf707b12899b3f"} Oct 08 09:36:05 crc kubenswrapper[4958]: I1008 09:36:05.328399 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2f8j5" podStartSLOduration=3.845922336 podStartE2EDuration="7.328374288s" podCreationTimestamp="2025-10-08 09:35:58 +0000 UTC" firstStartedPulling="2025-10-08 09:36:01.225193576 +0000 UTC m=+10904.354886177" lastFinishedPulling="2025-10-08 09:36:04.707645528 +0000 UTC 
m=+10907.837338129" observedRunningTime="2025-10-08 09:36:05.322873318 +0000 UTC m=+10908.452565929" watchObservedRunningTime="2025-10-08 09:36:05.328374288 +0000 UTC m=+10908.458066899" Oct 08 09:36:09 crc kubenswrapper[4958]: I1008 09:36:09.378421 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:36:09 crc kubenswrapper[4958]: I1008 09:36:09.379282 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:36:09 crc kubenswrapper[4958]: I1008 09:36:09.460633 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:36:10 crc kubenswrapper[4958]: I1008 09:36:10.460617 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:36:10 crc kubenswrapper[4958]: I1008 09:36:10.538277 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2f8j5"] Oct 08 09:36:12 crc kubenswrapper[4958]: I1008 09:36:12.399276 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2f8j5" podUID="88e2035c-5dab-4343-8364-c615ba8cec39" containerName="registry-server" containerID="cri-o://e1d06df207cac63d558ba63b367d9813c947309788db36e76ebf707b12899b3f" gracePeriod=2 Oct 08 09:36:12 crc kubenswrapper[4958]: I1008 09:36:12.918863 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.044271 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e2035c-5dab-4343-8364-c615ba8cec39-utilities\") pod \"88e2035c-5dab-4343-8364-c615ba8cec39\" (UID: \"88e2035c-5dab-4343-8364-c615ba8cec39\") " Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.044385 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e2035c-5dab-4343-8364-c615ba8cec39-catalog-content\") pod \"88e2035c-5dab-4343-8364-c615ba8cec39\" (UID: \"88e2035c-5dab-4343-8364-c615ba8cec39\") " Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.044721 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s48h9\" (UniqueName: \"kubernetes.io/projected/88e2035c-5dab-4343-8364-c615ba8cec39-kube-api-access-s48h9\") pod \"88e2035c-5dab-4343-8364-c615ba8cec39\" (UID: \"88e2035c-5dab-4343-8364-c615ba8cec39\") " Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.045244 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e2035c-5dab-4343-8364-c615ba8cec39-utilities" (OuterVolumeSpecName: "utilities") pod "88e2035c-5dab-4343-8364-c615ba8cec39" (UID: "88e2035c-5dab-4343-8364-c615ba8cec39"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.045439 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e2035c-5dab-4343-8364-c615ba8cec39-utilities\") on node \"crc\" DevicePath \"\"" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.061393 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e2035c-5dab-4343-8364-c615ba8cec39-kube-api-access-s48h9" (OuterVolumeSpecName: "kube-api-access-s48h9") pod "88e2035c-5dab-4343-8364-c615ba8cec39" (UID: "88e2035c-5dab-4343-8364-c615ba8cec39"). InnerVolumeSpecName "kube-api-access-s48h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.106013 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e2035c-5dab-4343-8364-c615ba8cec39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88e2035c-5dab-4343-8364-c615ba8cec39" (UID: "88e2035c-5dab-4343-8364-c615ba8cec39"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.147833 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e2035c-5dab-4343-8364-c615ba8cec39-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.147881 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s48h9\" (UniqueName: \"kubernetes.io/projected/88e2035c-5dab-4343-8364-c615ba8cec39-kube-api-access-s48h9\") on node \"crc\" DevicePath \"\"" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.413053 4958 generic.go:334] "Generic (PLEG): container finished" podID="88e2035c-5dab-4343-8364-c615ba8cec39" containerID="e1d06df207cac63d558ba63b367d9813c947309788db36e76ebf707b12899b3f" exitCode=0 Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.413115 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f8j5" event={"ID":"88e2035c-5dab-4343-8364-c615ba8cec39","Type":"ContainerDied","Data":"e1d06df207cac63d558ba63b367d9813c947309788db36e76ebf707b12899b3f"} Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.413159 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f8j5" event={"ID":"88e2035c-5dab-4343-8364-c615ba8cec39","Type":"ContainerDied","Data":"9e4074b1a78b6ee4d145ccde8bb7b1f8f5c7b6204edde87b872fe0cbbc3de19a"} Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.413190 4958 scope.go:117] "RemoveContainer" containerID="e1d06df207cac63d558ba63b367d9813c947309788db36e76ebf707b12899b3f" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.413319 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2f8j5" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.457802 4958 scope.go:117] "RemoveContainer" containerID="9d682ab2f19bce35cc67a4a9c207e00b82e47e53c05bd7e21c9a82b410379f35" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.485545 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2f8j5"] Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.495430 4958 scope.go:117] "RemoveContainer" containerID="f27d0bd7c45182f95189c6b34479a45661d6de975b901396373238f63a8b0996" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.496299 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2f8j5"] Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.592466 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88e2035c-5dab-4343-8364-c615ba8cec39" path="/var/lib/kubelet/pods/88e2035c-5dab-4343-8364-c615ba8cec39/volumes" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.654733 4958 scope.go:117] "RemoveContainer" containerID="e1d06df207cac63d558ba63b367d9813c947309788db36e76ebf707b12899b3f" Oct 08 09:36:13 crc kubenswrapper[4958]: E1008 09:36:13.655701 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1d06df207cac63d558ba63b367d9813c947309788db36e76ebf707b12899b3f\": container with ID starting with e1d06df207cac63d558ba63b367d9813c947309788db36e76ebf707b12899b3f not found: ID does not exist" containerID="e1d06df207cac63d558ba63b367d9813c947309788db36e76ebf707b12899b3f" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.655744 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1d06df207cac63d558ba63b367d9813c947309788db36e76ebf707b12899b3f"} err="failed to get container status 
\"e1d06df207cac63d558ba63b367d9813c947309788db36e76ebf707b12899b3f\": rpc error: code = NotFound desc = could not find container \"e1d06df207cac63d558ba63b367d9813c947309788db36e76ebf707b12899b3f\": container with ID starting with e1d06df207cac63d558ba63b367d9813c947309788db36e76ebf707b12899b3f not found: ID does not exist" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.655769 4958 scope.go:117] "RemoveContainer" containerID="9d682ab2f19bce35cc67a4a9c207e00b82e47e53c05bd7e21c9a82b410379f35" Oct 08 09:36:13 crc kubenswrapper[4958]: E1008 09:36:13.656248 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d682ab2f19bce35cc67a4a9c207e00b82e47e53c05bd7e21c9a82b410379f35\": container with ID starting with 9d682ab2f19bce35cc67a4a9c207e00b82e47e53c05bd7e21c9a82b410379f35 not found: ID does not exist" containerID="9d682ab2f19bce35cc67a4a9c207e00b82e47e53c05bd7e21c9a82b410379f35" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.656287 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d682ab2f19bce35cc67a4a9c207e00b82e47e53c05bd7e21c9a82b410379f35"} err="failed to get container status \"9d682ab2f19bce35cc67a4a9c207e00b82e47e53c05bd7e21c9a82b410379f35\": rpc error: code = NotFound desc = could not find container \"9d682ab2f19bce35cc67a4a9c207e00b82e47e53c05bd7e21c9a82b410379f35\": container with ID starting with 9d682ab2f19bce35cc67a4a9c207e00b82e47e53c05bd7e21c9a82b410379f35 not found: ID does not exist" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.656315 4958 scope.go:117] "RemoveContainer" containerID="f27d0bd7c45182f95189c6b34479a45661d6de975b901396373238f63a8b0996" Oct 08 09:36:13 crc kubenswrapper[4958]: E1008 09:36:13.656597 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f27d0bd7c45182f95189c6b34479a45661d6de975b901396373238f63a8b0996\": container with ID starting with f27d0bd7c45182f95189c6b34479a45661d6de975b901396373238f63a8b0996 not found: ID does not exist" containerID="f27d0bd7c45182f95189c6b34479a45661d6de975b901396373238f63a8b0996" Oct 08 09:36:13 crc kubenswrapper[4958]: I1008 09:36:13.656647 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27d0bd7c45182f95189c6b34479a45661d6de975b901396373238f63a8b0996"} err="failed to get container status \"f27d0bd7c45182f95189c6b34479a45661d6de975b901396373238f63a8b0996\": rpc error: code = NotFound desc = could not find container \"f27d0bd7c45182f95189c6b34479a45661d6de975b901396373238f63a8b0996\": container with ID starting with f27d0bd7c45182f95189c6b34479a45661d6de975b901396373238f63a8b0996 not found: ID does not exist" Oct 08 09:36:37 crc kubenswrapper[4958]: I1008 09:36:37.720335 4958 generic.go:334] "Generic (PLEG): container finished" podID="c79bde84-3cd1-43fe-86bb-2de25276e513" containerID="edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790" exitCode=0 Oct 08 09:36:37 crc kubenswrapper[4958]: I1008 09:36:37.720439 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzk4j/must-gather-st8j9" event={"ID":"c79bde84-3cd1-43fe-86bb-2de25276e513","Type":"ContainerDied","Data":"edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790"} Oct 08 09:36:37 crc kubenswrapper[4958]: I1008 09:36:37.721358 4958 scope.go:117] "RemoveContainer" containerID="edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790" Oct 08 09:36:38 crc kubenswrapper[4958]: I1008 09:36:38.140190 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tzk4j_must-gather-st8j9_c79bde84-3cd1-43fe-86bb-2de25276e513/gather/0.log" Oct 08 09:36:49 crc kubenswrapper[4958]: I1008 09:36:49.752749 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-must-gather-tzk4j/must-gather-st8j9"] Oct 08 09:36:49 crc kubenswrapper[4958]: I1008 09:36:49.753603 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tzk4j/must-gather-st8j9" podUID="c79bde84-3cd1-43fe-86bb-2de25276e513" containerName="copy" containerID="cri-o://f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82" gracePeriod=2 Oct 08 09:36:49 crc kubenswrapper[4958]: I1008 09:36:49.763562 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tzk4j/must-gather-st8j9"] Oct 08 09:36:50 crc kubenswrapper[4958]: I1008 09:36:50.716082 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tzk4j_must-gather-st8j9_c79bde84-3cd1-43fe-86bb-2de25276e513/copy/0.log" Oct 08 09:36:50 crc kubenswrapper[4958]: I1008 09:36:50.718456 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzk4j/must-gather-st8j9" Oct 08 09:36:50 crc kubenswrapper[4958]: I1008 09:36:50.869206 4958 scope.go:117] "RemoveContainer" containerID="f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82" Oct 08 09:36:50 crc kubenswrapper[4958]: I1008 09:36:50.878098 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr2mm\" (UniqueName: \"kubernetes.io/projected/c79bde84-3cd1-43fe-86bb-2de25276e513-kube-api-access-jr2mm\") pod \"c79bde84-3cd1-43fe-86bb-2de25276e513\" (UID: \"c79bde84-3cd1-43fe-86bb-2de25276e513\") " Oct 08 09:36:50 crc kubenswrapper[4958]: I1008 09:36:50.878322 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c79bde84-3cd1-43fe-86bb-2de25276e513-must-gather-output\") pod \"c79bde84-3cd1-43fe-86bb-2de25276e513\" (UID: \"c79bde84-3cd1-43fe-86bb-2de25276e513\") " Oct 08 09:36:50 crc kubenswrapper[4958]: I1008 09:36:50.884017 4958 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79bde84-3cd1-43fe-86bb-2de25276e513-kube-api-access-jr2mm" (OuterVolumeSpecName: "kube-api-access-jr2mm") pod "c79bde84-3cd1-43fe-86bb-2de25276e513" (UID: "c79bde84-3cd1-43fe-86bb-2de25276e513"). InnerVolumeSpecName "kube-api-access-jr2mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 08 09:36:50 crc kubenswrapper[4958]: I1008 09:36:50.893818 4958 generic.go:334] "Generic (PLEG): container finished" podID="c79bde84-3cd1-43fe-86bb-2de25276e513" containerID="f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82" exitCode=143 Oct 08 09:36:50 crc kubenswrapper[4958]: I1008 09:36:50.893877 4958 scope.go:117] "RemoveContainer" containerID="f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82" Oct 08 09:36:50 crc kubenswrapper[4958]: I1008 09:36:50.893930 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzk4j/must-gather-st8j9" Oct 08 09:36:50 crc kubenswrapper[4958]: E1008 09:36:50.964715 4958 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_copy_must-gather-st8j9_openshift-must-gather-tzk4j_c79bde84-3cd1-43fe-86bb-2de25276e513_0 in pod sandbox 2c8a0df466c272e58a7b2a3f83ffa50b46926c3cddb295c1051c5ce9c86f92c2 from index: no such id: 'f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82'" containerID="f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82" Oct 08 09:36:50 crc kubenswrapper[4958]: I1008 09:36:50.964890 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82"} err="rpc error: code = Unknown desc = failed to delete container k8s_copy_must-gather-st8j9_openshift-must-gather-tzk4j_c79bde84-3cd1-43fe-86bb-2de25276e513_0 in pod sandbox 
2c8a0df466c272e58a7b2a3f83ffa50b46926c3cddb295c1051c5ce9c86f92c2 from index: no such id: 'f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82'" Oct 08 09:36:50 crc kubenswrapper[4958]: I1008 09:36:50.965037 4958 scope.go:117] "RemoveContainer" containerID="edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790" Oct 08 09:36:50 crc kubenswrapper[4958]: I1008 09:36:50.965224 4958 scope.go:117] "RemoveContainer" containerID="ad119b49c907458e9cdb42b8a72364567644f65d05d47a0736743297e0f9ce3e" Oct 08 09:36:50 crc kubenswrapper[4958]: I1008 09:36:50.980916 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr2mm\" (UniqueName: \"kubernetes.io/projected/c79bde84-3cd1-43fe-86bb-2de25276e513-kube-api-access-jr2mm\") on node \"crc\" DevicePath \"\"" Oct 08 09:36:50 crc kubenswrapper[4958]: I1008 09:36:50.985759 4958 scope.go:117] "RemoveContainer" containerID="edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790" Oct 08 09:36:51 crc kubenswrapper[4958]: I1008 09:36:51.134745 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c79bde84-3cd1-43fe-86bb-2de25276e513-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c79bde84-3cd1-43fe-86bb-2de25276e513" (UID: "c79bde84-3cd1-43fe-86bb-2de25276e513"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 08 09:36:51 crc kubenswrapper[4958]: E1008 09:36:51.163529 4958 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_gather_must-gather-st8j9_openshift-must-gather-tzk4j_c79bde84-3cd1-43fe-86bb-2de25276e513_0 in pod sandbox 2c8a0df466c272e58a7b2a3f83ffa50b46926c3cddb295c1051c5ce9c86f92c2: identifier is not a container" containerID="edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790" Oct 08 09:36:51 crc kubenswrapper[4958]: I1008 09:36:51.163576 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790"} err="rpc error: code = Unknown desc = failed to delete container k8s_gather_must-gather-st8j9_openshift-must-gather-tzk4j_c79bde84-3cd1-43fe-86bb-2de25276e513_0 in pod sandbox 2c8a0df466c272e58a7b2a3f83ffa50b46926c3cddb295c1051c5ce9c86f92c2: identifier is not a container" Oct 08 09:36:51 crc kubenswrapper[4958]: I1008 09:36:51.163970 4958 scope.go:117] "RemoveContainer" containerID="f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82" Oct 08 09:36:51 crc kubenswrapper[4958]: I1008 09:36:51.164234 4958 scope.go:117] "RemoveContainer" containerID="e184d43a8c09bde10bf38b0b061e332fddfc3f47a80570ffbd384095f8a277f4" Oct 08 09:36:51 crc kubenswrapper[4958]: E1008 09:36:51.164433 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82\": container with ID starting with f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82 not found: ID does not exist" containerID="f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82" Oct 08 09:36:51 crc kubenswrapper[4958]: I1008 09:36:51.164466 4958 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82"} err="failed to get container status \"f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82\": rpc error: code = NotFound desc = could not find container \"f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82\": container with ID starting with f4c00c31f1750afef6b4c10b7343304fa27c3aab0d3bffd6b4aa9d530d579e82 not found: ID does not exist" Oct 08 09:36:51 crc kubenswrapper[4958]: I1008 09:36:51.164482 4958 scope.go:117] "RemoveContainer" containerID="edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790" Oct 08 09:36:51 crc kubenswrapper[4958]: E1008 09:36:51.164836 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790\": container with ID starting with edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790 not found: ID does not exist" containerID="edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790" Oct 08 09:36:51 crc kubenswrapper[4958]: I1008 09:36:51.164913 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790"} err="failed to get container status \"edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790\": rpc error: code = NotFound desc = could not find container \"edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790\": container with ID starting with edcb99331970c9b882b52b3ee8ddf5379d5ff7836cfb642ae7dc254fd6a7c790 not found: ID does not exist" Oct 08 09:36:51 crc kubenswrapper[4958]: I1008 09:36:51.184041 4958 scope.go:117] "RemoveContainer" containerID="b9f29b8c51937a1c96f4ff047f4a482c9d0448a4e38c32e6c516ee32fd45df55" Oct 08 09:36:51 crc kubenswrapper[4958]: I1008 09:36:51.184532 4958 reconciler_common.go:293] "Volume detached for 
volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c79bde84-3cd1-43fe-86bb-2de25276e513-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 08 09:36:51 crc kubenswrapper[4958]: I1008 09:36:51.591330 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c79bde84-3cd1-43fe-86bb-2de25276e513" path="/var/lib/kubelet/pods/c79bde84-3cd1-43fe-86bb-2de25276e513/volumes" Oct 08 09:37:36 crc kubenswrapper[4958]: I1008 09:37:36.845636 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:37:36 crc kubenswrapper[4958]: I1008 09:37:36.846242 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:38:06 crc kubenswrapper[4958]: I1008 09:38:06.845262 4958 patch_prober.go:28] interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:38:06 crc kubenswrapper[4958]: I1008 09:38:06.845804 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:38:36 crc kubenswrapper[4958]: I1008 09:38:36.845070 4958 patch_prober.go:28] 
interesting pod/machine-config-daemon-qd84r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 08 09:38:36 crc kubenswrapper[4958]: I1008 09:38:36.846003 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 08 09:38:36 crc kubenswrapper[4958]: I1008 09:38:36.846101 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" Oct 08 09:38:36 crc kubenswrapper[4958]: I1008 09:38:36.847412 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9551517b3eaf1be688c4671d2dad51c612ced5eb4dc76147f600ff3925793bfe"} pod="openshift-machine-config-operator/machine-config-daemon-qd84r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 08 09:38:36 crc kubenswrapper[4958]: I1008 09:38:36.847527 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerName="machine-config-daemon" containerID="cri-o://9551517b3eaf1be688c4671d2dad51c612ced5eb4dc76147f600ff3925793bfe" gracePeriod=600 Oct 08 09:38:36 crc kubenswrapper[4958]: E1008 09:38:36.983256 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:38:37 crc kubenswrapper[4958]: I1008 09:38:37.329620 4958 generic.go:334] "Generic (PLEG): container finished" podID="c9e6284b-565d-4277-9ebf-62d3623b249b" containerID="9551517b3eaf1be688c4671d2dad51c612ced5eb4dc76147f600ff3925793bfe" exitCode=0 Oct 08 09:38:37 crc kubenswrapper[4958]: I1008 09:38:37.330266 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" event={"ID":"c9e6284b-565d-4277-9ebf-62d3623b249b","Type":"ContainerDied","Data":"9551517b3eaf1be688c4671d2dad51c612ced5eb4dc76147f600ff3925793bfe"} Oct 08 09:38:37 crc kubenswrapper[4958]: I1008 09:38:37.330402 4958 scope.go:117] "RemoveContainer" containerID="94430232a247272208346ced4af30d22095fbb6942a04400ce4d86025bf1c385" Oct 08 09:38:37 crc kubenswrapper[4958]: I1008 09:38:37.331396 4958 scope.go:117] "RemoveContainer" containerID="9551517b3eaf1be688c4671d2dad51c612ced5eb4dc76147f600ff3925793bfe" Oct 08 09:38:37 crc kubenswrapper[4958]: E1008 09:38:37.331904 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b" Oct 08 09:38:51 crc kubenswrapper[4958]: I1008 09:38:51.326433 4958 scope.go:117] "RemoveContainer" containerID="3d138dbf4b21be7785455b810c55ab3c47fd5a59d20e83018cd31f3db2aad32d" Oct 08 09:38:51 crc kubenswrapper[4958]: I1008 09:38:51.577518 4958 scope.go:117] "RemoveContainer" 
containerID="9551517b3eaf1be688c4671d2dad51c612ced5eb4dc76147f600ff3925793bfe" Oct 08 09:38:51 crc kubenswrapper[4958]: E1008 09:38:51.578373 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qd84r_openshift-machine-config-operator(c9e6284b-565d-4277-9ebf-62d3623b249b)\"" pod="openshift-machine-config-operator/machine-config-daemon-qd84r" podUID="c9e6284b-565d-4277-9ebf-62d3623b249b"